DPDK patches and discussions
* [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
@ 2015-10-23 13:51 Michal Jastrzebski
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
  To: dev

From: Michal Kobylinski  <michalx.kobylinski@intel.com>

The current DPDK LPM implementation for IPv4 and IPv6 limits the
number of next hops to 256, as the next hop ID is an 8-bit field.
The proposed extension increases the number of next hops for IPv4 to
2^24 and also allows 32-bit read/write operations.
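
For illustration, a 4-byte table entry is just wide enough to carry a
24-bit next hop alongside the control bits (the patch asserts
sizeof(entry) == 4 via RTE_BUILD_BUG_ON). The sketch below shows one
possible bit layout; the field names follow the patch, but the exact
bit widths are assumed here for illustration only:

  /* Hypothetical 32-bit tbl24 entry layout (1 + 1 + 6 + 24 bits).
   * When ext_entry is set, the wide field is reused as a tbl8 group
   * index rather than a next hop. */
  union rte_lpm_tbl24_entry_extend {
          struct {
                  uint32_t valid     :1;  /* entry holds a route */
                  uint32_t ext_entry :1;  /* points to a tbl8 group */
                  uint32_t depth     :6;  /* prefix length, 1..32 */
                  uint32_t next_hop  :24; /* up to 2^24 next hops */
          };
          uint32_t entry; /* whole entry, for 32-bit read/write */
  };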

This patchset requires an additional change to the rte_table library
to meet ABI compatibility requirements. A v2 will be sent next week.
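
For reference, a minimal usage sketch of the widened API as exercised
by the updated unit tests in patch 1 (table name, size and routes are
arbitrary; assumes <stdio.h>, <rte_lpm.h>, <rte_ip.h> and
<rte_debug.h>):

  struct rte_lpm_extend *lpm;
  uint32_t next_hop_add = 0x00ABCDEF; /* a next-hop ID above 255 */
  uint32_t next_hop_return = 0;

  lpm = rte_lpm_create("example", SOCKET_ID_ANY, 1024, 0);
  if (lpm == NULL)
          rte_panic("LPM creation failed\n");

  if (rte_lpm_add(lpm, IPv4(192, 168, 0, 0), 24, next_hop_add) == 0 &&
      rte_lpm_lookup_extend(lpm, IPv4(192, 168, 0, 1),
                  &next_hop_return) == 0)
          printf("next hop: 0x%x\n", next_hop_return); /* 0xabcdef */

  rte_lpm_free(lpm);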

Michal Kobylinski (3):
  lpm: increase number of next hops for lpm (ipv4)
  examples: update of apps using librte_lpm (ipv4)
  doc: update release 2.2 after changes in librte_lpm

 app/test/test_func_reentrancy.c      |   4 +-
 app/test/test_lpm.c                  | 227 ++++-----
 doc/guides/rel_notes/release_2_2.rst |   2 +
 examples/ip_fragmentation/main.c     |  10 +-
 examples/ip_reassembly/main.c        |   9 +-
 examples/l3fwd-power/main.c          |   2 +-
 examples/l3fwd-vf/main.c             |   2 +-
 examples/l3fwd/main.c                |  16 +-
 examples/load_balancer/runtime.c     |   3 +-
 lib/librte_lpm/rte_lpm.c             | 887 ++++++++++++++++++++++++++++++++++-
 lib/librte_lpm/rte_lpm.h             | 295 +++++++++++-
 lib/librte_lpm/rte_lpm_version.map   |  59 ++-
 lib/librte_table/rte_table_lpm.c     |  10 +-
 13 files changed, 1345 insertions(+), 181 deletions(-)

-- 
1.9.1


* [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 13:51 [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
@ 2015-10-23 13:51 ` Michal Jastrzebski
  2015-10-23 14:38   ` Bruce Richardson
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 2/3] examples: update of apps using librte_lpm (ipv4) Michal Jastrzebski
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 24+ messages in thread
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
  To: dev

From: Michal Kobylinski <michalx.kobylinski@intel.com>

Main implementation: changes to the lpm library introducing the new
data types. Additionally, this patch implements the changes required
by the test application. ABI versioning requirements are met only for
the lpm library; the changes for the table library will be sent in v2
of this patch-set.
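
The ABI is preserved using the standard rte_compat symbol-versioning
pattern, applied to every public function in the diff below (shown
here for rte_lpm_free): the 2.0 symbol keeps the old struct layout,
while newly built applications bind to the 2.2 default.

  void rte_lpm_free_v20(struct rte_lpm *lpm);        /* old ABI */
  void rte_lpm_free_v22(struct rte_lpm_extend *lpm); /* new ABI */

  VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
  BIND_DEFAULT_SYMBOL(rte_lpm_free, _v22, 2.2);
  MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm_extend *lpm),
                  rte_lpm_free_v22);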
 
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
---
 app/test/test_func_reentrancy.c    |   4 +-
 app/test/test_lpm.c                | 227 +++++-----
 lib/librte_lpm/rte_lpm.c           | 887 ++++++++++++++++++++++++++++++++++++-
 lib/librte_lpm/rte_lpm.h           | 295 +++++++++++-
 lib/librte_lpm/rte_lpm_version.map |  59 ++-
 lib/librte_table/rte_table_lpm.c   |  10 +-
 6 files changed, 1322 insertions(+), 160 deletions(-)

diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
index dbecc52..331ab29 100644
--- a/app/test/test_func_reentrancy.c
+++ b/app/test/test_func_reentrancy.c
@@ -343,7 +343,7 @@ static void
 lpm_clean(unsigned lcore_id)
 {
 	char lpm_name[MAX_STRING_SIZE];
-	struct rte_lpm *lpm;
+	struct rte_lpm_extend *lpm;
 	int i;
 
 	for (i = 0; i < MAX_LPM_ITER_TIMES; i++) {
@@ -358,7 +358,7 @@ static int
 lpm_create_free(__attribute__((unused)) void *arg)
 {
 	unsigned lcore_self = rte_lcore_id();
-	struct rte_lpm *lpm;
+	struct rte_lpm_extend *lpm;
 	char lpm_name[MAX_STRING_SIZE];
 	int i;
 
diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 8b4ded9..31f54d0 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -114,7 +114,7 @@ rte_lpm_test tests[] = {
 int32_t
 test0(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 
 	/* rte_lpm_create: lpm name == NULL */
 	lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -139,7 +139,7 @@ test0(void)
 int32_t
 test1(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	int32_t i;
 
 	/* rte_lpm_free: Free NULL */
@@ -163,7 +163,7 @@ test1(void)
 int32_t
 test2(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
 	TEST_LPM_ASSERT(lpm != NULL);
@@ -179,7 +179,7 @@ test2(void)
 int32_t
 test3(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
 	uint8_t depth = 24, next_hop = 100;
 	int32_t status = 0;
@@ -212,7 +212,7 @@ test3(void)
 int32_t
 test4(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
 	uint8_t depth = 24;
 	int32_t status = 0;
@@ -252,7 +252,7 @@ test5(void)
 	int32_t status = 0;
 
 	/* rte_lpm_lookup: lpm == NULL */
-	status = rte_lpm_lookup(NULL, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(NULL, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status < 0);
 
 	/* Create valid lpm to use in rest of test. */
@@ -260,7 +260,7 @@ test5(void)
 	TEST_LPM_ASSERT(lpm != NULL);
 
 	/* rte_lpm_lookup: depth < 1 */
-	status = rte_lpm_lookup(lpm, ip, NULL);
+	status = rte_lpm_lookup_extend(lpm, ip, NULL);
 	TEST_LPM_ASSERT(status < 0);
 
 	rte_lpm_free(lpm);
@@ -276,9 +276,10 @@ test5(void)
 int32_t
 test6(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 24;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -287,13 +288,13 @@ test6(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -309,10 +310,11 @@ int32_t
 test7(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
-	struct rte_lpm *lpm = NULL;
+	uint32_t hop[4];
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 32;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -321,20 +323,20 @@ test7(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ipx4 = _mm_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
-	rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+	rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
 	TEST_LPM_ASSERT(hop[0] == next_hop_add);
-	TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
-	TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+	TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+	TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
 	TEST_LPM_ASSERT(hop[3] == next_hop_add);
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -355,10 +357,11 @@ int32_t
 test8(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
-	struct rte_lpm *lpm = NULL;
+	uint32_t hop[4];
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -373,18 +376,18 @@ test8(void)
 		TEST_LPM_ASSERT(status == 0);
 
 		/* Check IP in first half of tbl24 which should be empty. */
-		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip1, &next_hop_return);
 		TEST_LPM_ASSERT(status == -ENOENT);
 
-		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip2, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
 			(next_hop_return == next_hop_add));
 
 		ipx4 = _mm_set_epi32(ip2, ip1, ip2, ip1);
-		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
-		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+		rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
 		TEST_LPM_ASSERT(hop[1] == next_hop_add);
-		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
 		TEST_LPM_ASSERT(hop[3] == next_hop_add);
 	}
 
@@ -395,7 +398,7 @@ test8(void)
 		status = rte_lpm_delete(lpm, ip2, depth);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip2, &next_hop_return);
 
 		if (depth != 1) {
 			TEST_LPM_ASSERT((status == 0) &&
@@ -405,20 +408,20 @@ test8(void)
 			TEST_LPM_ASSERT(status == -ENOENT);
 		}
 
-		status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip1, &next_hop_return);
 		TEST_LPM_ASSERT(status == -ENOENT);
 
 		ipx4 = _mm_set_epi32(ip1, ip1, ip2, ip2);
-		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+		rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
 		if (depth != 1) {
 			TEST_LPM_ASSERT(hop[0] == next_hop_add);
 			TEST_LPM_ASSERT(hop[1] == next_hop_add);
 		} else {
-			TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
-			TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
+			TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
+			TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
 		}
-		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
-		TEST_LPM_ASSERT(hop[3] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
 	}
 
 	rte_lpm_free(lpm);
@@ -436,9 +439,10 @@ test8(void)
 int32_t
 test9(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip, ip_1, ip_2;
-	uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+	uint8_t depth, depth_1, depth_2;
+	uint32_t next_hop_add, next_hop_add_1,
 		next_hop_add_2, next_hop_return;
 	int32_t status = 0;
 
@@ -453,13 +457,13 @@ test9(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -472,7 +476,7 @@ test9(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	depth = 24;
@@ -481,7 +485,7 @@ test9(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	depth = 24;
@@ -494,7 +498,7 @@ test9(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -508,7 +512,7 @@ test9(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ip = IPv4(128, 0, 0, 5);
@@ -518,26 +522,26 @@ test9(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	ip = IPv4(128, 0, 0, 0);
 	depth = 32;
 	next_hop_add = 100;
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -557,25 +561,25 @@ test9(void)
 	status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_1, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
 
 	status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_2, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
 
 	status = rte_lpm_delete(lpm, ip_2, depth_2);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_2, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
 
 	status = rte_lpm_delete(lpm, ip_1, depth_1);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_1, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -600,9 +604,10 @@ int32_t
 test10(void)
 {
 
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	/* Add rule that covers a TBL24 range previously invalid & lookup
@@ -617,13 +622,13 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -635,7 +640,7 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
@@ -660,13 +665,13 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ip = IPv4(128, 0, 0, 0);
 	next_hop_add = 100;
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ip = IPv4(128, 0, 0, 0);
@@ -675,7 +680,7 @@ test10(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	ip = IPv4(128, 0, 0, 10);
@@ -684,7 +689,7 @@ test10(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -699,7 +704,7 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	next_hop_add = 101;
@@ -707,13 +712,13 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -728,7 +733,7 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	next_hop_add = 101;
@@ -736,13 +741,13 @@ test10(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -755,7 +760,7 @@ test10(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status < 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_delete_all(lpm);
@@ -768,7 +773,7 @@ test10(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status < 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -786,9 +791,10 @@ int32_t
 test11(void)
 {
 
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -808,13 +814,13 @@ test11(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ip = IPv4(128, 0, 0, 0);
 	next_hop_add = 100;
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
 
 	ip = IPv4(128, 0, 0, 0);
@@ -823,7 +829,7 @@ test11(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	ip = IPv4(128, 0, 0, 10);
@@ -832,7 +838,7 @@ test11(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -851,10 +857,11 @@ int32_t
 test12(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
-	struct rte_lpm *lpm = NULL;
+	uint32_t hop[4];
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip, i;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -868,21 +875,21 @@ test12(void)
 		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
 				(next_hop_return == next_hop_add));
 
 		ipx4 = _mm_set_epi32(ip, ip + 1, ip, ip - 1);
-		rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
-		TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+		rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
 		TEST_LPM_ASSERT(hop[1] == next_hop_add);
-		TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
 		TEST_LPM_ASSERT(hop[3] == next_hop_add);
 
 		status = rte_lpm_delete(lpm, ip, depth);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT(status == -ENOENT);
 	}
 
@@ -902,9 +909,10 @@ test12(void)
 int32_t
 test13(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip, i;
-	uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add_1, next_hop_add_2, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -917,7 +925,7 @@ test13(void)
 	status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
 
 	depth = 32;
@@ -927,14 +935,14 @@ test13(void)
 		status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
 				(next_hop_return == next_hop_add_2));
 
 		status = rte_lpm_delete(lpm, ip, depth);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
 				(next_hop_return == next_hop_add_1));
 	}
@@ -944,7 +952,7 @@ test13(void)
 	status = rte_lpm_delete(lpm, ip, depth);
 	TEST_LPM_ASSERT(status == 0);
 
-	status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 	TEST_LPM_ASSERT(status == -ENOENT);
 
 	rte_lpm_free(lpm);
@@ -964,9 +972,10 @@ test14(void)
 	/* We only use depth = 32 in the loop below so we must make sure
 	 * that we have enough storage for all rules at that depth. */
 
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	/* Add enough space for 256 rules for every depth */
@@ -982,7 +991,7 @@ test14(void)
 		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
 		TEST_LPM_ASSERT(status == 0);
 
-		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+		status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
 				(next_hop_return == next_hop_add));
 	}
@@ -1011,7 +1020,7 @@ test14(void)
 int32_t
 test15(void)
 {
-	struct rte_lpm *lpm = NULL, *result = NULL;
+	struct rte_lpm_extend *lpm = NULL, *result = NULL;
 
 	/* Create lpm  */
 	lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, 256 * 32, 0);
@@ -1040,7 +1049,7 @@ int32_t
 test16(void)
 {
 	uint32_t ip;
-	struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
+	struct rte_lpm_extend *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
 			256 * 32, 0);
 
 	/* ip loops through all possibilities for top 24 bits of address */
@@ -1071,17 +1080,17 @@ test16(void)
 int32_t
 test17(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
 	const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
 	const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
 	const uint8_t d_ip_10_32 = 32,
 			d_ip_10_24 = 24,
 			d_ip_20_25 = 25;
-	const uint8_t next_hop_ip_10_32 = 100,
+	const uint32_t next_hop_ip_10_32 = 100,
 			next_hop_ip_10_24 = 105,
 			next_hop_ip_20_25 = 111;
-	uint8_t next_hop_return = 0;
+	uint32_t next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -1091,7 +1100,7 @@ test17(void)
 			next_hop_ip_10_32)) < 0)
 		return -1;
 
-	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_10_32, &next_hop_return);
 	uint8_t test_hop_10_32 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
@@ -1100,7 +1109,7 @@ test17(void)
 			next_hop_ip_10_24)) < 0)
 			return -1;
 
-	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_10_24, &next_hop_return);
 	uint8_t test_hop_10_24 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
@@ -1109,7 +1118,7 @@ test17(void)
 			next_hop_ip_20_25)) < 0)
 		return -1;
 
-	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_20_25, &next_hop_return);
 	uint8_t test_hop_20_25 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
@@ -1124,11 +1133,11 @@ test17(void)
 		return -1;
 	}
 
-	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_10_32, &next_hop_return);
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
 
-	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+	status = rte_lpm_lookup_extend(lpm, ip_10_24, &next_hop_return);
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
 
@@ -1172,10 +1181,10 @@ print_route_distribution(const struct route_rule *table, uint32_t n)
 int32_t
 perf_test(void)
 {
-	struct rte_lpm *lpm = NULL;
+	struct rte_lpm_extend *lpm = NULL;
 	uint64_t begin, total_time, lpm_used_entries = 0;
 	unsigned i, j;
-	uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+	uint32_t next_hop_add = 0xAA, next_hop_return = 0;
 	int status = 0;
 	uint64_t cache_line_counter = 0;
 	int64_t count = 0;
@@ -1236,7 +1245,7 @@ perf_test(void)
 		begin = rte_rdtsc();
 
 		for (j = 0; j < BATCH_SIZE; j ++) {
-			if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
+			if (rte_lpm_lookup_extend(lpm, ip_batch[j], &next_hop_return) != 0)
 				count++;
 		}
 
@@ -1252,7 +1261,7 @@ perf_test(void)
 	count = 0;
 	for (i = 0; i < ITERATIONS; i ++) {
 		static uint32_t ip_batch[BATCH_SIZE];
-		uint16_t next_hops[BULK_SIZE];
+		uint32_t next_hops[BULK_SIZE];
 
 		/* Create array of random IP addresses */
 		for (j = 0; j < BATCH_SIZE; j ++)
@@ -1262,9 +1271,9 @@ perf_test(void)
 		begin = rte_rdtsc();
 		for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
 			unsigned k;
-			rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
+			rte_lpm_lookup_bulk_func_extend(lpm, &ip_batch[j], next_hops, BULK_SIZE);
 			for (k = 0; k < BULK_SIZE; k++)
-				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
+				if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)))
 					count++;
 		}
 
@@ -1279,7 +1288,7 @@ perf_test(void)
 	count = 0;
 	for (i = 0; i < ITERATIONS; i++) {
 		static uint32_t ip_batch[BATCH_SIZE];
-		uint16_t next_hops[4];
+		uint32_t next_hops[4];
 
 		/* Create array of random IP addresses */
 		for (j = 0; j < BATCH_SIZE; j++)
@@ -1293,9 +1302,9 @@ perf_test(void)
 
 			ipx4 = _mm_loadu_si128((__m128i *)(ip_batch + j));
 			ipx4 = *(__m128i *)(ip_batch + j);
-			rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT16_MAX);
+			rte_lpm_lookupx4_extend(lpm, ipx4, next_hops, UINT32_MAX);
 			for (k = 0; k < RTE_DIM(next_hops); k++)
-				if (unlikely(next_hops[k] == UINT16_MAX))
+				if (unlikely(next_hops[k] == UINT32_MAX))
 					count++;
 		}
 
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..58b7fcc 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -120,7 +120,7 @@ depth_to_range(uint8_t depth)
  * Find an existing lpm table and return a pointer to it.
  */
 struct rte_lpm *
-rte_lpm_find_existing(const char *name)
+rte_lpm_find_existing_v20(const char *name)
 {
 	struct rte_lpm *l = NULL;
 	struct rte_tailq_entry *te;
@@ -143,12 +143,42 @@ rte_lpm_find_existing(const char *name)
 
 	return l;
 }
+VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
+
+struct rte_lpm_extend *
+rte_lpm_find_existing_v22(const char *name)
+{
+	struct rte_lpm_extend *l = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_lpm_list *lpm_list;
+
+	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+	TAILQ_FOREACH(te, lpm_list, next) {
+		l = (struct rte_lpm_extend *) te->data;
+		if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
+			break;
+	}
+	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	if (te == NULL) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	return l;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v22, 2.2);
+MAP_STATIC_SYMBOL(struct rte_lpm_extend *
+		rte_lpm_find_existing(const char *name), rte_lpm_find_existing_v22);
 
 /*
  * Allocates memory for LPM object
  */
+
 struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules,
+rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
 		__rte_unused int flags)
 {
 	char mem_name[RTE_LPM_NAMESIZE];
@@ -213,12 +243,117 @@ exit:
 
 	return lpm;
 }
+VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
+
+struct rte_lpm_extend *
+rte_lpm_create_v22(const char *name, int socket_id, int max_rules,
+		__rte_unused int flags)
+{
+	char mem_name[RTE_LPM_NAMESIZE];
+	struct rte_lpm_extend *lpm = NULL;
+	struct rte_tailq_entry *te;
+	uint32_t mem_size;
+	struct rte_lpm_list *lpm_list;
+
+	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+	RTE_BUILD_BUG_ON(sizeof(union rte_lpm_tbl24_entry_extend) != 4);
+	RTE_BUILD_BUG_ON(sizeof(union rte_lpm_tbl8_entry_extend) != 4);
+
+	/* Check user arguments. */
+	if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
+
+	/* Determine the amount of memory to allocate. */
+	mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* guarantee there's no existing */
+	TAILQ_FOREACH(te, lpm_list, next) {
+		lpm = (struct rte_lpm_extend *) te->data;
+		if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
+			break;
+	}
+	if (te != NULL)
+		goto exit;
+
+	/* allocate tailq entry */
+	te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
+		goto exit;
+	}
+
+	/* Allocate memory to store the LPM data structures. */
+	lpm = (struct rte_lpm_extend *)rte_zmalloc_socket(mem_name, mem_size,
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (lpm == NULL) {
+		RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
+		rte_free(te);
+		goto exit;
+	}
+
+	/* Save user arguments. */
+	lpm->max_rules = max_rules;
+	snprintf(lpm->name, sizeof(lpm->name), "%s", name);
+
+	te->data = (void *) lpm;
+
+	TAILQ_INSERT_TAIL(lpm_list, te, next);
+
+exit:
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	return lpm;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_create, _v22, 2.2);
+MAP_STATIC_SYMBOL(struct rte_lpm_extend *
+		rte_lpm_create(const char *name, int socket_id, int max_rules,
+				__rte_unused int flags), rte_lpm_create_v22);
 
 /*
  * Deallocates memory for given LPM table.
  */
 void
-rte_lpm_free(struct rte_lpm *lpm)
+rte_lpm_free_v20(struct rte_lpm *lpm)
+{
+	struct rte_lpm_list *lpm_list;
+	struct rte_tailq_entry *te;
+
+	/* Check user arguments. */
+	if (lpm == NULL)
+		return;
+
+	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* find our tailq entry */
+	TAILQ_FOREACH(te, lpm_list, next) {
+		if (te->data == (void *) lpm)
+			break;
+	}
+	if (te == NULL) {
+		rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+		return;
+	}
+
+	TAILQ_REMOVE(lpm_list, te, next);
+
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	rte_free(lpm);
+	rte_free(te);
+}
+VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
+
+void
+rte_lpm_free_v22(struct rte_lpm_extend *lpm)
 {
 	struct rte_lpm_list *lpm_list;
 	struct rte_tailq_entry *te;
@@ -248,6 +383,9 @@ rte_lpm_free(struct rte_lpm *lpm)
 	rte_free(lpm);
 	rte_free(te);
 }
+BIND_DEFAULT_SYMBOL(rte_lpm_free, _v22, 2.2);
+MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm_extend *lpm),
+		rte_lpm_free_v22);
 
 /*
  * Adds a rule to the rule table.
@@ -328,10 +466,80 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 	return rule_index;
 }
 
+static inline int32_t
+rule_add_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth,
+	uint32_t next_hop)
+{
+	uint32_t rule_gindex, rule_index, last_rule;
+	int i;
+
+	VERIFY_DEPTH(depth);
+
+	/* Scan through rule group to see if rule already exists. */
+	if (lpm->rule_info[depth - 1].used_rules > 0) {
+
+		/* rule_gindex stands for rule group index. */
+		rule_gindex = lpm->rule_info[depth - 1].first_rule;
+		/* Initialise rule_index to point to start of rule group. */
+		rule_index = rule_gindex;
+		/* Last rule = Last used rule in this rule group. */
+		last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+
+		for (; rule_index < last_rule; rule_index++) {
+
+			/* If rule already exists update its next_hop and return. */
+			if (lpm->rules_tbl[rule_index].ip == ip_masked) {
+				lpm->rules_tbl[rule_index].next_hop = next_hop;
+
+				return rule_index;
+			}
+		}
+
+		if (rule_index == lpm->max_rules)
+			return -ENOSPC;
+	} else {
+		/* Calculate the position in which the rule will be stored. */
+		rule_index = 0;
+
+		for (i = depth - 1; i > 0; i--) {
+			if (lpm->rule_info[i - 1].used_rules > 0) {
+				rule_index = lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules;
+				break;
+			}
+		}
+		if (rule_index == lpm->max_rules)
+			return -ENOSPC;
+
+		lpm->rule_info[depth - 1].first_rule = rule_index;
+	}
+
+	/* Make room for the new rule in the array. */
+	for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
+		if (lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
+			return -ENOSPC;
+
+		if (lpm->rule_info[i - 1].used_rules > 0) {
+			lpm->rules_tbl[lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules]
+					= lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
+			lpm->rule_info[i - 1].first_rule++;
+		}
+	}
+
+	/* Add the new rule. */
+	lpm->rules_tbl[rule_index].ip = ip_masked;
+	lpm->rules_tbl[rule_index].next_hop = next_hop;
+
+	/* Increment the used rules counter for this rule group. */
+	lpm->rule_info[depth - 1].used_rules++;
+
+	return rule_index;
+}
+
 /*
  * Delete a rule from the rule table.
  * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
  */
+
 static inline void
 rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
 {
@@ -353,6 +561,27 @@ rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
 	lpm->rule_info[depth - 1].used_rules--;
 }
 
+static inline void
+rule_delete_extend(struct rte_lpm_extend *lpm, int32_t rule_index, uint8_t depth)
+{
+	int i;
+
+	VERIFY_DEPTH(depth);
+
+	lpm->rules_tbl[rule_index] = lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
+			+ lpm->rule_info[depth - 1].used_rules - 1];
+
+	for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
+		if (lpm->rule_info[i].used_rules > 0) {
+			lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
+					lpm->rules_tbl[lpm->rule_info[i].first_rule + lpm->rule_info[i].used_rules - 1];
+			lpm->rule_info[i].first_rule--;
+		}
+	}
+
+	lpm->rule_info[depth - 1].used_rules--;
+}
+
 /*
  * Finds a rule in rule table.
  * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
@@ -378,6 +607,27 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
 	return -EINVAL;
 }
 
+static inline int32_t
+rule_find_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth)
+{
+	uint32_t rule_gindex, last_rule, rule_index;
+
+	VERIFY_DEPTH(depth);
+
+	rule_gindex = lpm->rule_info[depth - 1].first_rule;
+	last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+
+	/* Scan used rules at given depth to find rule. */
+	for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
+		/* If rule is found return the rule index. */
+		if (lpm->rules_tbl[rule_index].ip == ip_masked)
+			return rule_index;
+	}
+
+	/* If rule is not found return -EINVAL. */
+	return -EINVAL;
+}
+
 /*
  * Find, clean and allocate a tbl8.
  */
@@ -409,6 +659,33 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
 	return -ENOSPC;
 }
 
+static inline int32_t
+tbl8_alloc_extend(union rte_lpm_tbl8_entry_extend *tbl8)
+{
+	uint32_t tbl8_gindex; /* tbl8 group index. */
+	union rte_lpm_tbl8_entry_extend *tbl8_entry;
+
+	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
+	for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
+			tbl8_gindex++) {
+		tbl8_entry = &tbl8[tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
+		/* If a free tbl8 group is found clean it and set as VALID. */
+		if (!tbl8_entry->valid_group) {
+			memset(&tbl8_entry[0], 0,
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
+					sizeof(tbl8_entry[0]));
+
+			tbl8_entry->valid_group = VALID;
+
+			/* Return group index for allocated tbl8 group. */
+			return tbl8_gindex;
+		}
+	}
+
+	/* If there are no tbl8 groups free then return error. */
+	return -ENOSPC;
+}
+
 static inline void
 tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 {
@@ -416,6 +693,13 @@ tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 	tbl8[tbl8_group_start].valid_group = INVALID;
 }
 
+static inline void
+tbl8_free_extend(union rte_lpm_tbl8_entry_extend *tbl8, uint32_t tbl8_group_start)
+{
+	/* Set tbl8 group invalid. */
+	tbl8[tbl8_group_start].valid_group = INVALID;
+}
+
 static inline int32_t
 add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 		uint8_t next_hop)
@@ -485,12 +769,77 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 }
 
 static inline int32_t
-add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-		uint8_t next_hop)
+add_depth_small_extend(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+		uint32_t next_hop)
 {
-	uint32_t tbl24_index;
-	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
-		tbl8_range, i;
+	uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
+
+	/* Calculate the index into Table24. */
+	tbl24_index = ip >> 8;
+	tbl24_range = depth_to_range(depth);
+
+	for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+		/*
+		 * For invalid OR valid and non-extended tbl 24 entries set
+		 * entry.
+		 */
+		if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
+				lpm->tbl24[i].depth <= depth)) {
+
+			union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+				new_tbl24_entry.next_hop = next_hop;
+				new_tbl24_entry.valid = VALID;
+				new_tbl24_entry.ext_entry = 0;
+				new_tbl24_entry.depth = depth;
+
+			/* Setting tbl24 entry in one go to avoid race
+			 * conditions
+			 */
+			lpm->tbl24[i] = new_tbl24_entry;
+
+			continue;
+		}
+
+		if (lpm->tbl24[i].ext_entry == 1) {
+			/* If tbl24 entry is valid and extended calculate the
+			 *  index into tbl8.
+			 */
+			tbl8_index = lpm->tbl24[i].tbl8_gindex *
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			tbl8_group_end = tbl8_index +
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+			for (j = tbl8_index; j < tbl8_group_end; j++) {
+				if (!lpm->tbl8[j].valid ||
+						lpm->tbl8[j].depth <= depth) {
+					union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+						new_tbl8_entry.valid = VALID;
+						new_tbl8_entry.valid_group = VALID;
+						new_tbl8_entry.depth = depth;
+						new_tbl8_entry.next_hop = next_hop;
+
+					/*
+					 * Setting tbl8 entry in one go to avoid
+					 * race conditions
+					 */
+					lpm->tbl8[j] = new_tbl8_entry;
+
+					continue;
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
+static inline int32_t
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+		uint8_t next_hop)
+{
+	uint32_t tbl24_index;
+	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
+		tbl8_range, i;
 
 	tbl24_index = (ip_masked >> 8);
 	tbl8_range = depth_to_range(depth);
@@ -616,11 +965,140 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 	return 0;
 }
 
+static inline int32_t
+add_depth_big_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth,
+		uint32_t next_hop)
+{
+	uint32_t tbl24_index;
+	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
+		tbl8_range, i;
+
+	tbl24_index = (ip_masked >> 8);
+	tbl8_range = depth_to_range(depth);
+
+	if (!lpm->tbl24[tbl24_index].valid) {
+		/* Search for a free tbl8 group. */
+		tbl8_group_index = tbl8_alloc_extend(lpm->tbl8);
+
+		/* Check tbl8 allocation was successful. */
+		if (tbl8_group_index < 0) {
+			return tbl8_group_index;
+		}
+
+		/* Find index into tbl8 and range. */
+		tbl8_index = (tbl8_group_index *
+				RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
+				(ip_masked & 0xFF);
+
+		/* Set tbl8 entry. */
+		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+			lpm->tbl8[i].depth = depth;
+			lpm->tbl8[i].next_hop = next_hop;
+			lpm->tbl8[i].valid = VALID;
+		}
+
+		/*
+		 * Update tbl24 entry to point to new tbl8 entry. Note: The
+		 * ext_flag and tbl8_index need to be updated simultaneously,
+		 * so assign whole structure in one go
+		 */
+
+		union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+				new_tbl24_entry.next_hop = (uint8_t)tbl8_group_index;
+				new_tbl24_entry.valid = VALID;
+				new_tbl24_entry.ext_entry = 1;
+				new_tbl24_entry.depth = 0;
+
+		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+
+	}
+	/* If valid entry but not extended calculate the index into Table8. */
+	else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
+		/* Search for free tbl8 group. */
+		tbl8_group_index = tbl8_alloc_extend(lpm->tbl8);
+
+		if (tbl8_group_index < 0) {
+			return tbl8_group_index;
+		}
+
+		tbl8_group_start = tbl8_group_index *
+				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl8_group_end = tbl8_group_start +
+				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+		/* Populate new tbl8 with tbl24 value. */
+		for (i = tbl8_group_start; i < tbl8_group_end; i++) {
+			lpm->tbl8[i].valid = VALID;
+			lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
+			lpm->tbl8[i].next_hop =
+					lpm->tbl24[tbl24_index].next_hop;
+		}
+
+		tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+
+		/* Insert new rule into the tbl8 entry. */
+		for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
+			if (!lpm->tbl8[i].valid ||
+					lpm->tbl8[i].depth <= depth) {
+				lpm->tbl8[i].valid = VALID;
+				lpm->tbl8[i].depth = depth;
+				lpm->tbl8[i].next_hop = next_hop;
+
+				continue;
+			}
+		}
+
+		/*
+		 * Update tbl24 entry to point to new tbl8 entry. Note: The
+		 * ext_flag and tbl8_index need to be updated simultaneously,
+		 * so assign whole structure in one go.
+		 */
+
+		union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+				new_tbl24_entry.next_hop = (uint8_t)tbl8_group_index;
+				new_tbl24_entry.valid = VALID;
+				new_tbl24_entry.ext_entry = 1;
+				new_tbl24_entry.depth = 0;
+
+		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+
+	} else { /*
+		* If entry is valid and extended calculate the index into tbl8.
+		*/
+		tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
+		tbl8_group_start = tbl8_group_index *
+				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+
+		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+
+			if (!lpm->tbl8[i].valid ||
+					lpm->tbl8[i].depth <= depth) {
+				union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+						new_tbl8_entry.valid = VALID;
+						new_tbl8_entry.depth = depth;
+						new_tbl8_entry.next_hop = next_hop;
+						new_tbl8_entry.valid_group = lpm->tbl8[i].valid_group;
+
+				/*
+				 * Setting tbl8 entry in one go to avoid race
+				 * condition
+				 */
+				lpm->tbl8[i] = new_tbl8_entry;
+
+				continue;
+			}
+		}
+	}
+
+	return 0;
+}
+
 /*
  * Add a route
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 		uint8_t next_hop)
 {
 	int32_t rule_index, status = 0;
@@ -659,12 +1137,56 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 
 	return 0;
 }
+VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
+
+int
+rte_lpm_add_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+		uint32_t next_hop)
+{
+	int32_t rule_index, status = 0;
+	uint32_t ip_masked;
+
+	/* Check user arguments. */
+	if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+		return -EINVAL;
+
+	ip_masked = ip & depth_to_mask(depth);
+
+	/* Add the rule to the rule table. */
+	rule_index = rule_add_extend(lpm, ip_masked, depth, next_hop);
+
+	/* If there is no space available for the new rule return error. */
+	if (rule_index < 0) {
+		return rule_index;
+	}
+
+	if (depth <= MAX_DEPTH_TBL24) {
+		status = add_depth_small_extend(lpm, ip_masked, depth, next_hop);
+	} else { /* If depth > MAX_DEPTH_TBL24 */
+		status = add_depth_big_extend(lpm, ip_masked, depth, next_hop);
+
+		/*
+		 * If add fails due to exhaustion of tbl8 extensions delete
+		 * rule that was added to rule table.
+		 */
+		if (status < 0) {
+			rule_delete_extend(lpm, rule_index, depth);
+
+			return status;
+		}
+	}
+
+	return 0;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_add, _v22, 2.2);
+MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm_extend *lpm,
+		uint32_t ip, uint8_t depth, uint32_t next_hop), rte_lpm_add_v22);
 
 /*
  * Look for a rule in the high-level rules table
  */
 int
-rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 uint8_t *next_hop)
 {
 	uint32_t ip_masked;
@@ -688,6 +1210,37 @@ uint8_t *next_hop)
 	/* If rule is not found return 0. */
 	return 0;
 }
+VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
+
+int
+rte_lpm_is_rule_present_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop)
+{
+	uint32_t ip_masked;
+	int32_t rule_index;
+
+	/* Check user arguments. */
+	if ((lpm == NULL) ||
+		(next_hop == NULL) ||
+		(depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+		return -EINVAL;
+
+	/* Look for the rule using rule_find. */
+	ip_masked = ip & depth_to_mask(depth);
+	rule_index = rule_find_extend(lpm, ip_masked, depth);
+
+	if (rule_index >= 0) {
+		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+		return 1;
+	}
+
+	/* If rule is not found return 0. */
+	return 0;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v22, 2.2);
+MAP_STATIC_SYMBOL(int
+		rte_lpm_is_rule_present(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+		uint32_t *next_hop), rte_lpm_is_rule_present_v22);
 
 static inline int32_t
 find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
@@ -711,6 +1264,28 @@ find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub
 }
 
 static inline int32_t
+find_previous_rule_extend(struct rte_lpm_extend *lpm,
+		uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
+{
+	int32_t rule_index;
+	uint32_t ip_masked;
+	uint8_t prev_depth;
+
+	for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
+		ip_masked = ip & depth_to_mask(prev_depth);
+
+		rule_index = rule_find_extend(lpm, ip_masked, prev_depth);
+
+		if (rule_index >= 0) {
+			*sub_rule_depth = prev_depth;
+			return rule_index;
+		}
+	}
+
+	return -1;
+}
+
+static inline int32_t
 delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
 {
@@ -805,6 +1380,96 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 	return 0;
 }
 
+static inline int32_t
+delete_depth_small_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked,
+	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+{
+	uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
+
+	/* Calculate the range and index into Table24. */
+	tbl24_range = depth_to_range(depth);
+	tbl24_index = (ip_masked >> 8);
+
+	/*
+	 * Firstly check the sub_rule_index. A -1 indicates no replacement rule
+	 * and a positive number indicates a sub_rule_index.
+	 */
+	if (sub_rule_index < 0) {
+		/*
+		 * If no replacement rule exists then invalidate entries
+		 * associated with this rule.
+		 */
+		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+
+			if (lpm->tbl24[i].ext_entry == 0 &&
+					lpm->tbl24[i].depth <= depth) {
+				lpm->tbl24[i].valid = INVALID;
+			} else {
+				/*
+				 * If TBL24 entry is extended, then there has
+				 * to be a rule with depth >= 25 in the
+				 * associated TBL8 group.
+				 */
+
+				tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
+				tbl8_index = tbl8_group_index *
+						RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+				for (j = tbl8_index; j < (tbl8_index +
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
+
+					if (lpm->tbl8[j].depth <= depth)
+						lpm->tbl8[j].valid = INVALID;
+				}
+			}
+		}
+	} else {
+		/*
+		 * If a replacement rule exists then modify entries
+		 * associated with this rule.
+		 */
+
+		union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+				new_tbl24_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+				new_tbl24_entry.valid = VALID;
+				new_tbl24_entry.ext_entry = 0;
+				new_tbl24_entry.depth = sub_rule_depth;
+
+		union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+				new_tbl8_entry.valid = VALID;
+				new_tbl8_entry.depth = sub_rule_depth;
+				new_tbl8_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+
+
+		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+
+			if (lpm->tbl24[i].ext_entry == 0 &&
+					lpm->tbl24[i].depth <= depth) {
+				lpm->tbl24[i] = new_tbl24_entry;
+			} else {
+				/*
+				 * If TBL24 entry is extended, then there has
+				 * to be a rule with depth >= 25 in the
+				 * associated TBL8 group.
+				 */
+
+				tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
+				tbl8_index = tbl8_group_index *
+						RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+				for (j = tbl8_index; j < (tbl8_index +
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
+
+					if (lpm->tbl8[j].depth <= depth)
+						lpm->tbl8[j] = new_tbl8_entry;
+				}
+			}
+		}
+	}
+
+	return 0;
+}
+
 /*
  * Checks if table 8 group can be recycled.
  *
@@ -813,6 +1478,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
  * Return of value > -1 means tbl8 is in use but has all the same values and
  * thus can be recycled
  */
+
 static inline int32_t
 tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 {
@@ -860,6 +1526,53 @@ tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 }
 
 static inline int32_t
+tbl8_recycle_check_extend(union rte_lpm_tbl8_entry_extend *tbl8, uint32_t tbl8_group_start)
+{
+	uint32_t tbl8_group_end, i;
+
+	tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+	/*
+	 * Check the first entry of the given tbl8. If it is invalid we know
+	 * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
+	 *  (As they would affect all entries in a tbl8) and thus this table
+	 *  can not be recycled.
+	 */
+	if (tbl8[tbl8_group_start].valid) {
+		/*
+		 * If first entry is valid check if the depth is less than 24
+		 * and if so check the rest of the entries to verify that they
+		 * are all of this depth.
+		 */
+		if (tbl8[tbl8_group_start].depth < MAX_DEPTH_TBL24) {
+			for (i = (tbl8_group_start + 1); i < tbl8_group_end;
+					i++) {
+
+				if (tbl8[i].depth !=
+						tbl8[tbl8_group_start].depth) {
+
+					return -EEXIST;
+				}
+			}
+			/* If all entries are the same return the tbl8 index. */
+			return tbl8_group_start;
+		}
+
+		return -EEXIST;
+	}
+	/*
+	 * If the first entry is invalid check if the rest of the entries in
+	 * the tbl8 are invalid.
+	 */
+	for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
+		if (tbl8[i].valid)
+			return -EEXIST;
+	}
+	/* If no valid entries are found then return -EINVAL. */
+	return -EINVAL;
+}
+
+static inline int32_t
 delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
 {
@@ -938,11 +1651,86 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	return 0;
 }
 
+static inline int32_t
+delete_depth_big_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked,
+	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+{
+	uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
+			tbl8_range, i;
+	int32_t tbl8_recycle_index;
+
+	/*
+	 * Calculate the index into tbl24 and range. Note: All depths larger
+	 * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
+	 */
+	tbl24_index = ip_masked >> 8;
+
+	/* Calculate the index into tbl8 and range. */
+	tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
+	tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+	tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+	tbl8_range = depth_to_range(depth);
+
+	if (sub_rule_index < 0) {
+		/*
+		 * Loop through the range of entries on tbl8 for which the
+		 * rule_to_delete must be removed or modified.
+		 */
+		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+			if (lpm->tbl8[i].depth <= depth)
+				lpm->tbl8[i].valid = INVALID;
+		}
+	} else {
+		/* Set new tbl8 entry. */
+		union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+				new_tbl8_entry.valid = VALID;
+				new_tbl8_entry.depth = sub_rule_depth;
+				new_tbl8_entry.valid_group = lpm->tbl8[tbl8_group_start].valid_group;
+				new_tbl8_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+
+		/*
+		 * Loop through the range of entries on tbl8 for which the
+		 * rule_to_delete must be modified.
+		 */
+		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+			if (lpm->tbl8[i].depth <= depth)
+				lpm->tbl8[i] = new_tbl8_entry;
+		}
+	}
+
+	/*
+	 * Check if there are any valid entries in this tbl8 group. If all
+	 * tbl8 entries are invalid we can free the tbl8 and invalidate the
+	 * associated tbl24 entry.
+	 */
+
+	tbl8_recycle_index = tbl8_recycle_check_extend(lpm->tbl8, tbl8_group_start);
+
+	if (tbl8_recycle_index == -EINVAL) {
+		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		lpm->tbl24[tbl24_index].valid = 0;
+		tbl8_free_extend(lpm->tbl8, tbl8_group_start);
+	} else if (tbl8_recycle_index > -1) {
+		/* Update tbl24 entry. */
+		union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+				new_tbl24_entry.next_hop = lpm->tbl8[tbl8_recycle_index].next_hop;
+				new_tbl24_entry.valid = VALID;
+				new_tbl24_entry.ext_entry = 0;
+				new_tbl24_entry.depth = lpm->tbl8[tbl8_recycle_index].depth;
+
+		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		tbl8_free_extend(lpm->tbl8, tbl8_group_start);
+	}
+
+	return 0;
+}
+
 /*
  * Deletes a rule
  */
 int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 {
 	int32_t rule_to_delete_index, sub_rule_index;
 	uint32_t ip_masked;
@@ -993,12 +1781,85 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 		return delete_depth_big(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
 	}
 }
+VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
+
+int
+rte_lpm_delete_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth)
+{
+	int32_t rule_to_delete_index, sub_rule_index;
+	uint32_t ip_masked;
+	uint8_t sub_rule_depth;
+	/*
+	 * Check input arguments. Note: IP must be a positive integer of 32
+	 * bits in length therefore it need not be checked.
+	 */
+	if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
+		return -EINVAL;
+	}
+
+	ip_masked = ip & depth_to_mask(depth);
+
+	/*
+	 * Find the index of the input rule, that needs to be deleted, in the
+	 * rule table.
+	 */
+	rule_to_delete_index = rule_find_extend(lpm, ip_masked, depth);
+
+	/*
+	 * Check if rule_to_delete_index was found. If no rule was found the
+	 * function rule_find_extend returns -EINVAL.
+	 */
+	if (rule_to_delete_index < 0)
+		return -EINVAL;
+
+	/* Delete the rule from the rule table. */
+	rule_delete_extend(lpm, rule_to_delete_index, depth);
+
+	/*
+	 * Find rule to replace the rule_to_delete. If there is no rule to
+	 * replace the rule_to_delete we return -1 and invalidate the table
+	 * entries associated with this rule.
+	 */
+	sub_rule_depth = 0;
+	sub_rule_index = find_previous_rule_extend(lpm, ip, depth, &sub_rule_depth);
+
+	/*
+	 * If the input depth value is less than 25 use
+	 * delete_depth_small_extend, otherwise use delete_depth_big_extend.
+	 */
+	if (depth <= MAX_DEPTH_TBL24) {
+		return delete_depth_small_extend(lpm, ip_masked, depth,
+				sub_rule_index, sub_rule_depth);
+	} else { /* If depth > MAX_DEPTH_TBL24 */
+		return delete_depth_big_extend(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
+	}
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v22, 2.2);
+MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth), rte_lpm_delete_v22);
+
 
 /*
  * Delete all rules from the LPM table.
  */
 void
-rte_lpm_delete_all(struct rte_lpm *lpm)
+rte_lpm_delete_all_v20(struct rte_lpm *lpm)
+{
+	/* Zero rule information. */
+	memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+
+	/* Zero tbl24. */
+	memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
+
+	/* Zero tbl8. */
+	memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
+
+	/* Delete all rules from the rules table. */
+	memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+}
+VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
+
+void
+rte_lpm_delete_all_v22(struct rte_lpm_extend *lpm)
 {
 	/* Zero rule information. */
 	memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1012,3 +1873,5 @@ rte_lpm_delete_all(struct rte_lpm *lpm)
 	/* Delete all rules from the rules table. */
 	memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
 }
+BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v22, 2.2);
+MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm_extend *lpm), rte_lpm_delete_all_v22);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..5ecb95b 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -49,6 +49,8 @@
 #include <rte_common.h>
 #include <rte_vect.h>
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -128,12 +130,76 @@ struct rte_lpm_tbl8_entry {
 };
 #endif
 
+/** @internal bitmask with valid and ext_entry/valid_group fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND 0x03000000
+
+/** Bitmask used to indicate successful lookup */
+#define RTE_LPM_LOOKUP_SUCCESS_EXTEND          0x01000000
+
+/** Bitmask used to extract the 24-bit next hop value from a uint32_t entry */
+#define RTE_LPM_NEXT_HOP_MASK 0x00ffffff
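+
+/*
+ * In both byte-order variants below, bits 0-23 of the 32-bit entry hold
+ * the next hop, bit 24 is the valid flag, bit 25 is ext_entry/valid_group
+ * and bits 26-31 hold the depth; hence mask 0x01000000 tests "valid" and
+ * 0x03000000 tests "valid + ext_entry" in a single comparison.
+ */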
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+/** @internal Tbl24 entry structure. */
+union rte_lpm_tbl24_entry_extend {
+	uint32_t entry;
+	struct {
+		uint32_t next_hop	:24;/**< next hop. */
+		uint32_t valid		:1; /**< Validation flag. */
+		uint32_t ext_entry	:1; /**< External entry. */
+		uint32_t depth		:6; /**< Rule depth. */
+	};
+};
+/* Store the group index (i.e. gindex) into tbl8. */
+#define tbl8_gindex next_hop
+
+
+/** @internal Tbl8 entry structure. */
+union rte_lpm_tbl8_entry_extend {
+	uint32_t entry;
+	struct {
+		uint32_t next_hop	:24;/**< next hop. */
+		uint32_t valid		:1; /**< Validation flag. */
+		uint32_t valid_group	:1; /**< Group validation flag. */
+		uint32_t depth		:6; /**< Rule depth. */
+	};
+};
+#else
+union rte_lpm_tbl24_entry_extend {
+	struct {
+		uint32_t depth		:6;
+		uint32_t ext_entry	:1;
+		uint32_t valid		:1;
+		uint32_t next_hop	:24;
+	};
+	uint32_t entry;
+};
+#define tbl8_gindex next_hop
+
+union rte_lpm_tbl8_entry_extend {
+	struct {
+		uint32_t depth		:6;
+		uint32_t valid_group	:1;
+		uint32_t valid		:1;
+		uint32_t next_hop	:24;
+	};
+	uint32_t entry;
+};
+#endif
+
 /** @internal Rule structure. */
 struct rte_lpm_rule {
 	uint32_t ip; /**< Rule IP address. */
 	uint8_t  next_hop; /**< Rule next hop. */
 };
 
+/** @internal Rule (extend) structure. */
+struct rte_lpm_rule_extend {
+	uint32_t ip; /**< Rule IP address. */
+	uint32_t next_hop; /**< Rule next hop. */
+};
+
 /** @internal Contains metadata about the rules table. */
 struct rte_lpm_rule_info {
 	uint32_t used_rules; /**< Used rules so far. */
@@ -156,6 +222,22 @@ struct rte_lpm {
 			__rte_cache_aligned; /**< LPM rules. */
 };
 
+/** @internal LPM (extend) structure. */
+struct rte_lpm_extend {
+	/* LPM metadata. */
+	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
+	uint32_t max_rules; /**< Max. balanced rules per lpm. */
+	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+
+	/* LPM Tables. */
+	union rte_lpm_tbl24_entry_extend tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+			__rte_cache_aligned; /**< LPM tbl24 table. */
+	union rte_lpm_tbl8_entry_extend tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+			__rte_cache_aligned; /**< LPM tbl8 table. */
+	struct rte_lpm_rule_extend rules_tbl[0] \
+			__rte_cache_aligned; /**< LPM rules. */
+};
+
 /**
  * Create an LPM object.
  *
@@ -177,8 +259,12 @@ struct rte_lpm {
  *    - EEXIST - a memzone with the same name already exists
  *    - ENOMEM - no appropriate memory area found in which to create memzone
  */
-struct rte_lpm *
+struct rte_lpm_extend *
 rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
+struct rte_lpm *
+rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
+struct rte_lpm_extend *
+rte_lpm_create_v22(const char *name, int socket_id, int max_rules, int flags);
 
 /**
  * Find an existing LPM object and return a pointer to it.
@@ -190,8 +276,12 @@ rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
  *   set appropriately. Possible rte_errno values include:
  *    - ENOENT - required entry not available to return.
  */
-struct rte_lpm *
+struct rte_lpm_extend *
 rte_lpm_find_existing(const char *name);
+struct rte_lpm *
+rte_lpm_find_existing_v20(const char *name);
+struct rte_lpm_extend *
+rte_lpm_find_existing_v22(const char *name);
 
 /**
  * Free an LPM object.
@@ -202,7 +292,11 @@ rte_lpm_find_existing(const char *name);
  *   None
  */
 void
-rte_lpm_free(struct rte_lpm *lpm);
+rte_lpm_free(struct rte_lpm_extend *lpm);
+void
+rte_lpm_free_v20(struct rte_lpm *lpm);
+void
+rte_lpm_free_v22(struct rte_lpm_extend *lpm);
 
 /**
  * Add a rule to the LPM table.
@@ -219,7 +313,11 @@ rte_lpm_free(struct rte_lpm *lpm);
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
+int
+rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+int
+rte_lpm_add_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
 
 /**
  * Check if a rule is present in the LPM table,
@@ -237,8 +335,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  *   1 if the rule exists, 0 if it does not, a negative value on failure
  */
 int
-rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
+int
+rte_lpm_is_rule_present_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 uint8_t *next_hop);
+int
+rte_lpm_is_rule_present_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
 
 /**
  * Delete a rule from the LPM table.
@@ -253,7 +357,11 @@ uint8_t *next_hop);
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+rte_lpm_delete(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth);
 
 /**
  * Delete all rules from the LPM table.
@@ -262,7 +370,11 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
  *   LPM object handle
  */
 void
-rte_lpm_delete_all(struct rte_lpm *lpm);
+rte_lpm_delete_all(struct rte_lpm_extend *lpm);
+void
+rte_lpm_delete_all_v20(struct rte_lpm *lpm);
+void
+rte_lpm_delete_all_v22(struct rte_lpm_extend *lpm);
 
 /**
  * Lookup an IP into the LPM table.
@@ -276,6 +388,7 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
  * @return
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
+
 static inline int
 rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
 {
@@ -302,6 +415,32 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
 	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
 }
 
+static inline int
+rte_lpm_lookup_extend(struct rte_lpm_extend *lpm, uint32_t ip, uint32_t *next_hop)
+{
+	unsigned tbl24_index = (ip >> 8);
+	uint32_t tbl_entry;
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+	/* Copy tbl24 entry */
+	tbl_entry = lpm->tbl24[tbl24_index].entry;
+
+	/* Copy tbl8 entry (only if needed) */
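+	/* The test below is true only when both the valid and ext_entry
+	 * bits are set, i.e. this entry points into a tbl8 group. */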
+	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+
+		unsigned tbl8_index = (uint8_t)ip +
+				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+		tbl_entry = lpm->tbl8[tbl8_index].entry;
+	}
+
+	*next_hop = tbl_entry & RTE_LPM_NEXT_HOP_MASK;
+	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS_EXTEND) ? 0 : -ENOENT;
+}
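+
+/*
+ * Usage sketch (illustrative only):
+ *	uint32_t next_hop;
+ *	if (rte_lpm_lookup_extend(lpm, rte_be_to_cpu_32(ip_be), &next_hop) == 0)
+ *		next_hop now holds the 24-bit next hop (0 .. 2^24 - 1).
+ */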
+
 /**
  * Lookup multiple IP addresses in an LPM table. This may be implemented as a
  * macro, so the address of the function should not be used.
@@ -312,9 +451,9 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
  *   Array of IPs to be looked up in the LPM table
  * @param next_hops
  *   Next hop of the most specific rule found for IP (valid on lookup hit only).
- *   This is an array of two byte values. The most significant byte in each
+ *   This is an array of four-byte values. The most significant byte in each
  *   value says whether the lookup was successful (bitmask
- *   RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
+ *   RTE_LPM_LOOKUP_SUCCESS is set). The three least significant bytes are the
  *   actual next hop.
  * @param n
  *   Number of elements in ips (and next_hops) array to lookup. This should be a
@@ -322,8 +461,11 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
  *  @return
  *   -EINVAL for incorrect arguments, otherwise 0
  */
+
 #define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
 		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+#define rte_lpm_lookup_bulk_extend(lpm, ips, next_hops, n) \
+		rte_lpm_lookup_bulk_func_extend(lpm, ips, next_hops, n)
 
 static inline int
 rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
@@ -358,8 +500,42 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
 	return 0;
 }
 
+static inline int
+rte_lpm_lookup_bulk_func_extend(const struct rte_lpm_extend *lpm, const uint32_t *ips,
+		uint32_t *next_hops, const unsigned n)
+{
+	unsigned i;
+	unsigned tbl24_indexes[n];
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+			(next_hops == NULL)), -EINVAL);
+
+	for (i = 0; i < n; i++) {
+		tbl24_indexes[i] = ips[i] >> 8;
+	}
+
+	for (i = 0; i < n; i++) {
+		/* Simply copy tbl24 entry to output */
+		next_hops[i] = lpm->tbl24[tbl24_indexes[i]].entry;
+
+		/* Overwrite output with tbl8 entry if needed */
+		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+				RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+
+			unsigned tbl8_index = (uint8_t)ips[i] +
+					((uint8_t)next_hops[i] *
+					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+			next_hops[i] = lpm->tbl8[tbl8_index].entry;
+		}
+	}
+	return 0;
+}
+
 /* Mask four results. */
 #define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
+#define	 RTE_LPM_MASKX2_RES	UINT64_C(0x00ffffff00ffffff)
 
 /**
  * Lookup four IP addresses in an LPM table.
@@ -370,9 +546,9 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
  *   Four IPs to be looked up in the LPM table
  * @param hop
  *   Next hop of the most specific rule found for IP (valid on lookup hit only).
- *   This is an 4 elements array of two byte values.
- *   If the lookup was succesfull for the given IP, then least significant byte
- *   of the corresponding element is the  actual next hop and the most
+ *   This is a 4-element array of four-byte values.
+ *   If the lookup was successful for the given IP, then the three least
+ *   significant bytes of the corresponding element are the actual next hop and the most
  *   significant byte is zero.
  *   If the lookup for the given IP failed, then corresponding element would
  *   contain default value, see description of then next parameter.
@@ -380,6 +556,7 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
  *   Default value to populate into corresponding element of hop[] array,
  *   if lookup would fail.
  */
+
 static inline void
 rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
 	uint16_t defv)
@@ -473,6 +650,100 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
 	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
 }
 
+static inline void
+rte_lpm_lookupx4_extend(const struct rte_lpm_extend *lpm, __m128i ip, uint32_t hop[4],
+	uint32_t defv)
+{
+	__m128i i24;
+	rte_xmm_t i8;
+	uint32_t tbl[4];
+	uint64_t idx, pt, pt2;
+
+	const __m128i mask8 =
+		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
+
+	/*
+	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND for 2 LPM entries
+	 * as one 64-bit value (0x0300000003000000).
+	 */
+	const uint64_t mask_xv =
+		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND |
+		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND << 32);
+
+	/*
+	 * RTE_LPM_LOOKUP_SUCCESS_EXTEND for 2 LPM entries
+	 * as one 64-bit value (0x0100000001000000).
+	 */
+	const uint64_t mask_v =
+		((uint64_t)RTE_LPM_LOOKUP_SUCCESS_EXTEND |
+		(uint64_t)RTE_LPM_LOOKUP_SUCCESS_EXTEND << 32);
+
+	/* get 4 indexes for tbl24[]. */
+	i24 = _mm_srli_epi32(ip, CHAR_BIT);
+
+	/* extract values from tbl24[] */
+	idx = _mm_cvtsi128_si64(i24);
+	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
+
+	tbl[0] = lpm->tbl24[(uint32_t)idx].entry;
+	tbl[1] = lpm->tbl24[idx >> 32].entry;
+
+	idx = _mm_cvtsi128_si64(i24);
+
+	tbl[2] = lpm->tbl24[(uint32_t)idx].entry;
+	tbl[3] = lpm->tbl24[idx >> 32].entry;
+
+	/* get 4 indexes for tbl8[]. */
+	i8.x = _mm_and_si128(ip, mask8);
+
+	pt = (uint64_t)tbl[0] |
+		(uint64_t)tbl[1] << 32;
+	pt2 = (uint64_t)tbl[2] |
+		(uint64_t)tbl[3] << 32;
+
+	/* search successfully finished for all 4 IP addresses. */
+	if (likely((pt & mask_xv) == mask_v) &&
+			likely((pt2 & mask_xv) == mask_v)) {
+		*(uint64_t *)hop = pt & RTE_LPM_MASKX2_RES;
+		*(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX2_RES;
+		return;
+	}
+
+	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+		i8.u32[0] = i8.u32[0] +
+			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[0] = lpm->tbl8[i8.u32[0]].entry;
+	}
+	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+		i8.u32[1] = i8.u32[1] +
+			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[1] = lpm->tbl8[i8.u32[1]].entry;
+	}
+	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+		i8.u32[2] = i8.u32[2] +
+			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[2] = lpm->tbl8[i8.u32[2]].entry;
+	}
+	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+		i8.u32[3] = i8.u32[3] +
+			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl[3] = lpm->tbl8[i8.u32[3]].entry;
+	}
+
+	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+			? tbl[0] & RTE_LPM_NEXT_HOP_MASK : defv;
+	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+			? tbl[1] & RTE_LPM_NEXT_HOP_MASK : defv;
+	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+			? tbl[2] & RTE_LPM_NEXT_HOP_MASK : defv;
+	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+			? tbl[3] & RTE_LPM_NEXT_HOP_MASK : defv;
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 70e1c05..6ac8d15 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,23 +1,42 @@
 DPDK_2.0 {
-	global:
+      global:
+      rte_lpm6_add;
+      rte_lpm6_create;
+      rte_lpm6_delete;
+      rte_lpm6_delete_all;
+      rte_lpm6_delete_bulk_func;
+      rte_lpm6_find_existing;
+      rte_lpm6_free;
+      rte_lpm6_is_rule_present;
+      rte_lpm6_lookup;
+      rte_lpm6_lookup_bulk_func;
 
-	rte_lpm_add;
-	rte_lpm_create;
-	rte_lpm_delete;
-	rte_lpm_delete_all;
-	rte_lpm_find_existing;
-	rte_lpm_free;
-	rte_lpm_is_rule_present;
-	rte_lpm6_add;
-	rte_lpm6_create;
-	rte_lpm6_delete;
-	rte_lpm6_delete_all;
-	rte_lpm6_delete_bulk_func;
-	rte_lpm6_find_existing;
-	rte_lpm6_free;
-	rte_lpm6_is_rule_present;
-	rte_lpm6_lookup;
-	rte_lpm6_lookup_bulk_func;
-
-	local: *;
+      local: *;
 };
+
+DPDK_2.2 {
+       global:
+       rte_lpm_add;
+       rte_lpm_is_rule_present;
+       rte_lpm_create;
+       rte_lpm_delete;
+       rte_lpm_delete_all;
+       rte_lpm_find_existing;
+       rte_lpm_free;
+       local:
+       rule_add_extend;
+       rule_delete_extend;
+       rule_find_extend;
+       tbl8_alloc_extend;
+       tbl8_free_extend;
+       add_depth_small_extend;
+       add_depth_big_extend;
+       find_previous_rule_extend;
+       delete_depth_small_extend;
+       tbl8_recycle_check_extend;
+       delete_depth_big_extend;
+       rte_lpm_lookup_extend;
+       rte_lpm_lookup_bulk_func_extend;
+       rte_lpm_lookupx4_extend;
+
+} DPDK_2.0;
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index 849d899..ba55319 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -70,7 +70,7 @@ struct rte_table_lpm {
 	uint32_t offset;
 
 	/* Handle to low-level LPM table */
-	struct rte_lpm *lpm;
+	struct rte_lpm_extend *lpm;
 
 	/* Next Hop Table (NHT) */
 	uint32_t nht_users[RTE_TABLE_LPM_MAX_NEXT_HOPS];
@@ -202,7 +202,7 @@ rte_table_lpm_entry_add(
 	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
 	uint32_t nht_pos, nht_pos0_valid;
 	int status;
-	uint8_t nht_pos0 = 0;
+	uint32_t nht_pos0 = 0;
 
 	/* Check input parameters */
 	if (lpm == NULL) {
@@ -268,7 +268,7 @@ rte_table_lpm_entry_delete(
 {
 	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
 	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
-	uint8_t nht_pos;
+	uint32_t nht_pos;
 	int status;
 
 	/* Check input parameters */
@@ -342,9 +342,9 @@ rte_table_lpm_lookup(
 			uint32_t ip = rte_bswap32(
 				RTE_MBUF_METADATA_UINT32(pkt, lpm->offset));
 			int status;
-			uint8_t nht_pos;
+			uint32_t nht_pos;
 
-			status = rte_lpm_lookup(lpm->lpm, ip, &nht_pos);
+			status = rte_lpm_lookup_extend(lpm->lpm, ip, &nht_pos);
 			if (status == 0) {
 				pkts_out_mask |= pkt_mask;
 				entries[i] = (void *) &lpm->nht[nht_pos *
-- 
1.9.1

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v1 2/3] examples: update of apps using librte_lpm (ipv4)
  2015-10-23 13:51 [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
@ 2015-10-23 13:51 ` Michal Jastrzebski
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
  2015-10-23 16:20 ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
  3 siblings, 0 replies; 24+ messages in thread
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
  To: dev

From: Michal Kobylinski <michalx.kobylinski@intel.com>

This patch adapts the example applications to use the new rte_lpm structures.

Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
---
 examples/ip_fragmentation/main.c | 10 +++++-----
 examples/ip_reassembly/main.c    |  9 +++++----
 examples/l3fwd-power/main.c      |  2 +-
 examples/l3fwd-vf/main.c         |  2 +-
 examples/l3fwd/main.c            | 16 ++++++++--------
 examples/load_balancer/runtime.c |  3 +--
 6 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index fbc0b8d..41df0b1 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -266,8 +266,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 		uint8_t queueid, uint8_t port_in)
 {
 	struct rx_queue *rxq;
-	uint32_t i, len;
-	uint8_t next_hop, port_out, ipv6;
+	uint32_t i, len, next_hop;
+	uint8_t next_hop6, port_out, ipv6;
 	int32_t len2;
 
 	ipv6 = 0;
@@ -327,9 +327,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 		ip_hdr = rte_pktmbuf_mtod(m, struct ipv6_hdr *);
 
 		/* Find destination port */
-		if (rte_lpm6_lookup(rxq->lpm6, ip_hdr->dst_addr, &next_hop) == 0 &&
-				(enabled_port_mask & 1 << next_hop) != 0) {
-			port_out = next_hop;
+		if (rte_lpm6_lookup(rxq->lpm6, ip_hdr->dst_addr, &next_hop6) == 0 &&
+				(enabled_port_mask & 1 << next_hop6) != 0) {
+			port_out = next_hop6;
 
 			/* Build transmission burst for new port */
 			len = qconf->tx_mbufs[port_out].len;
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 741c398..4a9dcbe 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -347,7 +347,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
 	struct rte_ip_frag_death_row *dr;
 	struct rx_queue *rxq;
 	void *d_addr_bytes;
-	uint8_t next_hop, dst_port;
+	uint8_t dst_port, next_hop6;
+	uint32_t next_hop;
 
 	rxq = &qconf->rx_queue_list[queue];
 
@@ -427,9 +428,9 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
 		}
 
 		/* Find destination port */
-		if (rte_lpm6_lookup(rxq->lpm6, ip_hdr->dst_addr, &next_hop) == 0 &&
-				(enabled_port_mask & 1 << next_hop) != 0) {
-			dst_port = next_hop;
+		if (rte_lpm6_lookup(rxq->lpm6, ip_hdr->dst_addr, &next_hop6) == 0 &&
+				(enabled_port_mask & 1 << next_hop6) != 0) {
+			dst_port = next_hop6;
 		}
 
 		eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv6);
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 8bb88ce..f647713 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -631,7 +631,7 @@ static inline uint8_t
 get_ipv4_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid,
 		lookup_struct_t *ipv4_l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
 			rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index 01f610e..193c3ab 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -440,7 +440,7 @@ get_dst_port(struct ipv4_hdr *ipv4_hdr,  uint8_t portid, lookup_struct_t * l3fwd
 static inline uint8_t
 get_dst_port(struct ipv4_hdr *ipv4_hdr,  uint8_t portid, lookup_struct_t * l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(l3fwd_lookup_struct,
 			rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 1f3e5c6..0f410f0 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -710,7 +710,7 @@ get_ipv6_dst_port(void *ipv6_hdr,  uint8_t portid, lookup_struct_t * ipv6_l3fwd_
 static inline uint8_t
 get_ipv4_dst_port(void *ipv4_hdr,  uint8_t portid, lookup_struct_t * ipv4_l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
 		rte_be_to_cpu_32(((struct ipv4_hdr *)ipv4_hdr)->dst_addr),
@@ -1151,7 +1151,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
  * to BAD_PORT value.
  */
 static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint32_t *dp, uint32_t ptype)
 {
 	uint8_t ihl;
 
@@ -1182,7 +1182,7 @@ static inline __attribute__((always_inline)) uint16_t
 get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 	uint32_t dst_ipv4, uint8_t portid)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 	struct ipv6_hdr *ipv6_hdr;
 	struct ether_hdr *eth_hdr;
 
@@ -1194,7 +1194,7 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
 		ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
 		if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
-				ipv6_hdr->dst_addr, &next_hop) != 0)
+				ipv6_hdr->dst_addr, (uint8_t *)&next_hop) != 0)
 			next_hop = portid;
 	} else {
 		next_hop = portid;
@@ -1205,7 +1205,7 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 
 static inline void
 process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
-	uint16_t *dst_port, uint8_t portid)
+	uint32_t *dst_port, uint8_t portid)
 {
 	struct ether_hdr *eth_hdr;
 	struct ipv4_hdr *ipv4_hdr;
@@ -1275,7 +1275,7 @@ processx4_step2(const struct lcore_conf *qconf,
 		uint32_t ipv4_flag,
 		uint8_t portid,
 		struct rte_mbuf *pkt[FWDSTEP],
-		uint16_t dprt[FWDSTEP])
+		uint32_t dprt[FWDSTEP])
 {
 	rte_xmm_t dst;
 	const  __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1301,7 +1301,7 @@ processx4_step2(const struct lcore_conf *qconf,
  * Perform RFC1812 checks and updates for IPV4 packets.
  */
 static inline void
-processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
+processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint32_t dst_port[FWDSTEP])
 {
 	__m128i te[FWDSTEP];
 	__m128i ve[FWDSTEP];
@@ -1527,7 +1527,7 @@ main_loop(__attribute__((unused)) void *dummy)
 	int32_t k;
 	uint16_t dlp;
 	uint16_t *lp;
-	uint16_t dst_port[MAX_PKT_BURST];
+	uint32_t dst_port[MAX_PKT_BURST];
 	__m128i dip[MAX_PKT_BURST / FWDSTEP];
 	uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
 	uint16_t pnum[MAX_PKT_BURST + 1];
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 2b265c2..bca63de 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -524,8 +524,7 @@ app_lcore_worker(
 		for (j = 0; j < bsz_rd; j ++) {
 			struct rte_mbuf *pkt;
 			struct ipv4_hdr *ipv4_hdr;
-			uint32_t ipv4_dst, pos;
-			uint8_t port;
+			uint32_t ipv4_dst, pos, port;
 
 			if (likely(j < bsz_rd - 1)) {
 				APP_WORKER_PREFETCH1(rte_pktmbuf_mtod(lp->mbuf_in.array[j+1], unsigned char *));
-- 
1.9.1

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
  2015-10-23 13:51 [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 2/3] examples: update of apps using librte_lpm (ipv4) Michal Jastrzebski
@ 2015-10-23 13:51 ` Michal Jastrzebski
  2015-10-23 14:21   ` Bruce Richardson
  2015-10-23 16:20 ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
  3 siblings, 1 reply; 24+ messages in thread
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
  To: dev

From: Michal Kobylinski <michalx.kobylinski@intel.com>

Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
---
 doc/guides/rel_notes/release_2_2.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index ab1c25f..3c616ab 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -121,6 +121,8 @@ ABI Changes
 
 * librte_cfgfile: Allow longer names and values by increasing the constants
   CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
+
+* librte_lpm: Increase number of next hops for IPv4 to 2^24
 
 
 Shared Library Versions
-- 
1.9.1

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
@ 2015-10-23 14:21   ` Bruce Richardson
  2015-10-23 14:33     ` Jastrzebski, MichalX K
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Richardson @ 2015-10-23 14:21 UTC (permalink / raw)
  To: Michal Jastrzebski; +Cc: dev

On Fri, Oct 23, 2015 at 03:51:51PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> 
> Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>

Hi Michal,

For when you do your v2, this doc update should be included with the relevant
changes, i.e. in patch 1, not as a separate doc patch.

/Bruce
> ---
>  doc/guides/rel_notes/release_2_2.rst | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
> index ab1c25f..3c616ab 100644
> --- a/doc/guides/rel_notes/release_2_2.rst
> +++ b/doc/guides/rel_notes/release_2_2.rst
> @@ -121,6 +121,8 @@ ABI Changes
>  
>  * librte_cfgfile: Allow longer names and values by increasing the constants
>    CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
> +
> +* librte_lpm: Increase number of next hops for IPv4 to 2^24
>  
>  
>  Shared Library Versions
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
  2015-10-23 14:21   ` Bruce Richardson
@ 2015-10-23 14:33     ` Jastrzebski, MichalX K
  0 siblings, 0 replies; 24+ messages in thread
From: Jastrzebski, MichalX K @ 2015-10-23 14:33 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, October 23, 2015 4:22 PM
> To: Jastrzebski, MichalX K
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes
> in librte_lpm
> 
> On Fri, Oct 23, 2015 at 03:51:51PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >
> > Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
> 
> Hi Michal,
> 
> for when you do your v2, this doc update should be included in with the
> relevant
> changes i.e. in patch 1, not as a separate doc patch.
> 
> /Bruce

Thanks Bruce, we will do that.
The reason it is separated now is that the 1st patch is extremely big and difficult to review.
I wonder if in v2 we could move the changes related to the test application to the second patch?
Of course this will mean that, without applying the whole patch-set, DPDK won't compile.

> > ---
> >  doc/guides/rel_notes/release_2_2.rst | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_2_2.rst
> b/doc/guides/rel_notes/release_2_2.rst
> > index ab1c25f..3c616ab 100644
> > --- a/doc/guides/rel_notes/release_2_2.rst
> > +++ b/doc/guides/rel_notes/release_2_2.rst
> > @@ -121,6 +121,8 @@ ABI Changes
> >
> >  * librte_cfgfile: Allow longer names and values by increasing the constants
> >    CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
> > +
> > +* librte_lpm: Increase number of next hops for IPv4 to 2^24
> >
> >
> >  Shared Library Versions
> > --
> > 1.9.1
> >

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
@ 2015-10-23 14:38   ` Bruce Richardson
  2015-10-23 14:59     ` Jastrzebski, MichalX K
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Richardson @ 2015-10-23 14:38 UTC (permalink / raw)
  To: Michal Jastrzebski; +Cc: dev

On Fri, Oct 23, 2015 at 03:51:49PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> 
> Main implementation - changes to lpm library regarding new data types.
> Additionally this patch implements changes required by test application. 
> ABI versioning requirements are met only for lpm library, 
> for table library it will be sent in v2 of this patch-set.
>  
> Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
> ---
>  app/test/test_func_reentrancy.c    |   4 +-
>  app/test/test_lpm.c                | 227 +++++-----
>  lib/librte_lpm/rte_lpm.c           | 887 ++++++++++++++++++++++++++++++++++++-
>  lib/librte_lpm/rte_lpm.h           | 295 +++++++++++-
>  lib/librte_lpm/rte_lpm_version.map |  59 ++-
>  lib/librte_table/rte_table_lpm.c   |  10 +-
>  6 files changed, 1322 insertions(+), 160 deletions(-)
> 
> diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
> index dbecc52..331ab29 100644
> --- a/app/test/test_func_reentrancy.c
> +++ b/app/test/test_func_reentrancy.c
> @@ -343,7 +343,7 @@ static void
>  lpm_clean(unsigned lcore_id)
>  {
>  	char lpm_name[MAX_STRING_SIZE];
> -	struct rte_lpm *lpm;
> +	struct rte_lpm_extend *lpm;

I thought this patchset was just to increase the size of the lpm entries, not
to create a whole new entry type? The structure names etc. should all stay the
same, and let the ABI versioning take care of handling code using the older
structures. 
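
For example, with the rte_compat macros this patch already uses, the exported
name can stay unchanged while binaries built against 2.0 keep the old code:

	VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
	BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v22, 2.2);
	MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip,
			uint8_t depth), rte_lpm_delete_v22);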

/Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 14:38   ` Bruce Richardson
@ 2015-10-23 14:59     ` Jastrzebski, MichalX K
  0 siblings, 0 replies; 24+ messages in thread
From: Jastrzebski, MichalX K @ 2015-10-23 14:59 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, October 23, 2015 4:39 PM
> To: Jastrzebski, MichalX K
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops
> for lpm (ipv4)
> 
> On Fri, Oct 23, 2015 at 03:51:49PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >
> > Main implementation - changes to lpm library regarding new data types.
> > Additionally this patch implements changes required by test application.
> > ABI versioning requirements are met only for lpm library,
> > for table library it will be sent in v2 of this patch-set.
> >
> > Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
> > ---
> >  app/test/test_func_reentrancy.c    |   4 +-
> >  app/test/test_lpm.c                | 227 +++++-----
> >  lib/librte_lpm/rte_lpm.c           | 887
> ++++++++++++++++++++++++++++++++++++-
> >  lib/librte_lpm/rte_lpm.h           | 295 +++++++++++-
> >  lib/librte_lpm/rte_lpm_version.map |  59 ++-
> >  lib/librte_table/rte_table_lpm.c   |  10 +-
> >  6 files changed, 1322 insertions(+), 160 deletions(-)
> >
> > diff --git a/app/test/test_func_reentrancy.c
> b/app/test/test_func_reentrancy.c
> > index dbecc52..331ab29 100644
> > --- a/app/test/test_func_reentrancy.c
> > +++ b/app/test/test_func_reentrancy.c
> > @@ -343,7 +343,7 @@ static void
> >  lpm_clean(unsigned lcore_id)
> >  {
> >  	char lpm_name[MAX_STRING_SIZE];
> > -	struct rte_lpm *lpm;
> > +	struct rte_lpm_extend *lpm;
> 
> I thought this patchset was just to increase the size of the lpm entries, not
> to create a whole new entry type? The structure names etc. should all stay
> the
> same, and let the ABI versionning take care of handling code using the older
> structures.
> 
> /Bruce

Hi Bruce, 
I see your point. I think we should use the RTE_NEXT_ABI macro here.
The code will have to be duplicated, but it will allow the old names to be kept in the new version.
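
Just as a sketch, the new layout would then be guarded at build time (field
names as in this patch; the #else branch follows the existing 2-byte entry
in rte_lpm.h):

#ifdef RTE_NEXT_ABI
struct rte_lpm_tbl24_entry {	/* 4-byte entry, 24-bit next hop */
	uint32_t next_hop	:24;
	uint32_t valid		:1;
	uint32_t ext_entry	:1;
	uint32_t depth		:6;
};
#else
struct rte_lpm_tbl24_entry {	/* legacy 2-byte entry */
	uint8_t next_hop;
	uint8_t valid		:1;
	uint8_t ext_entry	:1;
	uint8_t depth		:6;
};
#endif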

Michal

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 13:51 [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
                   ` (2 preceding siblings ...)
  2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
@ 2015-10-23 16:20 ` Matthew Hall
  2015-10-23 16:33   ` Stephen Hemminger
  2015-10-24  6:09   ` Matthew Hall
  3 siblings, 2 replies; 24+ messages in thread
From: Matthew Hall @ 2015-10-23 16:20 UTC (permalink / raw)
  To: Michal Jastrzebski; +Cc: dev

On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> 
> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> number of next hops to 256, as the next hop ID is an 8-bit long field.
> Proposed extension increase number of next hops for IPv4 to 2^24 and
> also allows 32-bits read/write operations.
> 
> This patchset requires additional change to rte_table library to meet 
> ABI compatibility requirements. A v2 will be sent next week.

I also have a patchset for this.

I will send it out as well so we could compare.

Matthew.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 16:20 ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
@ 2015-10-23 16:33   ` Stephen Hemminger
  2015-10-23 18:38     ` Matthew Hall
  2015-10-24  6:09   ` Matthew Hall
  1 sibling, 1 reply; 24+ messages in thread
From: Stephen Hemminger @ 2015-10-23 16:33 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

On Fri, 23 Oct 2015 09:20:33 -0700
Matthew Hall <mhall@mhcomputing.net> wrote:

> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > 
> > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > number of next hops to 256, as the next hop ID is an 8-bit long field.
> > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > also allows 32-bits read/write operations.
> > 
> > This patchset requires additional change to rte_table library to meet 
> > ABI compatibility requirements. A v2 will be sent next week.
> 
> I also have a patchset for this.
> 
> I will send it out as well so we could compare.
> 
> Matthew.

Could you consider rolling in the Brocade/Vyatta changes to LPM
structure as well. Would prefer only one ABI change

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 16:33   ` Stephen Hemminger
@ 2015-10-23 18:38     ` Matthew Hall
  2015-10-23 19:13       ` Vladimir Medvedkin
  2015-10-23 19:59       ` Stephen Hemminger
  0 siblings, 2 replies; 24+ messages in thread
From: Matthew Hall @ 2015-10-23 18:38 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

On Fri, Oct 23, 2015 at 09:33:05AM -0700, Stephen Hemminger wrote:
> On Fri, 23 Oct 2015 09:20:33 -0700
> Matthew Hall <mhall@mhcomputing.net> wrote:
> 
> > On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > > 
> > > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > > number of next hops to 256, as the next hop ID is an 8-bit long field.
> > > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > > also allows 32-bits read/write operations.
> > > 
> > > This patchset requires additional change to rte_table library to meet 
> > > ABI compatibility requirements. A v2 will be sent next week.
> > 
> > I also have a patchset for this.
> > 
> > I will send it out as well so we could compare.
> > 
> > Matthew.
> 
> Could you consider rolling in the Brocade/Vyatta changes to LPM
> structure as well. Would prefer only one ABI change

Hi Stephen,

I asked you if you could send me these a while ago but I never heard anything.

That's the only reason I made my own version.

If you have them available also maybe we can consolidate things.

Matthew.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 18:38     ` Matthew Hall
@ 2015-10-23 19:13       ` Vladimir Medvedkin
  2015-10-23 19:59       ` Stephen Hemminger
  1 sibling, 0 replies; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-23 19:13 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

Hi all,

I also have an LPM library implementation. Main points:
- First, we don't need two separate structures rte_lpm_tbl8_entry and
rte_lpm_tbl24_entry. I think it's better to merge them into one
rte_lpm_tbl_entry, because the only difference is the name of one bit -
valid_group vs ext_entry. Let its name be ext_valid.
- Second, I think that 16 bits is more than enough for the next-hop index.
It's better to use the remaining 8 bits for a so-called forwarding class.
It is something like Juniper's DCU, which can help us classify traffic
based on the destination prefix. But after a conversation with Bruce
Richardson I agree with him that the next-hop index and forwarding class
can be split out of a single return value by the application (see the
sketch after this list).
- Third, I want to add the possibility to look up the AS number (or any
other 4 bytes) that originated the prefix. It can be defined like:
union rte_lpm_tbl_entry_extend {
#ifdef RTE_LPM_ASNUM
	uint64_t entry;
#else
	uint32_t entry;
#endif
#ifdef RTE_LPM_ASNUM
	uint32_t as_num;
#endif
	struct {
		uint32_t next_hop	:24;/**< next hop. */
		uint32_t valid		:1; /**< Validation flag. */
		uint32_t ext_valid	:1; /**< External entry. */
		uint32_t depth		:6; /**< Rule depth. */
	};
};
- Fourth, the next-hop index is extended not only to increase the number of
next hops but also to allow more specific routes (i.e. more tbl8 groups),
so I think this should be fixed - the (uint8_t) cast limits the tbl8 group
index to 256 groups:
+               unsigned tbl8_index = (uint8_t)ip +
+                               ((uint8_t)tbl_entry *
+                               RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
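
For the second point, the application-side split could look like this (the
16/8 split and the names are just an example):

uint32_t res = ret & RTE_LPM_NEXT_HOP_MASK; /* 24-bit lookup result */
uint16_t next_hop = res & 0xffff;           /* low 16 bits: next hop */
uint8_t fwd_class = res >> 16;              /* high 8 bits: forwarding class */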

Regards,
Vladimir

2015-10-23 21:38 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:

> On Fri, Oct 23, 2015 at 09:33:05AM -0700, Stephen Hemminger wrote:
> > On Fri, 23 Oct 2015 09:20:33 -0700
> > Matthew Hall <mhall@mhcomputing.net> wrote:
> >
> > > On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > > From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > > >
> > > > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > > > number of next hops to 256, as the next hop ID is an 8-bit long
> field.
> > > > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > > > also allows 32-bits read/write operations.
> > > >
> > > > This patchset requires additional change to rte_table library to meet
> > > > ABI compatibility requirements. A v2 will be sent next week.
> > >
> > > I also have a patchset for this.
> > >
> > > I will send it out as well so we could compare.
> > >
> > > Matthew.
> >
> > Could you consider rolling in the Brocade/Vyatta changes to LPM
> > structure as well. Would prefer only one ABI change
>
> Hi Stephen,
>
> I asked you if you could send me these a while ago but I never heard
> anything.
>
> That's the only reason I made my own version.
>
> If you have them available also maybe we can consolidate things.
>
> Matthew.
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 18:38     ` Matthew Hall
  2015-10-23 19:13       ` Vladimir Medvedkin
@ 2015-10-23 19:59       ` Stephen Hemminger
  1 sibling, 0 replies; 24+ messages in thread
From: Stephen Hemminger @ 2015-10-23 19:59 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

From 9efec4571eec4db455a29773b95cf9264c046a03 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <shemming@brocade.com>
Date: Fri, 23 Oct 2015 12:55:05 -0700
Subject: [PATCH] lpm: brocade extensions

This is a brute-force merge of the Brocade extension to LPM into the
current DPDK source tree.

No API/ABI compatibility is expected.
  1. Allow arbitrary number of rules
  2. Get rid of N^2 search for rule add/delete
  3. Add route scope
  4. Extend nexthop to 16 bits
  5. Extend to allow for more info on delete, (callback and nexthop)
  6. Dynamically grow /8 table (requires RCU)
  7. Support full /0 and /32 rules

---
 lib/librte_lpm/rte_lpm.c | 814 ++++++++++++++++++++++++++---------------------
 lib/librte_lpm/rte_lpm.h | 381 +++++++---------------
 2 files changed, 567 insertions(+), 628 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..ef1f0bf 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -2,6 +2,7 @@
  *   BSD LICENSE
  *
  *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2012-2015 Brocade Communications Systems
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -38,13 +39,15 @@
 #include <stdio.h>
 #include <errno.h>
 #include <sys/queue.h>
+#include <bsd/sys/tree.h>
 
 #include <rte_log.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
-#include <rte_memory.h>        /* for definition of RTE_CACHE_LINE_SIZE */
+#include <rte_memory.h>               /* for definition of RTE_CACHE_LINE_SIZE */
 #include <rte_malloc.h>
 #include <rte_memzone.h>
+#include <rte_tailq.h>
 #include <rte_eal.h>
 #include <rte_eal_memconfig.h>
 #include <rte_per_lcore.h>
@@ -52,9 +55,25 @@
 #include <rte_errno.h>
 #include <rte_rwlock.h>
 #include <rte_spinlock.h>
+#include <rte_debug.h>
 
 #include "rte_lpm.h"
 
+#include <urcu-qsbr.h>
+
+/** Auto-growth of tbl8 */
+#define RTE_LPM_TBL8_INIT_GROUPS	256	/* power of 2 */
+#define RTE_LPM_TBL8_INIT_ENTRIES	(RTE_LPM_TBL8_INIT_GROUPS * \
+					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
+/** Rule structure. */
+struct rte_lpm_rule {
+	uint32_t ip;	    /**< Rule IP address. */
+	uint16_t next_hop;  /**< Rule next hop. */
+	uint8_t  scope;	    /**< Rule scope */
+	uint8_t	 reserved;
+	RB_ENTRY(rte_lpm_rule) link;
+};
+
 TAILQ_HEAD(rte_lpm_list, rte_tailq_entry);
 
 static struct rte_tailq_elem rte_lpm_tailq = {
@@ -71,31 +90,55 @@ enum valid_flag {
 
 /* Macro to enable/disable run-time checks. */
 #if defined(RTE_LIBRTE_LPM_DEBUG)
-#include <rte_debug.h>
-#define VERIFY_DEPTH(depth) do {                                \
-	if ((depth == 0) || (depth > RTE_LPM_MAX_DEPTH))        \
+#define VERIFY_DEPTH(depth) do {				\
+	if (depth > RTE_LPM_MAX_DEPTH)				\
 		rte_panic("LPM: Invalid depth (%u) at line %d", \
-				(unsigned)(depth), __LINE__);   \
+				(unsigned)(depth), __LINE__);	\
 } while (0)
 #else
 #define VERIFY_DEPTH(depth)
 #endif
 
+/* Comparison function for red-black tree nodes.
+   "If the first argument is smaller than the second, the function
+    returns a value smaller than zero.	If they are equal, the function
+    returns zero.  Otherwise, it should return a value greater than zero."
+*/
+static inline int rules_cmp(const struct rte_lpm_rule *r1,
+			    const struct rte_lpm_rule *r2)
+{
+	if (r1->ip < r2->ip)
+		return -1;
+	else if (r1->ip > r2->ip)
+		return 1;
+	else
+		return r1->scope - r2->scope;
+}
+
+/* Satisfy old style attribute in tree.h header */
+#ifndef __unused
+#define __unused __attribute__ ((unused))
+#endif
+
+/* Generate internal functions and make them static. */
+RB_GENERATE_STATIC(rte_lpm_rules_tree, rte_lpm_rule, link, rules_cmp)
+
 /*
  * Converts a given depth value to its corresponding mask value.
  *
  * depth  (IN)		: range = 1 - 32
- * mask   (OUT)		: 32bit mask
+ * mask   (OUT)                : 32bit mask
  */
 static uint32_t __attribute__((pure))
 depth_to_mask(uint8_t depth)
 {
 	VERIFY_DEPTH(depth);
 
-	/* To calculate a mask start with a 1 on the left hand side and right
-	 * shift while populating the left hand side with 1's
-	 */
-	return (int)0x80000000 >> (depth - 1);
+	/* per C std. shift of 32 bits is undefined */
+	if (depth == 0)
+		return 0;
+
+	return ~0u << (32 - depth);
 }
 
 /*
@@ -113,7 +156,7 @@ depth_to_range(uint8_t depth)
 		return 1 << (MAX_DEPTH_TBL24 - depth);
 
 	/* Else if depth is greater than 24 */
-	return (1 << (RTE_LPM_MAX_DEPTH - depth));
+	return 1 << (32 - depth);
 }
 
 /*
@@ -148,31 +191,28 @@ rte_lpm_find_existing(const char *name)
  * Allocates memory for LPM object
  */
 struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules,
-		__rte_unused int flags)
+rte_lpm_create(const char *name, int socket_id)
 {
 	char mem_name[RTE_LPM_NAMESIZE];
 	struct rte_lpm *lpm = NULL;
 	struct rte_tailq_entry *te;
-	uint32_t mem_size;
+	unsigned int depth;
 	struct rte_lpm_list *lpm_list;
 
+	/* check that we have an initialized tail queue */
 	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
 
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 4);
+	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 4);
 
 	/* Check user arguments. */
-	if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
+	if ((name == NULL) || (socket_id < -1)) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
 
 	snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
 
-	/* Determine the amount of memory to allocate. */
-	mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 
 	/* guarantee there's no existing */
@@ -192,17 +232,33 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
 	}
 
 	/* Allocate memory to store the LPM data structures. */
-	lpm = (struct rte_lpm *)rte_zmalloc_socket(mem_name, mem_size,
-			RTE_CACHE_LINE_SIZE, socket_id);
+	lpm = rte_zmalloc_socket(mem_name, sizeof(*lpm), RTE_CACHE_LINE_SIZE,
+				 socket_id);
 	if (lpm == NULL) {
 		RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
-		rte_free(te);
 		goto exit;
 	}
 
 	/* Save user arguments. */
-	lpm->max_rules = max_rules;
 	snprintf(lpm->name, sizeof(lpm->name), "%s", name);
+	lpm->socket_id = socket_id;
+
+	/* Vyatta change to use red-black tree */
+	for (depth = 0; depth < RTE_LPM_MAX_DEPTH; ++depth)
+		RB_INIT(&lpm->rules[depth]);
+
+	/* Vyatta change to dynamically grow tbl8 */
+	lpm->tbl8_num_groups = RTE_LPM_TBL8_INIT_GROUPS;
+	lpm->tbl8_rover = RTE_LPM_TBL8_INIT_GROUPS - 1;
+	lpm->tbl8 = rte_calloc_socket(NULL, RTE_LPM_TBL8_INIT_ENTRIES,
+				      sizeof(struct rte_lpm_tbl8_entry),
+				      RTE_CACHE_LINE_SIZE, socket_id);
+	if (lpm->tbl8 == NULL) {
+		RTE_LOG(ERR, LPM, "LPM tbl8 group allocation failed\n");
+		rte_free(lpm);
+		lpm = NULL;
+		goto exit;
+	}
 
 	te->data = (void *) lpm;
 
@@ -245,248 +301,237 @@ rte_lpm_free(struct rte_lpm *lpm)
 
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
+	rte_free(lpm->tbl8);
 	rte_free(lpm);
 	rte_free(te);
 }
 
+
 /*
- * Adds a rule to the rule table.
- *
- * NOTE: The rule table is split into 32 groups. Each group contains rules that
- * apply to a specific prefix depth (i.e. group 1 contains rules that apply to
- * prefixes with a depth of 1 etc.). In the following code (depth - 1) is used
- * to refer to depth 1 because even though the depth range is 1 - 32, depths
- * are stored in the rule table from 0 - 31.
- * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
+ * Finds a rule in rule table.
  */
-static inline int32_t
-rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-	uint8_t next_hop)
+static struct rte_lpm_rule *
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth, uint8_t scope)
 {
-	uint32_t rule_gindex, rule_index, last_rule;
-	int i;
-
-	VERIFY_DEPTH(depth);
-
-	/* Scan through rule group to see if rule already exists. */
-	if (lpm->rule_info[depth - 1].used_rules > 0) {
-
-		/* rule_gindex stands for rule group index. */
-		rule_gindex = lpm->rule_info[depth - 1].first_rule;
-		/* Initialise rule_index to point to start of rule group. */
-		rule_index = rule_gindex;
-		/* Last rule = Last used rule in this rule group. */
-		last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
-		for (; rule_index < last_rule; rule_index++) {
+	struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+	struct rte_lpm_rule k = {
+		.ip = ip_masked,
+		.scope = scope,
+	};
 
-			/* If rule already exists update its next_hop and return. */
-			if (lpm->rules_tbl[rule_index].ip == ip_masked) {
-				lpm->rules_tbl[rule_index].next_hop = next_hop;
-
-				return rule_index;
-			}
-		}
-
-		if (rule_index == lpm->max_rules)
-			return -ENOSPC;
-	} else {
-		/* Calculate the position in which the rule will be stored. */
-		rule_index = 0;
+	return RB_FIND(rte_lpm_rules_tree, head, &k);
+}
 
-		for (i = depth - 1; i > 0; i--) {
-			if (lpm->rule_info[i - 1].used_rules > 0) {
-				rule_index = lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules;
-				break;
-			}
-		}
-		if (rule_index == lpm->max_rules)
-			return -ENOSPC;
+/* Finds rule in table in scope order */
+static struct rte_lpm_rule *
+rule_find_any(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+{
+	struct rte_lpm_rule *r;
+	int scope;
 
-		lpm->rule_info[depth - 1].first_rule = rule_index;
+	for (scope = 255; scope >= 0; --scope) {
+		r = rule_find(lpm, ip_masked, depth, scope);
+		if (r)
+			return r;
 	}
 
-	/* Make room for the new rule in the array. */
-	for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
-		if (lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
-			return -ENOSPC;
+	return NULL;
+}
 
-		if (lpm->rule_info[i - 1].used_rules > 0) {
-			lpm->rules_tbl[lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules]
-					= lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
-			lpm->rule_info[i - 1].first_rule++;
-		}
-	}
+/*
+ * Adds a rule to the rule table.
+ *
+ * NOTE: The rule table is split into 32 groups. Each group contains rules that
+ * apply to a specific prefix depth (i.e. group 1 contains rules that apply to
+ * prefixes with a depth of 1 etc.).
+ * NOTE: Valid range for depth parameter is 0 .. 32 inclusive.
+ */
+static struct rte_lpm_rule *
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+	 uint16_t next_hop, uint8_t scope)
+{
+	struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+	struct rte_lpm_rule *r, *old;
 
-	/* Add the new rule. */
-	lpm->rules_tbl[rule_index].ip = ip_masked;
-	lpm->rules_tbl[rule_index].next_hop = next_hop;
+	/*
+	 * NB: uses regular malloc to avoid chewing up precious
+	 *  memory pool space for rules.
+	 */
+	r = malloc(sizeof(*r));
+	if (!r)
+		return NULL;
 
-	/* Increment the used rules counter for this rule group. */
-	lpm->rule_info[depth - 1].used_rules++;
+	r->ip = ip_masked;
+	r->next_hop = next_hop;
+	r->scope = scope;
 
-	return rule_index;
+	old = RB_INSERT(rte_lpm_rules_tree, head, r);
+	if (!old)
+		return r;
+
+	/* collision with existing rule */
+	free(r);
+	return old;
 }
 
 /*
  * Delete a rule from the rule table.
  * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
  */
-static inline void
-rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+static void
+rule_delete(struct rte_lpm *lpm, struct rte_lpm_rule *r, uint8_t depth)
 {
-	int i;
+	struct rte_lpm_rules_tree *head = &lpm->rules[depth];
 
-	VERIFY_DEPTH(depth);
-
-	lpm->rules_tbl[rule_index] = lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
-			+ lpm->rule_info[depth - 1].used_rules - 1];
+	RB_REMOVE(rte_lpm_rules_tree, head, r);
 
-	for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
-		if (lpm->rule_info[i].used_rules > 0) {
-			lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
-					lpm->rules_tbl[lpm->rule_info[i].first_rule + lpm->rule_info[i].used_rules - 1];
-			lpm->rule_info[i].first_rule--;
-		}
-	}
-
-	lpm->rule_info[depth - 1].used_rules--;
+	free(r);
 }
 
 /*
- * Finds a rule in rule table.
- * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
+ * Dynamically increase size of tbl8
  */
-static inline int32_t
-rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+static int
+tbl8_grow(struct rte_lpm *lpm)
 {
-	uint32_t rule_gindex, last_rule, rule_index;
-
-	VERIFY_DEPTH(depth);
+	size_t old_size, new_size;
+	struct rte_lpm_tbl8_entry *new_tbl8;
+
+	/* This should not happen: the worst case is that each /24
+	 * points to its own tbl8 group. */
+	if (lpm->tbl8_num_groups >= RTE_LPM_TBL24_NUM_ENTRIES)
+		rte_panic("LPM: tbl8 grow already at %u",
+			  lpm->tbl8_num_groups);
+
+	old_size = lpm->tbl8_num_groups;
+	new_size = old_size << 1;
+	new_tbl8 = rte_calloc_socket(NULL,
+				     new_size * RTE_LPM_TBL8_GROUP_NUM_ENTRIES,
+				     sizeof(struct rte_lpm_tbl8_entry),
+				     RTE_CACHE_LINE_SIZE,
+				     lpm->socket_id);
+	if (new_tbl8 == NULL) {
+		RTE_LOG(ERR, LPM, "LPM tbl8 group expand allocation failed\n");
+		return -ENOMEM;
+	}
 
-	rule_gindex = lpm->rule_info[depth - 1].first_rule;
-	last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+	memcpy(new_tbl8, lpm->tbl8,
+	       old_size * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+		   * sizeof(struct rte_lpm_tbl8_entry));
 
-	/* Scan used rules at given depth to find rule. */
-	for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
-		/* If rule is found return the rule index. */
-		if (lpm->rules_tbl[rule_index].ip == ip_masked)
-			return rule_index;
-	}
+	/* swap in new table */
+	defer_rcu(rte_free, lpm->tbl8);
+	rcu_assign_pointer(lpm->tbl8, new_tbl8);
+	lpm->tbl8_num_groups = new_size;
 
-	/* If rule is not found return -EINVAL. */
-	return -EINVAL;
+	return 0;
 }
 
 /*
  * Find, clean and allocate a tbl8.
  */
-static inline int32_t
-tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
+static int32_t
+tbl8_alloc(struct rte_lpm *lpm)
 {
 	uint32_t tbl8_gindex; /* tbl8 group index. */
 	struct rte_lpm_tbl8_entry *tbl8_entry;
 
 	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
-	for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
-			tbl8_gindex++) {
-		tbl8_entry = &tbl8[tbl8_gindex *
-		                   RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
+	for (tbl8_gindex = (lpm->tbl8_rover + 1) & (lpm->tbl8_num_groups - 1);
+	     tbl8_gindex != lpm->tbl8_rover;
+	     tbl8_gindex = (tbl8_gindex + 1) & (lpm->tbl8_num_groups - 1)) {
+		tbl8_entry = lpm->tbl8
+			+ tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
 		/* If a free tbl8 group is found clean it and set as VALID. */
-		if (!tbl8_entry->valid_group) {
-			memset(&tbl8_entry[0], 0,
-					RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
-					sizeof(tbl8_entry[0]));
+		if (likely(!tbl8_entry->valid_group))
+			goto found;
+	}
 
-			tbl8_entry->valid_group = VALID;
+	/* Out of space, expand */
+	tbl8_gindex = lpm->tbl8_num_groups;
+	if (tbl8_grow(lpm) < 0)
+		return -ENOSPC;
 
-			/* Return group index for allocated tbl8 group. */
-			return tbl8_gindex;
-		}
-	}
+	tbl8_entry = lpm->tbl8
+		+ tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ found:
+	memset(tbl8_entry, 0,
+	       RTE_LPM_TBL8_GROUP_NUM_ENTRIES * sizeof(tbl8_entry[0]));
+
+	tbl8_entry->valid_group = VALID;
 
-	/* If there are no tbl8 groups free then return error. */
-	return -ENOSPC;
+	/* Remember last slot to start looking there */
+	lpm->tbl8_rover = tbl8_gindex;
+
+	/* Return group index for allocated tbl8 group. */
+	return tbl8_gindex;
 }
 
 static inline void
-tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 {
 	/* Set tbl8 group invalid*/
-	tbl8[tbl8_group_start].valid_group = INVALID;
+	lpm->tbl8[tbl8_group_start].valid_group = INVALID;
 }
 
-static inline int32_t
+static void
 add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+		uint16_t next_hop)
 {
 	uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
+	struct rte_lpm_tbl24_entry new_tbl24_entry = {
+		.valid = VALID,
+		.ext_entry = 0,
+		.depth = depth,
+		{ .next_hop = next_hop, }
+	};
+	struct rte_lpm_tbl8_entry new_tbl8_entry = {
+		.valid_group = VALID,
+		.valid = VALID,
+		.depth = depth,
+		.next_hop = next_hop,
+	};
+
+	/* Ensure the initialisers above complete before the stores below */
+	rte_barrier();
 
 	/* Calculate the index into Table24. */
 	tbl24_index = ip >> 8;
 	tbl24_range = depth_to_range(depth);
-
 	for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
 		/*
 		 * For invalid OR valid and non-extended tbl 24 entries set
 		 * entry.
 		 */
-		if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
-				lpm->tbl24[i].depth <= depth)) {
-
-			struct rte_lpm_tbl24_entry new_tbl24_entry = {
-				{ .next_hop = next_hop, },
-				.valid = VALID,
-				.ext_entry = 0,
-				.depth = depth,
-			};
-
-			/* Setting tbl24 entry in one go to avoid race
-			 * conditions
-			 */
-			lpm->tbl24[i] = new_tbl24_entry;
-
+		if (!lpm->tbl24[i].valid || lpm->tbl24[i].ext_entry == 0) {
+			if (!lpm->tbl24[i].valid ||
+			    lpm->tbl24[i].depth <= depth)
+				lpm->tbl24[i] = new_tbl24_entry;
 			continue;
 		}
 
-		if (lpm->tbl24[i].ext_entry == 1) {
-			/* If tbl24 entry is valid and extended calculate the
-			 *  index into tbl8.
-			 */
-			tbl8_index = lpm->tbl24[i].tbl8_gindex *
-					RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-			tbl8_group_end = tbl8_index +
-					RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
-			for (j = tbl8_index; j < tbl8_group_end; j++) {
-				if (!lpm->tbl8[j].valid ||
-						lpm->tbl8[j].depth <= depth) {
-					struct rte_lpm_tbl8_entry
-						new_tbl8_entry = {
-						.valid = VALID,
-						.valid_group = VALID,
-						.depth = depth,
-						.next_hop = next_hop,
-					};
-
-					/*
-					 * Setting tbl8 entry in one go to avoid
-					 * race conditions
-					 */
-					lpm->tbl8[j] = new_tbl8_entry;
-
-					continue;
-				}
+		/* If the tbl24 entry is valid and extended, calculate the
+		 * index into tbl8. */
+		tbl8_index = lpm->tbl24[i].tbl8_gindex
+			* RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl8_group_end = tbl8_index + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		for (j = tbl8_index; j < tbl8_group_end; j++) {
+			if (!lpm->tbl8[j].valid ||
+			    lpm->tbl8[j].depth <= depth) {
+				/*
+				 * Setting tbl8 entry in one go to avoid race
+				 * conditions
+				 */
+				lpm->tbl8[j] = new_tbl8_entry;
 			}
 		}
 	}
-
-	return 0;
 }
 
-static inline int32_t
+static int32_t
 add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-		uint8_t next_hop)
+		uint16_t next_hop)
 {
 	uint32_t tbl24_index;
 	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -497,12 +542,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 
 	if (!lpm->tbl24[tbl24_index].valid) {
 		/* Search for a free tbl8 group. */
-		tbl8_group_index = tbl8_alloc(lpm->tbl8);
+		tbl8_group_index = tbl8_alloc(lpm);
 
-		/* Check tbl8 allocation was successful. */
-		if (tbl8_group_index < 0) {
+		/* If tbl8 allocation failed, propagate the error. */
+		if (tbl8_group_index < 0)
 			return tbl8_group_index;
-		}
 
 		/* Find index into tbl8 and range. */
 		tbl8_index = (tbl8_group_index *
@@ -510,35 +554,38 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 				(ip_masked & 0xFF);
 
 		/* Set tbl8 entry. */
-		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-			lpm->tbl8[i].depth = depth;
-			lpm->tbl8[i].next_hop = next_hop;
-			lpm->tbl8[i].valid = VALID;
-		}
+		struct rte_lpm_tbl8_entry new_tbl8_entry = {
+			.valid_group = VALID,
+			.valid = VALID,
+			.depth = depth,
+			.next_hop = next_hop,
+		};
+
+		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++)
+			lpm->tbl8[i] = new_tbl8_entry;
 
 		/*
 		 * Update tbl24 entry to point to new tbl8 entry. Note: The
 		 * ext_flag and tbl8_index need to be updated simultaneously,
 		 * so assign whole structure in one go
 		 */
-
 		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{ .tbl8_gindex = (uint8_t)tbl8_group_index, },
 			.valid = VALID,
 			.ext_entry = 1,
 			.depth = 0,
+			{ .tbl8_gindex = tbl8_group_index, }
 		};
 
+		rte_barrier();
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
-
-	}/* If valid entry but not extended calculate the index into Table8. */
+	}
+	/* If valid entry but not extended calculate the index into Table8. */
 	else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
 		/* Search for free tbl8 group. */
-		tbl8_group_index = tbl8_alloc(lpm->tbl8);
+		tbl8_group_index = tbl8_alloc(lpm);
 
-		if (tbl8_group_index < 0) {
+		if (tbl8_group_index < 0)
 			return tbl8_group_index;
-		}
 
 		tbl8_group_start = tbl8_group_index *
 				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -546,69 +593,68 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 
 		/* Populate new tbl8 with tbl24 value. */
-		for (i = tbl8_group_start; i < tbl8_group_end; i++) {
-			lpm->tbl8[i].valid = VALID;
-			lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
-			lpm->tbl8[i].next_hop =
-					lpm->tbl24[tbl24_index].next_hop;
-		}
+		struct rte_lpm_tbl8_entry new_tbl8_entry = {
+			.valid_group = VALID,
+			.valid = VALID,
+			.depth = lpm->tbl24[tbl24_index].depth,
+			.next_hop = lpm->tbl24[tbl24_index].next_hop,
+		};
+
+		for (i = tbl8_group_start; i < tbl8_group_end; i++)
+			lpm->tbl8[i] = new_tbl8_entry;
 
 		tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
 
-		/* Insert new rule into the tbl8 entry. */
-		for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
-			if (!lpm->tbl8[i].valid ||
-					lpm->tbl8[i].depth <= depth) {
-				lpm->tbl8[i].valid = VALID;
-				lpm->tbl8[i].depth = depth;
-				lpm->tbl8[i].next_hop = next_hop;
-
-				continue;
-			}
-		}
+		/* Insert new specific rule into the tbl8 entry. */
+		new_tbl8_entry.depth = depth;
+		new_tbl8_entry.next_hop = next_hop;
+		for (i = tbl8_index; i < tbl8_index + tbl8_range; i++)
+			lpm->tbl8[i] = new_tbl8_entry;
 
 		/*
 		 * Update tbl24 entry to point to new tbl8 entry. Note: The
 		 * ext_flag and tbl8_index need to be updated simultaneously,
 		 * so assign whole structure in one go.
 		 */
-
 		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-				{ .tbl8_gindex = (uint8_t)tbl8_group_index, },
 				.valid = VALID,
 				.ext_entry = 1,
 				.depth = 0,
+				{ .tbl8_gindex = tbl8_group_index, }
 		};
 
+		/*
+		 * Ensure the compiler does not reorder the stores so that
+		 * tbl24 is updated before tbl8.
+		 */
+		rte_barrier();
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
 
-	}
-	else { /*
-		* If it is valid, extended entry calculate the index into tbl8.
-		*/
+	} else {
+		/*
+		 * If the entry is valid and extended, calculate the index into tbl8.
+		 */
+		struct rte_lpm_tbl8_entry new_tbl8_entry = {
+			.valid_group = VALID,
+			.valid = VALID,
+			.depth = depth,
+			.next_hop = next_hop,
+		};
+		rte_barrier();
+
 		tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
 		tbl8_group_start = tbl8_group_index *
 				RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
 
 		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
 			if (!lpm->tbl8[i].valid ||
-					lpm->tbl8[i].depth <= depth) {
-				struct rte_lpm_tbl8_entry new_tbl8_entry = {
-					.valid = VALID,
-					.depth = depth,
-					.next_hop = next_hop,
-					.valid_group = lpm->tbl8[i].valid_group,
-				};
-
+			    lpm->tbl8[i].depth <= depth) {
 				/*
 				 * Setting tbl8 entry in one go to avoid race
 				 * condition
 				 */
 				lpm->tbl8[i] = new_tbl8_entry;
-
-				continue;
 			}
 		}
 	}
@@ -621,38 +667,32 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
  */
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+	    uint16_t next_hop, uint8_t scope)
 {
-	int32_t rule_index, status = 0;
-	uint32_t ip_masked;
+	struct rte_lpm_rule *rule;
+	uint32_t ip_masked = (ip & depth_to_mask(depth));
 
 	/* Check user arguments. */
-	if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+	if ((lpm == NULL) || (depth >= RTE_LPM_MAX_DEPTH))
 		return -EINVAL;
 
-	ip_masked = ip & depth_to_mask(depth);
-
 	/* Add the rule to the rule table. */
-	rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+	rule = rule_add(lpm, ip_masked, depth, next_hop, scope);
 
 	/* If there is no space available for a new rule, return an error. */
-	if (rule_index < 0) {
-		return rule_index;
-	}
-
-	if (depth <= MAX_DEPTH_TBL24) {
-		status = add_depth_small(lpm, ip_masked, depth, next_hop);
-	}
-	else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
-		status = add_depth_big(lpm, ip_masked, depth, next_hop);
+	if (rule == NULL)
+		return -ENOSPC;
 
+	if (depth <= MAX_DEPTH_TBL24)
+		add_depth_small(lpm, ip_masked, depth, next_hop);
+	else {
 		/*
 		 * If add fails due to exhaustion of tbl8 extensions delete
 		 * rule that was added to rule table.
 		 */
+		int status = add_depth_big(lpm, ip_masked, depth, next_hop);
 		if (status < 0) {
-			rule_delete(lpm, rule_index, depth);
-
+			rule_delete(lpm, rule, depth);
 			return status;
 		}
 	}
@@ -665,10 +705,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+			uint16_t *next_hop, uint8_t scope)
 {
 	uint32_t ip_masked;
-	int32_t rule_index;
+	struct rte_lpm_rule *rule;
 
 	/* Check user arguments. */
 	if ((lpm == NULL) ||
@@ -678,10 +718,10 @@ uint8_t *next_hop)
 
 	/* Look for the rule using rule_find. */
 	ip_masked = ip & depth_to_mask(depth);
-	rule_index = rule_find(lpm, ip_masked, depth);
+	rule = rule_find(lpm, ip_masked, depth, scope);
 
-	if (rule_index >= 0) {
-		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+	if (rule != NULL) {
+		*next_hop = rule->next_hop;
 		return 1;
 	}
 
@@ -689,30 +729,29 @@ uint8_t *next_hop)
 	return 0;
 }
 
-static inline int32_t
-find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
+static struct rte_lpm_rule *
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+		   uint8_t *sub_rule_depth)
 {
-	int32_t rule_index;
+	struct rte_lpm_rule *rule;
 	uint32_t ip_masked;
-	uint8_t prev_depth;
+	int prev_depth;
 
-	for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
+	for (prev_depth = depth - 1; prev_depth >= 0; prev_depth--) {
 		ip_masked = ip & depth_to_mask(prev_depth);
-
-		rule_index = rule_find(lpm, ip_masked, prev_depth);
-
-		if (rule_index >= 0) {
+		rule = rule_find_any(lpm, ip_masked, prev_depth);
+		if (rule) {
 			*sub_rule_depth = prev_depth;
-			return rule_index;
+			return rule;
 		}
 	}
 
-	return -1;
+	return NULL;
 }
 
-static inline int32_t
-delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
-	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+static void
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+		   struct rte_lpm_rule *sub_rule, uint8_t new_depth)
 {
 	uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
 
@@ -720,28 +759,22 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 	tbl24_range = depth_to_range(depth);
 	tbl24_index = (ip_masked >> 8);
 
-	/*
-	 * Firstly check the sub_rule_index. A -1 indicates no replacement rule
-	 * and a positive number indicates a sub_rule_index.
-	 */
-	if (sub_rule_index < 0) {
+	/* Firstly check the sub_rule. */
+	if (sub_rule == NULL) {
 		/*
 		 * If no replacement rule exists then invalidate entries
 		 * associated with this rule.
 		 */
 		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
-			if (lpm->tbl24[i].ext_entry == 0 &&
-					lpm->tbl24[i].depth <= depth ) {
-				lpm->tbl24[i].valid = INVALID;
-			}
-			else {
+			if (lpm->tbl24[i].ext_entry == 0) {
+				if (lpm->tbl24[i].depth <= depth)
+					lpm->tbl24[i].valid = INVALID;
+			} else {
 				/*
 				 * If TBL24 entry is extended, then there has
 				 * to be a rule with depth >= 25 in the
 				 * associated TBL8 group.
 				 */
-
 				tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
 				tbl8_index = tbl8_group_index *
 						RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -749,60 +782,54 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 				for (j = tbl8_index; j < (tbl8_index +
 					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
 
-					if (lpm->tbl8[j].depth <= depth)
+					if (lpm->tbl8[j].valid &&
+					    lpm->tbl8[j].depth <= depth)
 						lpm->tbl8[j].valid = INVALID;
 				}
 			}
 		}
-	}
-	else {
+	} else {
 		/*
 		 * If a replacement rule exists then modify entries
 		 * associated with this rule.
 		 */
-
 		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,},
 			.valid = VALID,
 			.ext_entry = 0,
-			.depth = sub_rule_depth,
+			.depth = new_depth,
+			{ .next_hop = sub_rule->next_hop, }
 		};
 
 		struct rte_lpm_tbl8_entry new_tbl8_entry = {
+			.valid_group = VALID,
 			.valid = VALID,
-			.depth = sub_rule_depth,
-			.next_hop = lpm->rules_tbl
-			[sub_rule_index].next_hop,
+			.depth = new_depth,
+			.next_hop = sub_rule->next_hop,
 		};
 
 		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
-			if (lpm->tbl24[i].ext_entry == 0 &&
-					lpm->tbl24[i].depth <= depth ) {
-				lpm->tbl24[i] = new_tbl24_entry;
-			}
-			else {
+			if (lpm->tbl24[i].ext_entry == 0) {
+				if (lpm->tbl24[i].depth <= depth)
+					lpm->tbl24[i] = new_tbl24_entry;
+			} else {
 				/*
 				 * If TBL24 entry is extended, then there has
 				 * to be a rule with depth >= 25 in the
 				 * associated TBL8 group.
 				 */
-
 				tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
 				tbl8_index = tbl8_group_index *
 						RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 
 				for (j = tbl8_index; j < (tbl8_index +
 					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
-					if (lpm->tbl8[j].depth <= depth)
+					if (!lpm->tbl8[j].valid ||
+					    lpm->tbl8[j].depth <= depth)
 						lpm->tbl8[j] = new_tbl8_entry;
 				}
 			}
 		}
 	}
-
-	return 0;
 }
 
 /*
@@ -813,8 +840,9 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
  * Return of value > -1 means tbl8 is in use but has all the same values and
  * thus can be recycled
  */
-static inline int32_t
-tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+static int32_t
+tbl8_recycle_check(const struct rte_lpm_tbl8_entry *tbl8,
+		   uint32_t tbl8_group_start)
 {
 	uint32_t tbl8_group_end, i;
 	tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -855,13 +883,14 @@ tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 		if (tbl8[i].valid)
 			return -EEXIST;
 	}
+
 	/* If no valid entries are found then return -EINVAL. */
 	return -EINVAL;
 }
 
-static inline int32_t
-delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
-	uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+static void
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+		 struct rte_lpm_rule *sub_rule, uint8_t new_depth)
 {
 	uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
 			tbl8_range, i;
@@ -879,23 +908,22 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
 	tbl8_range = depth_to_range(depth);
 
-	if (sub_rule_index < 0) {
+	if (sub_rule == NULL) {
 		/*
 		 * Loop through the range of entries on tbl8 for which the
 		 * rule_to_delete must be removed or modified.
 		 */
 		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-			if (lpm->tbl8[i].depth <= depth)
+			if (lpm->tbl8[i].valid && lpm->tbl8[i].depth <= depth)
 				lpm->tbl8[i].valid = INVALID;
 		}
-	}
-	else {
+	} else {
 		/* Set new tbl8 entry. */
 		struct rte_lpm_tbl8_entry new_tbl8_entry = {
+			.valid_group = VALID,
 			.valid = VALID,
-			.depth = sub_rule_depth,
-			.valid_group = lpm->tbl8[tbl8_group_start].valid_group,
-			.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+			.depth = new_depth,
+			.next_hop = sub_rule->next_hop,
 		};
 
 		/*
@@ -903,7 +931,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 		 * rule_to_delete must be modified.
 		 */
 		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-			if (lpm->tbl8[i].depth <= depth)
+			if (!lpm->tbl8[i].valid || lpm->tbl8[i].depth <= depth)
 				lpm->tbl8[i] = new_tbl8_entry;
 		}
 	}
@@ -915,100 +943,158 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	 */
 
 	tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
-
-	if (tbl8_recycle_index == -EINVAL){
+	if (tbl8_recycle_index == -EINVAL) {
 		/* Set tbl24 before freeing tbl8 to avoid race condition. */
 		lpm->tbl24[tbl24_index].valid = 0;
-		tbl8_free(lpm->tbl8, tbl8_group_start);
-	}
-	else if (tbl8_recycle_index > -1) {
+		rte_barrier();
+		tbl8_free(lpm, tbl8_group_start);
+	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
 		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{ .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, },
 			.valid = VALID,
 			.ext_entry = 0,
 			.depth = lpm->tbl8[tbl8_recycle_index].depth,
+			{ .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, }
 		};
 
 		/* Set tbl24 before freeing tbl8 to avoid race condition. */
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
-		tbl8_free(lpm->tbl8, tbl8_group_start);
+		rte_barrier();
+		tbl8_free(lpm, tbl8_group_start);
 	}
+}
 
-	return 0;
+/*
+ * Find a rule to replace the one just deleted. If no replacement
+ * rule exists, invalidate the table entries associated with the
+ * deleted rule.
+ */
+static void rule_replace(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+{
+	uint32_t ip_masked;
+	struct rte_lpm_rule *sub_rule;
+	uint8_t sub_depth = 0;
+
+	ip_masked = ip & depth_to_mask(depth);
+	sub_rule = find_previous_rule(lpm, ip, depth, &sub_depth);
+
+	/*
+	 * If the input depth value is less than 25 use function
+	 * delete_depth_small otherwise use delete_depth_big.
+	 */
+	if (depth <= MAX_DEPTH_TBL24)
+		delete_depth_small(lpm, ip_masked, depth, sub_rule, sub_depth);
+	else
+		delete_depth_big(lpm, ip_masked, depth, sub_rule, sub_depth);
 }
 
 /*
  * Deletes a rule
  */
 int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+	       uint16_t *next_hop, uint8_t scope)
 {
-	int32_t rule_to_delete_index, sub_rule_index;
+	struct rte_lpm_rule *rule;
 	uint32_t ip_masked;
-	uint8_t sub_rule_depth;
+
 	/*
 	 * Check input arguments. Note: IP must be a positive integer of 32
 	 * bits in length therefore it need not be checked.
 	 */
-	if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
+	if ((lpm == NULL) || (depth >= RTE_LPM_MAX_DEPTH))
 		return -EINVAL;
-	}
 
 	ip_masked = ip & depth_to_mask(depth);
 
 	/*
-	 * Find the index of the input rule, that needs to be deleted, in the
+	 * Find the input rule, that needs to be deleted, in the
 	 * rule table.
 	 */
-	rule_to_delete_index = rule_find(lpm, ip_masked, depth);
+	rule = rule_find(lpm, ip_masked, depth, scope);
 
 	/*
 	 * Check if rule_to_delete_index was found. If no rule was found the
-	 * function rule_find returns -EINVAL.
+	 * function rule_find returns NULL.
 	 */
-	if (rule_to_delete_index < 0)
+	if (rule == NULL)
 		return -EINVAL;
 
-	/* Delete the rule from the rule table. */
-	rule_delete(lpm, rule_to_delete_index, depth);
-
 	/*
-	 * Find rule to replace the rule_to_delete. If there is no rule to
-	 * replace the rule_to_delete we return -1 and invalidate the table
-	 * entries associated with this rule.
+	 * Return the next hop so the caller can avoid a lookup.
 	 */
-	sub_rule_depth = 0;
-	sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
+	if (next_hop)
+		*next_hop = rule->next_hop;
 
-	/*
-	 * If the input depth value is less than 25 use function
-	 * delete_depth_small otherwise use delete_depth_big.
-	 */
-	if (depth <= MAX_DEPTH_TBL24) {
-		return delete_depth_small(lpm, ip_masked, depth,
-				sub_rule_index, sub_rule_depth);
-	}
-	else { /* If depth > MAX_DEPTH_TBL24 */
-		return delete_depth_big(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
-	}
+	/* Delete the rule from the rule table. */
+	rule_delete(lpm, rule, depth);
+
+	/* Replace with next level up rule */
+	rule_replace(lpm, ip, depth);
+
+	return 0;
 }
 
 /*
  * Delete all rules from the LPM table.
  */
 void
-rte_lpm_delete_all(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg)
 {
-	/* Zero rule information. */
-	memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+	uint8_t depth;
 
 	/* Zero tbl24. */
 	memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
 
 	/* Zero tbl8. */
-	memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
+	memset(lpm->tbl8, 0,
+	       lpm->tbl8_num_groups * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+		   * sizeof(struct rte_lpm_tbl8_entry));
+	lpm->tbl8_rover = lpm->tbl8_num_groups - 1;
 
 	/* Delete all rules from the rules table. */
-	memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+	for (depth = 0; depth < RTE_LPM_MAX_DEPTH; ++depth) {
+		struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+		struct rte_lpm_rule *r, *n;
+
+		RB_FOREACH_SAFE(r, rte_lpm_rules_tree, head, n) {
+			if (func)
+				func(lpm, r->ip, depth, r->scope,
+				     r->next_hop, arg);
+			rule_delete(lpm, r, depth);
+		}
+	}
+}
+
+/*
+ * Iterate over LPM rules
+ */
+void
+rte_lpm_walk(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg)
+{
+	uint8_t depth;
+
+	for (depth = 0; depth < RTE_LPM_MAX_DEPTH; depth++) {
+		struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+		struct rte_lpm_rule *r, *n;
+
+		RB_FOREACH_SAFE(r, rte_lpm_rules_tree, head, n) {
+			func(lpm, r->ip, depth, r->scope, r->next_hop, arg);
+		}
+	}
+}
+
+/* Count usage of tbl8 */
+unsigned
+rte_lpm_tbl8_count(const struct rte_lpm *lpm)
+{
+	unsigned i, count = 0;
+
+	for (i = 0; i < lpm->tbl8_num_groups; i++) {
+		const struct rte_lpm_tbl8_entry *tbl8_entry
+			= lpm->tbl8 + i * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		if (tbl8_entry->valid_group)
+			++count;
+	}
+	return count;
 }
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..a39e3b5 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -2,6 +2,7 @@
  *   BSD LICENSE
  *
  *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2012-2015 Brocade Communications Systems
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -43,11 +44,9 @@
 #include <sys/queue.h>
 #include <stdint.h>
 #include <stdlib.h>
+#include <bsd/sys/tree.h>
 #include <rte_branch_prediction.h>
-#include <rte_byteorder.h>
 #include <rte_memory.h>
-#include <rte_common.h>
-#include <rte_vect.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -55,130 +54,89 @@ extern "C" {
 
 /** Max number of characters in LPM name. */
 #define RTE_LPM_NAMESIZE                32
+
+/** Maximum depth value possible for IPv4 LPM. */
+#define RTE_LPM_MAX_DEPTH               33
+
+/** Total number of tbl24 entries. */
+#define RTE_LPM_TBL24_NUM_ENTRIES (1 << 24)
 
-/** Maximum depth value possible for IPv4 LPM. */
-#define RTE_LPM_MAX_DEPTH               32
+/** Number of entries in a tbl8 group. */
+#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES 256
 
-/** @internal Total number of tbl24 entries. */
-#define RTE_LPM_TBL24_NUM_ENTRIES       (1 << 24)
-
-/** @internal Number of entries in a tbl8 group. */
-#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES  256
-
-/** @internal Total number of tbl8 groups in the tbl8. */
-#define RTE_LPM_TBL8_NUM_GROUPS         256
-
-/** @internal Total number of tbl8 entries. */
-#define RTE_LPM_TBL8_NUM_ENTRIES        (RTE_LPM_TBL8_NUM_GROUPS * \
-					RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
-
-/** @internal Macro to enable/disable run-time checks. */
-#if defined(RTE_LIBRTE_LPM_DEBUG)
-#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
-	if (cond) return (retval);                \
-} while (0)
-#else
-#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
-#endif
-
-/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
-
-/** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS          0x0100
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-/** @internal Tbl24 entry structure. */
+/** Tbl24 entry structure. */
 struct rte_lpm_tbl24_entry {
+	/* valid and ext_entry flags share a single byte; depth has its own. */
+	uint8_t valid       :1; /**< Validation flag. */
+	uint8_t ext_entry   :1; /**< External entry flag. */
+	uint8_t depth;	      /**< Rule depth. */
 	/* Stores Next hop or group index (i.e. gindex)into tbl8. */
 	union {
-		uint8_t next_hop;
-		uint8_t tbl8_gindex;
+		uint16_t next_hop;
+		uint16_t tbl8_gindex;
 	};
-	/* Using single uint8_t to store 3 values. */
-	uint8_t valid     :1; /**< Validation flag. */
-	uint8_t ext_entry :1; /**< External entry. */
-	uint8_t depth     :6; /**< Rule depth. */
 };
 
-/** @internal Tbl8 entry structure. */
+/** Tbl8 entry structure. */
 struct rte_lpm_tbl8_entry {
-	uint8_t next_hop; /**< next hop. */
-	/* Using single uint8_t to store 3 values. */
+	uint16_t next_hop;	/**< next hop. */
+	uint8_t  depth;	/**< Rule depth. */
 	uint8_t valid       :1; /**< Validation flag. */
 	uint8_t valid_group :1; /**< Group validation flag. */
-	uint8_t depth       :6; /**< Rule depth. */
-};
-#else
-struct rte_lpm_tbl24_entry {
-	uint8_t depth       :6;
-	uint8_t ext_entry   :1;
-	uint8_t valid       :1;
-	union {
-		uint8_t tbl8_gindex;
-		uint8_t next_hop;
-	};
-};
-
-struct rte_lpm_tbl8_entry {
-	uint8_t depth       :6;
-	uint8_t valid_group :1;
-	uint8_t valid       :1;
-	uint8_t next_hop;
-};
-#endif
-
-/** @internal Rule structure. */
-struct rte_lpm_rule {
-	uint32_t ip; /**< Rule IP address. */
-	uint8_t  next_hop; /**< Rule next hop. */
-};
-
-/** @internal Contains metadata about the rules table. */
-struct rte_lpm_rule_info {
-	uint32_t used_rules; /**< Used rules so far. */
-	uint32_t first_rule; /**< Indexes the first rule of a given depth. */
 };
 
 /** @internal LPM structure. */
 struct rte_lpm {
+	TAILQ_ENTRY(rte_lpm) next;      /**< Next in list. */
+
 	/* LPM metadata. */
-	char name[RTE_LPM_NAMESIZE];        /**< Name of the lpm. */
-	uint32_t max_rules; /**< Max. balanced rules per lpm. */
-	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+	char name[RTE_LPM_NAMESIZE];    /**< Name of the lpm. */
+
+	/* LPM rules. */
+	int socket_id;		/**< socket to allocate rules on */
+	RB_HEAD(rte_lpm_rules_tree, rte_lpm_rule) rules[RTE_LPM_MAX_DEPTH];
 
 	/* LPM Tables. */
-	struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+	uint32_t tbl8_num_groups;		/* Number of slots */
+	uint32_t tbl8_rover;			/* Next slot to check */
+	struct rte_lpm_tbl8_entry *tbl8;	/* Actual table */
+
+	struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
 			__rte_cache_aligned; /**< LPM tbl24 table. */
-	struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
-			__rte_cache_aligned; /**< LPM tbl8 table. */
-	struct rte_lpm_rule rules_tbl[0] \
-			__rte_cache_aligned; /**< LPM rules. */
 };
 
 /**
+ * Compiler memory barrier.
+ *
+ * Protects against compiler optimization of ordered operations.
+ */
+#ifdef __GNUC__
+#define rte_barrier() asm volatile("": : :"memory")
+#else
+/* Intel compiler has intrinsic for this. */
+#define rte_barrier() __memory_barrier()
+#endif
+
+/**
  * Create an LPM object.
  *
  * @param name
  *   LPM object name
  * @param socket_id
  *   NUMA socket ID for LPM table memory allocation
- * @param max_rules
- *   Maximum number of LPM rules that can be added
- * @param flags
- *   This parameter is currently unused
  * @return
  *   Handle to LPM object on success, NULL otherwise with rte_errno set
  *   to an appropriate values. Possible rte_errno values include:
  *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
  *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be obtained for the lpm object list
  *    - EINVAL - invalid parameter passed to function
  *    - ENOSPC - the maximum number of memzones has already been allocated
  *    - EEXIST - a memzone with the same name already exists
  *    - ENOMEM - no appropriate memory area found in which to create memzone
  */
 struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
+rte_lpm_create(const char *name, int socket_id);
 
 /**
  * Find an existing LPM object and return a pointer to it.
@@ -215,11 +173,14 @@ rte_lpm_free(struct rte_lpm *lpm);
  *   Depth of the rule to be added to the LPM table
  * @param next_hop
  *   Next hop of the rule to be added to the LPM table
+ * @param scope
+ *   Priority scope of this route rule
  * @return
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+	    uint16_t next_hop, uint8_t scope);
 
 /**
  * Check if a rule is present in the LPM table,
@@ -231,6 +192,8 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  *   IP of the rule to be searched
  * @param depth
  *   Depth of the rule to searched
+ * @param scope
+ *   Priority scope of the rule
  * @param next_hop
  *   Next hop of the rule (valid only if it is found)
  * @return
@@ -238,7 +201,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+			uint16_t *next_hop, uint8_t scope);
 
 /**
  * Delete a rule from the LPM table.
@@ -249,20 +212,30 @@ uint8_t *next_hop);
  *   IP of the rule to be deleted from the LPM table
  * @param depth
  *   Depth of the rule to be deleted from the LPM table
+ * @param scope
+ *   Priority scope of this route rule
  * @return
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+	       uint16_t *next_hop, uint8_t scope);
+
+/** iterator function for LPM rule */
+typedef void (*rte_lpm_walk_func_t)(struct rte_lpm *lpm,
+				    uint32_t ip, uint8_t depth, uint8_t scope,
+				    uint16_t next_hop, void *arg);
 
 /**
  * Delete all rules from the LPM table.
  *
  * @param lpm
  *   LPM object handle
+ * @param func
+ *   Optional callback for each entry
  */
 void
-rte_lpm_delete_all(struct rte_lpm *lpm);
+rte_lpm_delete_all(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg);
 
 /**
  * Lookup an IP into the LPM table.
@@ -277,200 +250,80 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
 static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint16_t *next_hop)
 {
-	unsigned tbl24_index = (ip >> 8);
-	uint16_t tbl_entry;
+	struct rte_lpm_tbl24_entry tbl24;
+	struct rte_lpm_tbl8_entry tbl8;
 
-	/* DEBUG: Check user input arguments. */
-	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+	/* Copy tbl24 entry (to avoid concurrency issues) */
+	tbl24 = lpm->tbl24[ip >> 8];
+	rte_barrier();
 
-	/* Copy tbl24 entry */
-	tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
+	/*
+	 * Check whether the copied tbl24 entry is INVALID; if so,
+	 * return -ENOENT.
+	 */
+	if (unlikely(!tbl24.valid))
+		return -ENOENT; /* Lookup miss. */
 
-	/* Copy tbl8 entry (only if needed) */
-	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+	/*
+	 * If the tbl24 entry is valid, check whether it is NOT extended
+	 * (i.e. it does not use a tbl8 extension); if so, return the next hop.
+	 */
+	if (tbl24.ext_entry == 0) {
+		*next_hop = tbl24.next_hop;
+		return 0; /* Lookup hit. */
+	}
 
-		unsigned tbl8_index = (uint8_t)ip +
-				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+	/*
+	 * If tbl24 entry is valid and extended calculate the index into the
+	 * tbl8 entry.
+	 */
+	tbl8 = lpm->tbl8[tbl24.tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+			 + (ip & 0xFF)];
+	rte_barrier();
 
-		tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
-	}
+	/* Check if the tbl8 entry is invalid and if so return -ENOENT. */
+	if (unlikely(!tbl8.valid))
+		return -ENOENT; /* Lookup miss. */
 
-	*next_hop = (uint8_t)tbl_entry;
-	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+	/* If the tbl8 entry is valid, return the next_hop. */
+	*next_hop = tbl8.next_hop;
+	return 0; /* Lookup hit. */
 }
 
 /**
- * Lookup multiple IP addresses in an LPM table. This may be implemented as a
- * macro, so the address of the function should not be used.
+ * Iterate over all rules in the LPM table.
  *
  * @param lpm
  *   LPM object handle
- * @param ips
- *   Array of IPs to be looked up in the LPM table
- * @param next_hops
- *   Next hop of the most specific rule found for IP (valid on lookup hit only).
- *   This is an array of two byte values. The most significant byte in each
- *   value says whether the lookup was successful (bitmask
- *   RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
- *   actual next hop.
- * @param n
- *   Number of elements in ips (and next_hops) array to lookup. This should be a
- *   compile time constant, and divisible by 8 for best performance.
- *  @return
- *   -EINVAL for incorrect arguments, otherwise 0
+ * @param func
+ *   Callback invoked for each rule
+ * @param arg
+ *   Argument passed to iterator
  */
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
-		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
-
-static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
-		uint16_t * next_hops, const unsigned n)
-{
-	unsigned i;
-	unsigned tbl24_indexes[n];
-
-	/* DEBUG: Check user input arguments. */
-	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
-			(next_hops == NULL)), -EINVAL);
-
-	for (i = 0; i < n; i++) {
-		tbl24_indexes[i] = ips[i] >> 8;
-	}
-
-	for (i = 0; i < n; i++) {
-		/* Simply copy tbl24 entry to output */
-		next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
-		/* Overwrite output with tbl8 entry if needed */
-		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
-			unsigned tbl8_index = (uint8_t)ips[i] +
-					((uint8_t)next_hops[i] *
-					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
-			next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
-		}
-	}
-	return 0;
-}
+void
+rte_lpm_walk(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg);
 
-/* Mask four results. */
-#define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
+/**
+ * Return the number of tbl8 groups currently in use
+ *
+ * @param lpm
+ *   LPM object handle
+ */
+unsigned
+rte_lpm_tbl8_count(const struct rte_lpm *lpm);
 
 /**
- * Lookup four IP addresses in an LPM table.
+ * Return the number of free tbl8 groups
  *
  * @param lpm
  *   LPM object handle
- * @param ip
- *   Four IPs to be looked up in the LPM table
- * @param hop
- *   Next hop of the most specific rule found for IP (valid on lookup hit only).
- *   This is an 4 elements array of two byte values.
- *   If the lookup was succesfull for the given IP, then least significant byte
- *   of the corresponding element is the  actual next hop and the most
- *   significant byte is zero.
- *   If the lookup for the given IP failed, then corresponding element would
- *   contain default value, see description of then next parameter.
- * @param defv
- *   Default value to populate into corresponding element of hop[] array,
- *   if lookup would fail.
  */
-static inline void
-rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
-	uint16_t defv)
+static inline unsigned
+rte_lpm_tbl8_free_count(const struct rte_lpm *lpm)
 {
-	__m128i i24;
-	rte_xmm_t i8;
-	uint16_t tbl[4];
-	uint64_t idx, pt;
-
-	const __m128i mask8 =
-		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
-
-	/*
-	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
-	 * as one 64-bit value (0x0300030003000300).
-	 */
-	const uint64_t mask_xv =
-		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
-
-	/*
-	 * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
-	 * as one 64-bit value (0x0100010001000100).
-	 */
-	const uint64_t mask_v =
-		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
-
-	/* get 4 indexes for tbl24[]. */
-	i24 = _mm_srli_epi32(ip, CHAR_BIT);
-
-	/* extract values from tbl24[] */
-	idx = _mm_cvtsi128_si64(i24);
-	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
-
-	tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
-	tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
-	idx = _mm_cvtsi128_si64(i24);
-
-	tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
-	tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
-	/* get 4 indexes for tbl8[]. */
-	i8.x = _mm_and_si128(ip, mask8);
-
-	pt = (uint64_t)tbl[0] |
-		(uint64_t)tbl[1] << 16 |
-		(uint64_t)tbl[2] << 32 |
-		(uint64_t)tbl[3] << 48;
-
-	/* search successfully finished for all 4 IP addresses. */
-	if (likely((pt & mask_xv) == mask_v)) {
-		uintptr_t ph = (uintptr_t)hop;
-		*(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
-		return;
-	}
-
-	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
-	}
-	if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
-	}
-	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
-	}
-	if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
-	}
-
-	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
-	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
-	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
-	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
+	return lpm->tbl8_num_groups - rte_lpm_tbl8_count(lpm);
 }
 
 #ifdef __cplusplus
-- 
2.1.4

^ permalink raw reply	[flat|nested] 24+ messages in thread

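For orientation, here is a minimal usage sketch of the reworked API from
the patch above. The addresses and next-hop values are hypothetical, and
an initialised EAL plus the patched librte_lpm are assumed:

#include <stdio.h>
#include <stdint.h>
#include <rte_lpm.h>

static void lpm_example(void)
{
	struct rte_lpm *lpm;
	uint16_t next_hop;
	uint32_t prefix = 10u << 24;            /* 10.0.0.0 (host order) */
	uint32_t addr = (10u << 24) | 0x0102;   /* 10.0.1.2 */

	/* No max_rules argument any more; rules grow dynamically. */
	lpm = rte_lpm_create("example", SOCKET_ID_ANY);
	if (lpm == NULL)
		return;

	/* 10.0.0.0/8 -> next hop 1000, scope 0 (next hop is 16 bits now). */
	if (rte_lpm_add(lpm, prefix, 8, 1000, 0) < 0)
		goto out;

	/* Longest-prefix match. */
	if (rte_lpm_lookup(lpm, addr, &next_hop) == 0)
		printf("next hop %u\n", next_hop);

	/* Delete also hands back the removed next hop. */
	rte_lpm_delete(lpm, prefix, 8, &next_hop, 0);
out:
	rte_lpm_free(lpm);
}
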
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-23 16:20 ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
  2015-10-23 16:33   ` Stephen Hemminger
@ 2015-10-24  6:09   ` Matthew Hall
  2015-10-25 17:52     ` Vladimir Medvedkin
  2015-10-26 12:13     ` Jastrzebski, MichalX K
  1 sibling, 2 replies; 24+ messages in thread
From: Matthew Hall @ 2015-10-24  6:09 UTC (permalink / raw)
  To: Michal Jastrzebski, Michal Kobylinski; +Cc: dev

[-- Attachment #1: Type: text/plain, Size: 1489 bytes --]

On 10/23/15 9:20 AM, Matthew Hall wrote:
> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
>> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
>>
>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
>> number of next hops to 256, as the next hop ID is an 8-bit long field.
>> Proposed extension increase number of next hops for IPv4 to 2^24 and
>> also allows 32-bits read/write operations.
>>
>> This patchset requires additional change to rte_table library to meet
>> ABI compatibility requirements. A v2 will be sent next week.
>
> I also have a patchset for this.
>
> I will send it out as well so we could compare.
>
> Matthew.

Sorry about the delay; I only work on DPDK in personal time and not as 
part of a job. My patchset is attached to this email.

One possible advantage of my patchset, compared to the others, is that 
the space problem is fixed in both IPv4 and IPv6, preventing asymmetry 
between the two address families, which is something I try to avoid as 
much as humanly possible.

This is because my application code is green-field, so I absolutely 
don't want to put any ugly hacks or incompatibilities in this code if I 
can possibly avoid it.

Otherwise, I am not necessarily as expert on rte_lpm as some of the 
full-time guys, but I think that with four or five of us in the thread 
hammering out patches we will be able to create something amazing 
together, and I am very happy about this.

Matthew.
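
Since the core of the attached series is squeezing a 24-bit next hop plus
flag bits into a single 32-bit table entry, here is a stand-alone sketch
of that layout. Field names mirror the first attached patch; whether the
packed union really keeps the entry at 4 bytes is compiler-dependent,
which is exactly the assumption the illustrative static assertion checks:

#include <stdint.h>

/* 24-bit next hop or tbl8 group index plus flags in one 32-bit word
 * (little-endian layout, mirroring the attached patch). */
struct tbl24_entry {
	union {
		uint32_t next_hop    :24;
		uint32_t tbl8_gindex :24;
	} __attribute__((__packed__));
	uint32_t valid     :1;  /* bit 24 */
	uint32_t ext_entry :1;  /* bit 25 */
	uint32_t depth     :6;  /* bits 26-31 */
};

/* The entry must still be readable/writable as one 32-bit load/store;
 * this assertion makes that compiler-dependent assumption explicit. */
_Static_assert(sizeof(struct tbl24_entry) == 4,
	       "tbl24 entry must stay 4 bytes");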

[-- Attachment #2: 0001-rte_lpm.h-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 4336 bytes --]

>From 6a8e3428344ed11af8a1999dcec5c31c10f37c3a Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:49:46 +0000
Subject: [PATCH 1/8] rte_lpm.h: use 24 bit extended next hop

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_lpm/rte_lpm.h | 46 +++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..c677c4a 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -82,32 +82,36 @@ extern "C" {
 #endif
 
 /** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03000000
+
+/** @internal bitmask with next_hop field set */
+#define RTE_LPM_NEXT_HOP_BITMASK        0x00FFFFFF
 
 /** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS          0x0100
+#define RTE_LPM_LOOKUP_SUCCESS          0x01000000
+
 
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 /** @internal Tbl24 entry structure. */
 struct rte_lpm_tbl24_entry {
-	/* Stores Next hop or group index (i.e. gindex)into tbl8. */
+	/* Stores Next hop or group index (i.e. gindex) into tbl8. */
 	union {
-		uint8_t next_hop;
-		uint8_t tbl8_gindex;
-	};
+		uint32_t next_hop    :24;
+		uint32_t tbl8_gindex :24;
+	} __attribute__((__packed__));
 	/* Using single uint8_t to store 3 values. */
-	uint8_t valid     :1; /**< Validation flag. */
-	uint8_t ext_entry :1; /**< External entry. */
-	uint8_t depth     :6; /**< Rule depth. */
+	uint32_t valid     :1; /**< Validation flag. */
+	uint32_t ext_entry :1; /**< External entry. */
+	uint32_t depth     :6; /**< Rule depth. */
 };
 
 /** @internal Tbl8 entry structure. */
 struct rte_lpm_tbl8_entry {
-	uint8_t next_hop; /**< next hop. */
+	uint32_t next_hop   :24; /**< next hop. */
 	/* Using single uint8_t to store 3 values. */
-	uint8_t valid       :1; /**< Validation flag. */
-	uint8_t valid_group :1; /**< Group validation flag. */
-	uint8_t depth       :6; /**< Rule depth. */
+	uint8_t valid       :1;  /**< Validation flag. */
+	uint8_t valid_group :1;  /**< Group validation flag. */
+	uint8_t depth       :6;  /**< Rule depth. */
 };
 #else
 struct rte_lpm_tbl24_entry {
@@ -130,8 +134,8 @@ struct rte_lpm_tbl8_entry {
 
 /** @internal Rule structure. */
 struct rte_lpm_rule {
-	uint32_t ip; /**< Rule IP address. */
-	uint8_t  next_hop; /**< Rule next hop. */
+	uint32_t ip;       /**< Rule IP address. */
+	uint32_t next_hop; /**< Rule next hop. */
 };
 
 /** @internal Contains metadata about the rules table. */
@@ -219,7 +223,7 @@ rte_lpm_free(struct rte_lpm *lpm);
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
 
 /**
  * Check if a rule is present in the LPM table,
@@ -238,7 +242,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+uint32_t *next_hop);
 
 /**
  * Delete a rule from the LPM table.
@@ -301,6 +305,8 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
 	*next_hop = (uint8_t)tbl_entry;
 	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
 }
+int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
 
 /**
  * Lookup multiple IP addresses in an LPM table. This may be implemented as a
@@ -360,6 +366,9 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
 
 /* Mask four results. */
 #define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
+int
+rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
+		uint32_t * next_hops, const unsigned n);
 
 /**
  * Lookup four IP addresses in an LPM table.
@@ -472,6 +481,9 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
 	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
 	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
 }
+void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
+	uint32_t defv);
 
 #ifdef __cplusplus
 }
-- 
1.9.1


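The widened masks in the patch above put the valid and ext_entry flags
directly above a 24-bit next hop. A tiny stand-alone demo of the
resulting bit layout (macro names shortened for the example,
little-endian assumed):

#include <stdint.h>
#include <stdio.h>

/* Shortened copies of the widened masks from the patch above. */
#define VALID_EXT_ENTRY_BITMASK 0x03000000u
#define NEXT_HOP_BITMASK        0x00FFFFFFu
#define LOOKUP_SUCCESS          0x01000000u

int main(void)
{
	/* A raw 32-bit table word: valid bit set, next hop 70000 --
	 * a value that would not fit in the old 8-bit next hop. */
	uint32_t tbl_entry = LOOKUP_SUCCESS | (70000u & NEXT_HOP_BITMASK);

	if (tbl_entry & LOOKUP_SUCCESS)
		printf("next hop %u\n",
		       (unsigned)(tbl_entry & NEXT_HOP_BITMASK));

	/* valid and ext_entry both set would mean: the low 24 bits are
	 * a tbl8 group index rather than a next hop. */
	if ((tbl_entry & VALID_EXT_ENTRY_BITMASK) == VALID_EXT_ENTRY_BITMASK)
		printf("extended entry\n");

	return 0;
}
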
[-- Attachment #3: 0002-rte_lpm.h-disable-inlining-of-rte_lpm-lookup-functio.patch --]
[-- Type: text/plain, Size: 6138 bytes --]

>From 7ee9f2e9a8853d49a332d971f5b56e79efccd71b Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:53:43 +0000
Subject: [PATCH 2/8] rte_lpm.h: disable inlining of rte_lpm lookup functions

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_lpm/rte_lpm.h | 152 -----------------------------------------------
 1 file changed, 152 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c677c4a..76282d8 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -280,31 +280,6 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
  * @return
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
-static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
-{
-	unsigned tbl24_index = (ip >> 8);
-	uint16_t tbl_entry;
-
-	/* DEBUG: Check user input arguments. */
-	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
-
-	/* Copy tbl24 entry */
-	tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
-
-	/* Copy tbl8 entry (only if needed) */
-	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
-		unsigned tbl8_index = (uint8_t)ip +
-				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
-		tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
-	}
-
-	*next_hop = (uint8_t)tbl_entry;
-	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
-}
 int
 rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
 
@@ -328,41 +303,6 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
  *  @return
  *   -EINVAL for incorrect arguments, otherwise 0
  */
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
-		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
-
-static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
-		uint16_t * next_hops, const unsigned n)
-{
-	unsigned i;
-	unsigned tbl24_indexes[n];
-
-	/* DEBUG: Check user input arguments. */
-	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
-			(next_hops == NULL)), -EINVAL);
-
-	for (i = 0; i < n; i++) {
-		tbl24_indexes[i] = ips[i] >> 8;
-	}
-
-	for (i = 0; i < n; i++) {
-		/* Simply copy tbl24 entry to output */
-		next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
-		/* Overwrite output with tbl8 entry if needed */
-		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
-			unsigned tbl8_index = (uint8_t)ips[i] +
-					((uint8_t)next_hops[i] *
-					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
-			next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
-		}
-	}
-	return 0;
-}
 
 /* Mask four results. */
 #define	 RTE_LPM_MASKX4_RES	UINT64_C(0x00ff00ff00ff00ff)
@@ -389,98 +329,6 @@ rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
  *   Default value to populate into corresponding element of hop[] array,
  *   if lookup would fail.
  */
-static inline void
-rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
-	uint16_t defv)
-{
-	__m128i i24;
-	rte_xmm_t i8;
-	uint16_t tbl[4];
-	uint64_t idx, pt;
-
-	const __m128i mask8 =
-		_mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
-
-	/*
-	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
-	 * as one 64-bit value (0x0300030003000300).
-	 */
-	const uint64_t mask_xv =
-		((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
-		(uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
-
-	/*
-	 * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
-	 * as one 64-bit value (0x0100010001000100).
-	 */
-	const uint64_t mask_v =
-		((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
-		(uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
-
-	/* get 4 indexes for tbl24[]. */
-	i24 = _mm_srli_epi32(ip, CHAR_BIT);
-
-	/* extract values from tbl24[] */
-	idx = _mm_cvtsi128_si64(i24);
-	i24 = _mm_srli_si128(i24, sizeof(uint64_t));
-
-	tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
-	tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
-	idx = _mm_cvtsi128_si64(i24);
-
-	tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
-	tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
-	/* get 4 indexes for tbl8[]. */
-	i8.x = _mm_and_si128(ip, mask8);
-
-	pt = (uint64_t)tbl[0] |
-		(uint64_t)tbl[1] << 16 |
-		(uint64_t)tbl[2] << 32 |
-		(uint64_t)tbl[3] << 48;
-
-	/* search successfully finished for all 4 IP addresses. */
-	if (likely((pt & mask_xv) == mask_v)) {
-		uintptr_t ph = (uintptr_t)hop;
-		*(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
-		return;
-	}
-
-	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
-	}
-	if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
-	}
-	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
-	}
-	if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-		tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
-	}
-
-	hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
-	hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
-	hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
-	hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
-}
 void
 rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
 	uint32_t defv);
-- 
1.9.1


[-- Attachment #4: 0003-rte_lpm.c-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 8601 bytes --]

>From e54e01b6edcc820230b7e47de40920a00031b6c1 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:48:07 +0000
Subject: [PATCH 3/8] rte_lpm.c: use 24 bit extended next hop

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_lpm/rte_lpm.c | 184 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 174 insertions(+), 10 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..d9cb007 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -159,8 +159,8 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
 
 	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
 
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
+	/* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2); */
+	/* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2); */
 
 	/* Check user arguments. */
 	if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
@@ -261,7 +261,7 @@ rte_lpm_free(struct rte_lpm *lpm)
  */
 static inline int32_t
 rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-	uint8_t next_hop)
+	uint32_t next_hop)
 {
 	uint32_t rule_gindex, rule_index, last_rule;
 	int i;
@@ -418,7 +418,7 @@ tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
 
 static inline int32_t
 add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+		uint32_t next_hop)
 {
 	uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
 
@@ -486,7 +486,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 
 static inline int32_t
 add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-		uint8_t next_hop)
+		uint32_t next_hop)
 {
 	uint32_t tbl24_index;
 	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -621,7 +621,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
  */
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+		uint32_t next_hop)
 {
 	int32_t rule_index, status = 0;
 	uint32_t ip_masked;
@@ -665,7 +665,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+uint32_t *next_hop)
 {
 	uint32_t ip_masked;
 	int32_t rule_index;
@@ -681,7 +681,7 @@ uint8_t *next_hop)
 	rule_index = rule_find(lpm, ip_masked, depth);
 
 	if (rule_index >= 0) {
-		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+		*next_hop = lpm->rules_tbl[rule_index].next_hop & RTE_LPM_NEXT_HOP_BITMASK;
 		return 1;
 	}
 
@@ -771,8 +771,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 		struct rte_lpm_tbl8_entry new_tbl8_entry = {
 			.valid = VALID,
 			.depth = sub_rule_depth,
-			.next_hop = lpm->rules_tbl
-			[sub_rule_index].next_hop,
+			.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
 		};
 
 		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
@@ -1012,3 +1011,168 @@ rte_lpm_delete_all(struct rte_lpm *lpm)
 	/* Delete all rules form the rules table. */
 	memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
 }
+
+int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop)
+{
+	unsigned tbl24_index = (ip >> 8);
+	uint32_t tbl_entry;
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+	/* Copy tbl24 entry */
+	tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
+
+	/* Copy tbl8 entry (only if needed) */
+	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+		unsigned tbl8_index = (uint8_t)ip +
+				((uint32_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+		tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+	}
+
+	*next_hop = (uint32_t)tbl_entry & RTE_LPM_NEXT_HOP_BITMASK;
+	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+}
+
+int
+rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
+		uint32_t * next_hops, const unsigned n)
+{
+	unsigned i;
+	unsigned tbl24_indexes[n];
+
+	/* DEBUG: Check user input arguments. */
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+			(next_hops == NULL)), -EINVAL);
+
+	for (i = 0; i < n; i++) {
+		tbl24_indexes[i] = ips[i] >> 8;
+	}
+
+	for (i = 0; i < n; i++) {
+		/* Simply copy tbl24 entry to output */
+		next_hops[i] = *(const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
+
+		/* Overwrite output with tbl8 entry if needed */
+		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+			unsigned tbl8_index = (uint8_t)ips[i] +
+					((uint32_t)next_hops[i] *
+					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+			next_hops[i] = *(const uint32_t *)&lpm->tbl8[tbl8_index] & RTE_LPM_NEXT_HOP_BITMASK;
+		}
+	}
+	return 0;
+}
+
+
+static
+__m128i _mm_not_si128(__m128i arg)
+{
+    __m128i minusone = _mm_set1_epi32(0xffffffff);
+    return _mm_xor_si128(arg, minusone);
+}
+
+/**
+ * Lookup four IP addresses in an LPM table.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   Four IPs to be looked up in the LPM table
+ * @param hop
+ *   Next hop of the most specific rule found for IP (valid on lookup hit only).
+ *   This is a 4-element array of four-byte values.
+ *   If the lookup was successful for the given IP, the lower 24 bits of the
+ *   corresponding element hold the actual next hop and the most significant
+ *   byte is zero.
+ *   If the lookup for the given IP failed, the corresponding element contains
+ *   the default value, see the description of the next parameter.
+ * @param defv
+ *   Default value to populate into corresponding element of hop[] array,
+ *   if lookup would fail.
+ */
+void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
+	uint32_t defv)
+{
+	rte_xmm_t tbl24_i;
+	rte_xmm_t tbl8_i;
+	rte_xmm_t tbl_r;
+	rte_xmm_t tbl_h;
+	rte_xmm_t tbl_r_ok;
+
+	rte_xmm_t mask_8;
+	rte_xmm_t mask_ve;
+	rte_xmm_t mask_v;
+	rte_xmm_t mask_h;
+	rte_xmm_t mask_hi;
+
+	mask_8.x = _mm_set1_epi32(UINT8_MAX);
+
+	/*
+	 * RTE_LPM_VALID_EXT_ENTRY_BITMASK replicated across all
+	 * four 32-bit lanes.
+	 */
+	mask_ve.x = _mm_set1_epi32(RTE_LPM_VALID_EXT_ENTRY_BITMASK);
+
+	/*
+	 * RTE_LPM_LOOKUP_SUCCESS replicated across all
+	 * four 32-bit lanes.
+	 */
+	mask_v.x = _mm_set1_epi32(RTE_LPM_LOOKUP_SUCCESS);
+
+	mask_h.x = _mm_set1_epi32(RTE_LPM_NEXT_HOP_BITMASK);
+	mask_hi.x = _mm_not_si128(mask_h.x);
+
+	/* get 4 indexes for tbl24[]. */
+	tbl24_i.x = _mm_srli_epi32(ip, CHAR_BIT);
+
+	/* extract values from tbl24[] */
+	tbl_r.u32[0] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[0]];
+	tbl_r.u32[1] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[1]];
+	tbl_r.u32[2] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[2]];
+	tbl_r.u32[3] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[3]];
+
+	/* search successfully finished for all 4 IP addresses. */
+	tbl_r_ok.x = _mm_and_si128(tbl_r.x, mask_ve.x);
+	tbl_h.x = _mm_and_si128(tbl_r.x, mask_hi.x);
+	if (likely(_mm_test_all_ones(_mm_cmpeq_epi32(tbl_r_ok.x, mask_v.x)))) {
+		_mm_storeu_si128((__m128i *)hop, tbl_h.x);
+		return;
+	}
+
+	/* get 4 indexes for tbl8[]. */
+	tbl8_i.x = _mm_and_si128(ip, mask_8.x);
+
+	if (unlikely(tbl_r_ok.u32[0] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		tbl8_i.u32[0] = tbl8_i.u32[0] + tbl_h.u32[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl_r.u32[0] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[0]];
+	}
+	if (unlikely(tbl_r_ok.u32[1] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		tbl8_i.u32[1] = tbl8_i.u32[1] + tbl_h.u32[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl_r.u32[1] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[1]];
+	}
+	if (unlikely(tbl_r_ok.u32[2] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		tbl8_i.u32[2] = tbl8_i.u32[2] + tbl_h.u32[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl_r.u32[2] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[2]];
+	}
+	if (unlikely(tbl_r_ok.u32[3] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		tbl8_i.u32[3] = tbl8_i.u32[3] + tbl_h.u32[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+		tbl_r.u32[3] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[3]];
+	}
+
+	tbl_r_ok.x = _mm_and_si128(tbl_r.x, mask_v.x);
+	tbl_h.x = _mm_and_si128(tbl_r.x, mask_h.x);
+
+	hop[0] = tbl_r_ok.u32[0] ? tbl_h.u32[0] : defv;
+	hop[1] = tbl_r_ok.u32[1] ? tbl_h.u32[1] : defv;
+	hop[2] = tbl_r_ok.u32[2] ? tbl_h.u32[2] : defv;
+	hop[3] = tbl_r_ok.u32[3] ? tbl_h.u32[3] : defv;
+}
-- 
1.9.1
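
A small usage sketch for the widened scalar API in this patch (return codes as
rte_lpm_lookup() defines them above; the routing helper itself is hypothetical):

#include <errno.h>
#include <rte_lpm.h>

static int
route_one(struct rte_lpm *lpm, uint32_t ip)
{
	uint32_t next_hop;	/* 24 usable bits after this patch */
	int ret = rte_lpm_lookup(lpm, ip, &next_hop);

	if (ret == -ENOENT)
		return -1;	/* no covering prefix */
	if (ret < 0)
		return ret;	/* -EINVAL on bad arguments */
	/* forward the packet using next_hop ... */
	return 0;
}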


[-- Attachment #5: 0004-rte_lpm6.-c-h-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 5655 bytes --]

>From 402d1bce8dd05b31fc6e457ca89bcd0b7160aa69 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:54:41 +0000
Subject: [PATCH 4/8] rte_lpm6.{c,h}: use 24 bit extended next hop

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_lpm/rte_lpm6.c | 27 ++++++++++++++-------------
 lib/librte_lpm/rte_lpm6.h |  8 ++++----
 2 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 6c2b293..8d7602f 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -96,9 +96,9 @@ struct rte_lpm6_tbl_entry {
 
 /** Rules tbl entry structure. */
 struct rte_lpm6_rule {
-	uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
-	uint8_t next_hop; /**< Rule next hop. */
-	uint8_t depth; /**< Rule depth. */
+	uint8_t  ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
+	uint32_t next_hop :24; /**< Rule next hop. */
+	uint32_t depth    :8; /**< Rule depth. */
 };
 
 /** LPM6 structure. */
@@ -157,7 +157,7 @@ rte_lpm6_create(const char *name, int socket_id,
 
 	lpm_list = RTE_TAILQ_CAST(rte_lpm6_tailq.head, rte_lpm6_list);
 
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_tbl_entry) != sizeof(uint32_t));
+	/* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_tbl_entry) != sizeof(uint32_t)); */
 
 	/* Check user arguments. */
 	if ((name == NULL) || (socket_id < -1) || (config == NULL) ||
@@ -295,7 +295,7 @@ rte_lpm6_free(struct rte_lpm6 *lpm)
  * the nexthop if so. Otherwise it adds a new rule if enough space is available.
  */
 static inline int32_t
-rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t next_hop, uint8_t depth)
+rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint32_t next_hop, uint8_t depth)
 {
 	uint32_t rule_index;
 
@@ -338,7 +338,7 @@ rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t next_hop, uint8_t depth)
  */
 static void
 expand_rule(struct rte_lpm6 *lpm, uint32_t tbl8_gindex, uint8_t depth,
-		uint8_t next_hop)
+		uint32_t next_hop)
 {
 	uint32_t tbl8_group_end, tbl8_gindex_next, j;
 
@@ -375,7 +375,7 @@ expand_rule(struct rte_lpm6 *lpm, uint32_t tbl8_gindex, uint8_t depth,
 static inline int
 add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
 		struct rte_lpm6_tbl_entry **tbl_next, uint8_t *ip, uint8_t bytes,
-		uint8_t first_byte, uint8_t depth, uint8_t next_hop)
+		uint8_t first_byte, uint8_t depth, uint32_t next_hop)
 {
 	uint32_t tbl_index, tbl_range, tbl8_group_start, tbl8_group_end, i;
 	int32_t tbl8_gindex;
@@ -506,7 +506,7 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
  */
 int
 rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-		uint8_t next_hop)
+		uint32_t next_hop)
 {
 	struct rte_lpm6_tbl_entry *tbl;
 	struct rte_lpm6_tbl_entry *tbl_next;
@@ -567,7 +567,7 @@ rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
 static inline int
 lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
 		const struct rte_lpm6_tbl_entry **tbl_next, uint8_t *ip,
-		uint8_t first_byte, uint8_t *next_hop)
+		uint8_t first_byte, uint32_t *next_hop)
 {
 	uint32_t tbl8_index, tbl_entry;
 
@@ -596,7 +596,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
  * Looks up an IP
  */
 int
-rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop)
 {
 	const struct rte_lpm6_tbl_entry *tbl;
 	const struct rte_lpm6_tbl_entry *tbl_next;
@@ -630,13 +630,14 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
 int
 rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 		uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
-		int16_t * next_hops, unsigned n)
+		uint32_t * next_hops, unsigned n)
 {
 	unsigned i;
 	const struct rte_lpm6_tbl_entry *tbl;
 	const struct rte_lpm6_tbl_entry *tbl_next;
 	uint32_t tbl24_index;
-	uint8_t first_byte, next_hop;
+	uint8_t first_byte;
+	uint32_t next_hop;
 	int status;
 
 	/* DEBUG: Check user input arguments. */
@@ -697,7 +698,7 @@ rule_find(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth)
  */
 int
 rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-uint8_t *next_hop)
+uint32_t *next_hop)
 {
 	uint8_t ip_masked[RTE_LPM6_IPV6_ADDR_SIZE];
 	int32_t rule_index;
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index cedcea8..dd90beb 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -121,7 +121,7 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
  */
 int
 rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-		uint8_t next_hop);
+		uint32_t next_hop);
 
 /**
  * Check if a rule is present in the LPM table,
@@ -140,7 +140,7 @@ rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
  */
 int
 rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-uint8_t *next_hop);
+uint32_t *next_hop);
 
 /**
  * Delete a rule from the LPM table.
@@ -197,7 +197,7 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
 int
-rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
 
 /**
  * Lookup multiple IP addresses in an LPM table.
@@ -218,7 +218,7 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
 int
 rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 		uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
-		int16_t * next_hops, unsigned n);
+		uint32_t * next_hops, unsigned n);
 
 #ifdef __cplusplus
 }
-- 
1.9.1
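
Note the rule struct above relies on the next_hop/depth bitfields packing into
a single 32-bit word; a compile-time check in the spirit of the existing
RTE_BUILD_BUG_ON calls, written with C11 _Static_assert for brevity (the
struct name is a stand-in, not the patch's):

#include <stdint.h>

struct lpm6_rule_layout {		/* mirrors rte_lpm6_rule above */
	uint8_t  ip[16];
	uint32_t next_hop :24;
	uint32_t depth    :8;
};

/* 16 bytes of address plus one 32-bit word of next_hop/depth. */
_Static_assert(sizeof(struct lpm6_rule_layout) == 20,
	"next_hop/depth bitfields must pack into a single word");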


[-- Attachment #6: 0005-librte_table-use-uint32_t-for-next-hops-from-librte_.patch --]
[-- Type: text/plain, Size: 2443 bytes --]

>From e3ebfc026f7871d3014a0b9f8881579623b6592b Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:55:20 +0000
Subject: [PATCH 5/8] librte_table: use uint32_t for next hops from librte_lpm

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_table/rte_table_lpm.c      | 6 +++---
 lib/librte_table/rte_table_lpm_ipv6.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index 849d899..2af2eee 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -202,7 +202,7 @@ rte_table_lpm_entry_add(
 	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
 	uint32_t nht_pos, nht_pos0_valid;
 	int status;
-	uint8_t nht_pos0 = 0;
+	uint32_t nht_pos0 = 0;
 
 	/* Check input parameters */
 	if (lpm == NULL) {
@@ -268,7 +268,7 @@ rte_table_lpm_entry_delete(
 {
 	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
 	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
-	uint8_t nht_pos;
+	uint32_t nht_pos;
 	int status;
 
 	/* Check input parameters */
@@ -342,7 +342,7 @@ rte_table_lpm_lookup(
 			uint32_t ip = rte_bswap32(
 				RTE_MBUF_METADATA_UINT32(pkt, lpm->offset));
 			int status;
-			uint8_t nht_pos;
+			uint32_t nht_pos;
 
 			status = rte_lpm_lookup(lpm->lpm, ip, &nht_pos);
 			if (status == 0) {
diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
index e9bc6a7..81a948e 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.c
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add(
 		(struct rte_table_lpm_ipv6_key *) key;
 	uint32_t nht_pos, nht_pos0_valid;
 	int status;
-	uint8_t nht_pos0;
+	uint32_t nht_pos0;
 
 	/* Check input parameters */
 	if (lpm == NULL) {
@@ -280,7 +280,7 @@ rte_table_lpm_ipv6_entry_delete(
 	struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
 	struct rte_table_lpm_ipv6_key *ip_prefix =
 		(struct rte_table_lpm_ipv6_key *) key;
-	uint8_t nht_pos;
+	uint32_t nht_pos;
 	int status;
 
 	/* Check input parameters */
@@ -356,7 +356,7 @@ rte_table_lpm_ipv6_lookup(
 			uint8_t *ip = RTE_MBUF_METADATA_UINT8_PTR(pkt,
 				lpm->offset);
 			int status;
-			uint8_t nht_pos;
+			uint32_t nht_pos;
 
 			status = rte_lpm6_lookup(lpm->lpm, ip, &nht_pos);
 			if (status == 0) {
-- 
1.9.1


[-- Attachment #7: 0006-test_lpm-.c-update-tests-to-use-24-bit-extended-next.patch --]
[-- Type: text/plain, Size: 17123 bytes --]

>From e975f935595c6e901522dbaf10be598573276eaa Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:34:18 +0000
Subject: [PATCH 6/8] test_lpm*.c: update tests to use 24 bit extended next hop

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 app/test/test_lpm.c  |  54 +++++++++++++++-----------
 app/test/test_lpm6.c | 108 ++++++++++++++++++++++++++++++---------------------
 2 files changed, 95 insertions(+), 67 deletions(-)

diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 8b4ded9..44e2fb4 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -181,7 +181,8 @@ test3(void)
 {
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t depth = 24, next_hop = 100;
+	uint8_t depth = 24;
+	uint32_t next_hop = 100;
 	int32_t status = 0;
 
 	/* rte_lpm_add: lpm == NULL */
@@ -248,7 +249,7 @@ test5(void)
 #if defined(RTE_LIBRTE_LPM_DEBUG)
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t next_hop_return = 0;
+	uint32_t next_hop_return = 0;
 	int32_t status = 0;
 
 	/* rte_lpm_lookup: lpm == NULL */
@@ -278,7 +279,8 @@ test6(void)
 {
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 24;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -309,10 +311,11 @@ int32_t
 test7(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
+	uint32_t hop[4];
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip = IPv4(0, 0, 0, 0);
-	uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 32;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -355,10 +358,11 @@ int32_t
 test8(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
+	uint32_t hop[4];
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -438,7 +442,8 @@ test9(void)
 {
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip, ip_1, ip_2;
-	uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+	uint8_t depth, depth_1, depth_2;
+	uint32_t next_hop_add, next_hop_add_1,
 		next_hop_add_2, next_hop_return;
 	int32_t status = 0;
 
@@ -602,7 +607,8 @@ test10(void)
 
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	/* Add rule that covers a TBL24 range previously invalid & lookup
@@ -788,7 +794,8 @@ test11(void)
 
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -851,10 +858,11 @@ int32_t
 test12(void)
 {
 	__m128i ipx4;
-	uint16_t hop[4];
+	uint32_t hop[4];
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip, i;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -904,7 +912,8 @@ test13(void)
 {
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip, i;
-	uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add_1, next_hop_add_2, next_hop_return;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -966,7 +975,8 @@ test14(void)
 
 	struct rte_lpm *lpm = NULL;
 	uint32_t ip;
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	/* Add enough space for 256 rules for every depth */
@@ -1078,10 +1088,10 @@ test17(void)
 	const uint8_t d_ip_10_32 = 32,
 			d_ip_10_24 = 24,
 			d_ip_20_25 = 25;
-	const uint8_t next_hop_ip_10_32 = 100,
+	const uint32_t next_hop_ip_10_32 = 100,
 			next_hop_ip_10_24 = 105,
 			next_hop_ip_20_25 = 111;
-	uint8_t next_hop_return = 0;
+	uint32_t next_hop_return = 0;
 	int32_t status = 0;
 
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -1092,7 +1102,7 @@ test17(void)
 		return -1;
 
 	status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
-	uint8_t test_hop_10_32 = next_hop_return;
+	uint32_t test_hop_10_32 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
 
@@ -1101,7 +1111,7 @@ test17(void)
 			return -1;
 
 	status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
-	uint8_t test_hop_10_24 = next_hop_return;
+	uint32_t test_hop_10_24 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
 
@@ -1110,7 +1120,7 @@ test17(void)
 		return -1;
 
 	status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
-	uint8_t test_hop_20_25 = next_hop_return;
+	uint32_t test_hop_20_25 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
 
@@ -1175,7 +1185,7 @@ perf_test(void)
 	struct rte_lpm *lpm = NULL;
 	uint64_t begin, total_time, lpm_used_entries = 0;
 	unsigned i, j;
-	uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+	uint32_t next_hop_add = 0xAA, next_hop_return = 0;
 	int status = 0;
 	uint64_t cache_line_counter = 0;
 	int64_t count = 0;
@@ -1252,7 +1262,7 @@ perf_test(void)
 	count = 0;
 	for (i = 0; i < ITERATIONS; i ++) {
 		static uint32_t ip_batch[BATCH_SIZE];
-		uint16_t next_hops[BULK_SIZE];
+		uint32_t next_hops[BULK_SIZE];
 
 		/* Create array of random IP addresses */
 		for (j = 0; j < BATCH_SIZE; j ++)
@@ -1279,7 +1289,7 @@ perf_test(void)
 	count = 0;
 	for (i = 0; i < ITERATIONS; i++) {
 		static uint32_t ip_batch[BATCH_SIZE];
-		uint16_t next_hops[4];
+		uint32_t next_hops[4];
 
 		/* Create array of random IP addresses */
 		for (j = 0; j < BATCH_SIZE; j++)
diff --git a/app/test/test_lpm6.c b/app/test/test_lpm6.c
index 1f88d7a..d5ba20a 100644
--- a/app/test/test_lpm6.c
+++ b/app/test/test_lpm6.c
@@ -291,7 +291,8 @@ test4(void)
 	struct rte_lpm6_config config;
 
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth = 24, next_hop = 100;
+	uint8_t depth = 24;
+	uint32_t next_hop = 100;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -367,7 +368,7 @@ test6(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t next_hop_return = 0;
+	uint32_t next_hop_return = 0;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -405,7 +406,7 @@ test7(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[10][16];
-	int16_t next_hop_return[10];
+	uint32_t next_hop_return[10];
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -482,7 +483,8 @@ test9(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth = 16, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 16;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 	uint8_t i;
 
@@ -526,7 +528,8 @@ test10(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth, next_hop_add = 100;
+	uint8_t depth;
+	uint32_t next_hop_add = 100;
 	int32_t status = 0;
 	int i;
 
@@ -570,7 +573,8 @@ test11(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth, next_hop_add = 100;
+	uint8_t depth;
+	uint32_t next_hop_add = 100;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -630,7 +634,8 @@ test12(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth, next_hop_add = 100;
+	uint8_t depth;
+	uint32_t next_hop_add = 100;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -668,7 +673,8 @@ test13(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth, next_hop_add = 100;
+	uint8_t depth;
+	uint32_t next_hop_add = 100;
 	int32_t status = 0;
 
 	config.max_rules = 2;
@@ -715,7 +721,8 @@ test14(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth = 25, next_hop_add = 100;
+	uint8_t depth = 25;
+	uint32_t next_hop_add = 100;
 	int32_t status = 0;
 	int i, j;
 
@@ -767,7 +774,8 @@ test15(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 24;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -803,7 +811,8 @@ test16(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[] = {12,12,1,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth = 128, next_hop_add = 100, next_hop_return = 0;
+	uint8_t depth = 128;
+	uint32_t next_hop_add = 100, next_hop_return = 0;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -847,7 +856,8 @@ test17(void)
 	uint8_t ip1[] = {127,255,255,255,255,255,255,255,255,
 			255,255,255,255,255,255,255};
 	uint8_t ip2[] = {128,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -912,7 +922,8 @@ test18(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[16], ip_1[16], ip_2[16];
-	uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+	uint8_t depth, depth_1, depth_2;
+	uint32_t next_hop_add, next_hop_add_1,
 		next_hop_add_2, next_hop_return;
 	int32_t status = 0;
 
@@ -1074,7 +1085,8 @@ test19(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[16];
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1272,7 +1284,8 @@ test20(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip[16];
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1339,8 +1352,9 @@ test21(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip_batch[4][16];
-	uint8_t depth, next_hop_add;
-	int16_t next_hop_return[4];
+	uint8_t depth;
+	uint32_t next_hop_add;
+	uint32_t next_hop_return[4];
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1377,7 +1391,7 @@ test21(void)
 			next_hop_return, 4);
 	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == 100
 			&& next_hop_return[1] == 101 && next_hop_return[2] == 102
-			&& next_hop_return[3] == -1);
+			&& next_hop_return[3] == (uint32_t) -1);
 
 	rte_lpm6_free(lpm);
 
@@ -1397,8 +1411,9 @@ test22(void)
 	struct rte_lpm6 *lpm = NULL;
 	struct rte_lpm6_config config;
 	uint8_t ip_batch[5][16];
-	uint8_t depth[5], next_hop_add;
-	int16_t next_hop_return[5];
+	uint8_t depth[5];
+	uint32_t next_hop_add;
+	uint32_t next_hop_return[5];
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1458,8 +1473,8 @@ test22(void)
 
 	status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
 			next_hop_return, 5);
-	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
-			&& next_hop_return[1] == -1 && next_hop_return[2] == 103
+	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+			&& next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == 103
 			&& next_hop_return[3] == 104 && next_hop_return[4] == 105);
 
 	/* Use the delete_bulk function to delete one more. Lookup again */
@@ -1469,8 +1484,8 @@ test22(void)
 
 	status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
 			next_hop_return, 5);
-	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
-			&& next_hop_return[1] == -1 && next_hop_return[2] == -1
+	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+			&& next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
 			&& next_hop_return[3] == 104 && next_hop_return[4] == 105);
 
 	/* Use the delete_bulk function to delete two, one invalid. Lookup again */
@@ -1482,9 +1497,9 @@ test22(void)
 	IPv6(ip_batch[4], 128, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
 	status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
 			next_hop_return, 5);
-	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
-			&& next_hop_return[1] == -1 && next_hop_return[2] == -1
-			&& next_hop_return[3] == -1 && next_hop_return[4] == 105);
+	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+			&& next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
+			&& next_hop_return[3] == (uint32_t) -1 && next_hop_return[4] == 105);
 
 	/* Use the delete_bulk function to delete the remaining one. Lookup again */
 
@@ -1493,9 +1508,9 @@ test22(void)
 
 	status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
 			next_hop_return, 5);
-	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
-			&& next_hop_return[1] == -1 && next_hop_return[2] == -1
-			&& next_hop_return[3] == -1 && next_hop_return[4] == -1);
+	TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+			&& next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
+			&& next_hop_return[3] == (uint32_t) -1 && next_hop_return[4] == (uint32_t) -1);
 
 	rte_lpm6_free(lpm);
 
@@ -1514,7 +1529,8 @@ test23(void)
 	struct rte_lpm6_config config;
 	uint32_t i;
 	uint8_t ip[16];
-	uint8_t depth, next_hop_add, next_hop_return;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1598,7 +1614,8 @@ test25(void)
 	struct rte_lpm6_config config;
 	uint8_t ip[16];
 	uint32_t i;
-	uint8_t depth, next_hop_add, next_hop_return, next_hop_expected;
+	uint8_t depth;
+	uint32_t next_hop_add, next_hop_return, next_hop_expected;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1646,12 +1663,12 @@ test26(void)
 	uint8_t ip_10_24[] = {10, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
 	uint8_t ip_20_25[] = {10, 10, 20, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
 	uint8_t d_ip_10_32 = 32;
-	uint8_t	d_ip_10_24 = 24;
-	uint8_t	d_ip_20_25 = 25;
-	uint8_t next_hop_ip_10_32 = 100;
-	uint8_t	next_hop_ip_10_24 = 105;
-	uint8_t	next_hop_ip_20_25 = 111;
-	uint8_t next_hop_return = 0;
+	uint8_t d_ip_10_24 = 24;
+	uint8_t d_ip_20_25 = 25;
+	uint32_t next_hop_ip_10_32 = 100;
+	uint32_t next_hop_ip_10_24 = 105;
+	uint32_t next_hop_ip_20_25 = 111;
+	uint32_t next_hop_return = 0;
 	int32_t status = 0;
 
 	config.max_rules = MAX_RULES;
@@ -1666,7 +1683,7 @@ test26(void)
 		return -1;
 
 	status = rte_lpm6_lookup(lpm, ip_10_32, &next_hop_return);
-	uint8_t test_hop_10_32 = next_hop_return;
+	uint32_t test_hop_10_32 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
 
@@ -1675,7 +1692,7 @@ test26(void)
 			return -1;
 
 	status = rte_lpm6_lookup(lpm, ip_10_24, &next_hop_return);
-	uint8_t test_hop_10_24 = next_hop_return;
+	uint32_t test_hop_10_24 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
 
@@ -1684,7 +1701,7 @@ test26(void)
 		return -1;
 
 	status = rte_lpm6_lookup(lpm, ip_20_25, &next_hop_return);
-	uint8_t test_hop_20_25 = next_hop_return;
+	uint32_t test_hop_20_25 = next_hop_return;
 	TEST_LPM_ASSERT(status == 0);
 	TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
 
@@ -1723,7 +1740,8 @@ test27(void)
 		struct rte_lpm6 *lpm = NULL;
 		struct rte_lpm6_config config;
 		uint8_t ip[] = {128,128,128,128,128,128,128,128,128,128,128,128,128,128,0,0};
-		uint8_t depth = 128, next_hop_add = 100, next_hop_return;
+		uint8_t depth = 128;
+		uint32_t next_hop_add = 100, next_hop_return;
 		int32_t status = 0;
 		int i, j;
 
@@ -1799,7 +1817,7 @@ perf_test(void)
 	struct rte_lpm6_config config;
 	uint64_t begin, total_time;
 	unsigned i, j;
-	uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+	uint32_t next_hop_add = 0xAA, next_hop_return = 0;
 	int status = 0;
 	int64_t count = 0;
 
@@ -1856,7 +1874,7 @@ perf_test(void)
 	count = 0;
 
 	uint8_t ip_batch[NUM_IPS_ENTRIES][16];
-	int16_t next_hops[NUM_IPS_ENTRIES];
+	uint32_t next_hops[NUM_IPS_ENTRIES];
 
 	for (i = 0; i < NUM_IPS_ENTRIES; i++)
 		memcpy(ip_batch[i], large_ips_table[i].ip, 16);
@@ -1869,7 +1887,7 @@ perf_test(void)
 		total_time += rte_rdtsc() - begin;
 
 		for (j = 0; j < NUM_IPS_ENTRIES; j++)
-			if (next_hops[j] < 0)
+			if ((int32_t) next_hops[j] < 0)
 				count++;
 	}
 	printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
-- 
1.9.1
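
The recurring (uint32_t) -1 comparisons above encode the bulk-lookup miss
sentinel: with next_hops[] now unsigned, a miss is written as UINT32_MAX
rather than -1, as the updated assertions show. A compact sketch of the same
check (the helper name is mine):

#include <stdint.h>

static unsigned
count_misses(const uint32_t *next_hops, unsigned n)
{
	unsigned i, misses = 0;

	for (i = 0; i < n; i++)
		if (next_hops[i] == (uint32_t)-1)	/* i.e. UINT32_MAX */
			misses++;
	return misses;
}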


[-- Attachment #8: 0007-examples-update-examples-to-use-24-bit-extended-next.patch --]
[-- Type: text/plain, Size: 5383 bytes --]

>From cfaf9c28dbff3bec0c867aa4270b01b04bf50276 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:42:42 +0000
Subject: [PATCH 7/8] examples: update examples to use 24 bit extended next hop

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 examples/ip_reassembly/main.c    |  3 ++-
 examples/l3fwd-power/main.c      |  2 +-
 examples/l3fwd-vf/main.c         |  2 +-
 examples/l3fwd/main.c            | 16 ++++++++--------
 examples/load_balancer/runtime.c |  2 +-
 5 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 741c398..86e33a7 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -347,7 +347,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
 	struct rte_ip_frag_death_row *dr;
 	struct rx_queue *rxq;
 	void *d_addr_bytes;
-	uint8_t next_hop, dst_port;
+	uint32_t next_hop;
+	uint8_t dst_port;
 
 	rxq = &qconf->rx_queue_list[queue];
 
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 8bb88ce..f647713 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -631,7 +631,7 @@ static inline uint8_t
 get_ipv4_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid,
 		lookup_struct_t *ipv4_l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
 			rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index 01f610e..193c3ab 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -440,7 +440,7 @@ get_dst_port(struct ipv4_hdr *ipv4_hdr,  uint8_t portid, lookup_struct_t * l3fwd
 static inline uint8_t
 get_dst_port(struct ipv4_hdr *ipv4_hdr,  uint8_t portid, lookup_struct_t * l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(l3fwd_lookup_struct,
 			rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 1f3e5c6..4f31e52 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -710,7 +710,7 @@ get_ipv6_dst_port(void *ipv6_hdr,  uint8_t portid, lookup_struct_t * ipv6_l3fwd_
 static inline uint8_t
 get_ipv4_dst_port(void *ipv4_hdr,  uint8_t portid, lookup_struct_t * ipv4_l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 
 	return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
 		rte_be_to_cpu_32(((struct ipv4_hdr *)ipv4_hdr)->dst_addr),
@@ -720,7 +720,7 @@ get_ipv4_dst_port(void *ipv4_hdr,  uint8_t portid, lookup_struct_t * ipv4_l3fwd_
 static inline uint8_t
 get_ipv6_dst_port(void *ipv6_hdr,  uint8_t portid, lookup6_struct_t * ipv6_l3fwd_lookup_struct)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 	return (uint8_t) ((rte_lpm6_lookup(ipv6_l3fwd_lookup_struct,
 			((struct ipv6_hdr*)ipv6_hdr)->dst_addr, &next_hop) == 0)?
 			next_hop : portid);
@@ -1151,7 +1151,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
  * to BAD_PORT value.
  */
 static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint32_t *dp, uint32_t ptype)
 {
 	uint8_t ihl;
 
@@ -1182,7 +1182,7 @@ static inline __attribute__((always_inline)) uint16_t
 get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 	uint32_t dst_ipv4, uint8_t portid)
 {
-	uint8_t next_hop;
+	uint32_t next_hop;
 	struct ipv6_hdr *ipv6_hdr;
 	struct ether_hdr *eth_hdr;
 
@@ -1205,7 +1205,7 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 
 static inline void
 process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
-	uint16_t *dst_port, uint8_t portid)
+	uint32_t *dst_port, uint8_t portid)
 {
 	struct ether_hdr *eth_hdr;
 	struct ipv4_hdr *ipv4_hdr;
@@ -1275,7 +1275,7 @@ processx4_step2(const struct lcore_conf *qconf,
 		uint32_t ipv4_flag,
 		uint8_t portid,
 		struct rte_mbuf *pkt[FWDSTEP],
-		uint16_t dprt[FWDSTEP])
+		uint32_t dprt[FWDSTEP])
 {
 	rte_xmm_t dst;
 	const  __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1301,7 +1301,7 @@ processx4_step2(const struct lcore_conf *qconf,
  * Perform RFC1812 checks and updates for IPV4 packets.
  */
 static inline void
-processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
+processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint32_t dst_port[FWDSTEP])
 {
 	__m128i te[FWDSTEP];
 	__m128i ve[FWDSTEP];
@@ -1527,7 +1527,7 @@ main_loop(__attribute__((unused)) void *dummy)
 	int32_t k;
 	uint16_t dlp;
 	uint16_t *lp;
-	uint16_t dst_port[MAX_PKT_BURST];
+	uint32_t dst_port[MAX_PKT_BURST];
 	__m128i dip[MAX_PKT_BURST / FWDSTEP];
 	uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
 	uint16_t pnum[MAX_PKT_BURST + 1];
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 2b265c2..6944325 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -525,7 +525,7 @@ app_lcore_worker(
 			struct rte_mbuf *pkt;
 			struct ipv4_hdr *ipv4_hdr;
 			uint32_t ipv4_dst, pos;
-			uint8_t port;
+			uint32_t port;
 
 			if (likely(j < bsz_rd - 1)) {
 				APP_WORKER_PREFETCH1(rte_pktmbuf_mtod(lp->mbuf_in.array[j+1], unsigned char *));
-- 
1.9.1
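
One subtlety worth noting: the example apps still return an 8-bit port even
though the lookup now yields a 32-bit next hop, so the value is truncated on
return. The pattern in isolation (a stand-in mirroring the l3fwd helpers
changed above):

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_lpm.h>

static inline uint8_t
get_dst_port_sketch(struct rte_lpm *lookup_struct, uint32_t dst_addr_be,
		uint8_t portid)
{
	uint32_t next_hop;

	return (uint8_t) ((rte_lpm_lookup(lookup_struct,
			rte_be_to_cpu_32(dst_addr_be), &next_hop) == 0) ?
			next_hop : portid);
}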


[-- Attachment #9: 0008-Makefile-add-fno-strict-aliasing-due-to-LPM-casting-.patch --]
[-- Type: text/plain, Size: 763 bytes --]

>From 2ded34ca11f61ed8a3bcfda6d13339e04b0430bf Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sun, 28 Jun 2015 22:52:45 +0000
Subject: [PATCH 8/8] Makefile: add -fno-strict-aliasing due to LPM casting
 logic

Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
 lib/librte_lpm/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index 688cfc9..20030b8 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -35,7 +35,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_lpm.a
 
 CFLAGS += -O3
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -fno-strict-aliasing
 
 EXPORT_MAP := rte_lpm_version.map
 
-- 
1.9.1
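
For readers wondering why the flag is needed: the LPM lookup paths read the
table entry structs through plain integer pointers, e.g.
*(const uint32_t *)&lpm->tbl24[i], which is undefined behaviour under C
strict-aliasing rules, so GCC at -O3 is free to reorder or cache those loads.
A reduced sketch of the pattern (the struct here is a stand-in, not the exact
rte_lpm layout):

#include <stdint.h>

struct tbl_entry {		/* stand-in for rte_lpm_tbl8_entry */
	uint32_t next_hop  :24;
	uint32_t depth     :6;
	uint32_t valid     :1;
	uint32_t ext_valid :1;
};

/* Read the whole entry as one raw word -- the same cast the lookup does;
 * -fno-strict-aliasing stops the compiler from assuming a uint32_t load
 * cannot alias a struct tbl_entry store. */
static uint32_t
load_entry_word(const struct tbl_entry *e)
{
	return *(const uint32_t *)e;
}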


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-24  6:09   ` Matthew Hall
@ 2015-10-25 17:52     ` Vladimir Medvedkin
       [not found]       ` <20151026115519.GA7576@MKJASTRX-MOBL>
  2015-10-26 12:13     ` Jastrzebski, MichalX K
  1 sibling, 1 reply; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-25 17:52 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

Hi all,

Here is my implementation.

Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
---
 config/common_bsdapp     |   1 +
 config/common_linuxapp   |   1 +
 lib/librte_lpm/rte_lpm.c | 194 +++++++++++++++++++++++++++++------------------
 lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
 4 files changed, 219 insertions(+), 140 deletions(-)
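
The rte_lpm.h part of the diff, with the actual struct definitions, follows
the rte_lpm.c changes. Judging only from how res is used in rte_lpm.c below,
the new result bundle presumably looks roughly like this sketch; the field
widths are my guess, not necessarily the exact layout:

struct rte_lpm_res {
	uint32_t next_hop;	/* extended next hop */
	uint8_t  fwd_class;	/* forwarding class / QoS tag */
#ifdef RTE_LIBRTE_LPM_ASNUM
	uint32_t as_num;	/* origin AS number, enabled at build time */
#endif
};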

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..408cc2c 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
 #
 CONFIG_RTE_LIBRTE_LPM=y
 CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n

 #
 # Compile librte_acl
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..1c60e63 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
 #
 CONFIG_RTE_LIBRTE_LPM=y
 CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n

 #
 # Compile librte_acl
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..363b400 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,

        lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);

-       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
-       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
+#else
+       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
+#endif
        /* Check user arguments. */
        if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
                rte_errno = EINVAL;
@@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
  */
 static inline int32_t
 rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-       uint8_t next_hop)
+       struct rte_lpm_res *res)
 {
        uint32_t rule_gindex, rule_index, last_rule;
        int i;
@@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,

                        /* If rule already exists update its next_hop and return. */
                        if (lpm->rules_tbl[rule_index].ip == ip_masked) {
-                               lpm->rules_tbl[rule_index].next_hop = next_hop;
-
+                               lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+                               lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                               lpm->rules_tbl[rule_index].as_num = res->as_num;
+#endif
                                return rule_index;
                        }
                }
@@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,

        /* Add the new rule. */
        lpm->rules_tbl[rule_index].ip = ip_masked;
-       lpm->rules_tbl[rule_index].next_hop = next_hop;
+       lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+       lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       lpm->rules_tbl[rule_index].as_num = res->as_num;
+#endif

        /* Increment the used rules counter for this rule group. */
        lpm->rule_info[depth - 1].used_rules++;
@@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
  * Find, clean and allocate a tbl8.
  */
 static inline int32_t
-tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
 {
        uint32_t tbl8_gindex; /* tbl8 group index. */
-       struct rte_lpm_tbl8_entry *tbl8_entry;
+       struct rte_lpm_tbl_entry *tbl8_entry;

        /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
        for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
@@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
                tbl8_entry = &tbl8[tbl8_gindex *
                                   RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
                /* If a free tbl8 group is found clean it and set as VALID. */
-               if (!tbl8_entry->valid_group) {
+               if (!tbl8_entry->ext_valid) {
                        memset(&tbl8_entry[0], 0,
                                        RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
                                        sizeof(tbl8_entry[0]));

-                       tbl8_entry->valid_group = VALID;
+                       tbl8_entry->ext_valid = VALID;

                        /* Return group index for allocated tbl8 group. */
                        return tbl8_gindex;
@@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
 }

 static inline void
-tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
 {
        /* Set tbl8 group invalid*/
-       tbl8[tbl8_group_start].valid_group = INVALID;
+       tbl8[tbl8_group_start].ext_valid = INVALID;
 }

 static inline int32_t
 add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-               uint8_t next_hop)
+               struct rte_lpm_res *res)
 {
        uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;

        /* Calculate the index into Table24. */
        tbl24_index = ip >> 8;
        tbl24_range = depth_to_range(depth);
+       struct rte_lpm_tbl_entry new_tbl_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+               .as_num = res->as_num,
+#endif
+               .next_hop = res->next_hop,
+               .fwd_class  = res->fwd_class,
+               .ext_valid = 0,
+               .depth = depth,
+               .valid = VALID,
+       };
+

        for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
                /*
                 * For invalid OR valid and non-extended tbl 24 entries set
                 * entry.
                 */
-               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
+               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid == 0 &&
                                lpm->tbl24[i].depth <= depth)) {

-                       struct rte_lpm_tbl24_entry new_tbl24_entry = {
-                               { .next_hop = next_hop, },
-                               .valid = VALID,
-                               .ext_entry = 0,
-                               .depth = depth,
-                       };
-
                        /* Setting tbl24 entry in one go to avoid race
                         * conditions
                         */
-                       lpm->tbl24[i] = new_tbl24_entry;
+                       lpm->tbl24[i] = new_tbl_entry;

                        continue;
                }

-               if (lpm->tbl24[i].ext_entry == 1) {
+               if (lpm->tbl24[i].ext_valid == 1) {
                        /* If tbl24 entry is valid and extended calculate the
                         *  index into tbl8.
                         */
@@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
                        for (j = tbl8_index; j < tbl8_group_end; j++) {
                                if (!lpm->tbl8[j].valid ||
                                                lpm->tbl8[j].depth <=
depth) {
-                                       struct rte_lpm_tbl8_entry
-                                               new_tbl8_entry = {
-                                               .valid = VALID,
-                                               .valid_group = VALID,
-                                               .depth = depth,
-                                               .next_hop = next_hop,
-                                       };
+
+                                       new_tbl_entry.ext_valid = VALID;

                                        /*
                                         * Setting tbl8 entry in one go to avoid
                                         * race conditions
                                         */
-                                       lpm->tbl8[j] = new_tbl8_entry;
+                                       lpm->tbl8[j] = new_tbl_entry;

                                        continue;
                                }
@@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,

 static inline int32_t
 add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-               uint8_t next_hop)
+               struct rte_lpm_res *res)
 {
        uint32_t tbl24_index;
        int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
                /* Set tbl8 entry. */
                for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
                        lpm->tbl8[i].depth = depth;
-                       lpm->tbl8[i].next_hop = next_hop;
+                       lpm->tbl8[i].next_hop = res->next_hop;
+                       lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       lpm->tbl8[i].as_num = res->as_num;
+#endif
                        lpm->tbl8[i].valid = VALID;
                }

@@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
                 * so assign whole structure in one go
                 */

-               struct rte_lpm_tbl24_entry new_tbl24_entry = {
-                       { .tbl8_gindex = (uint8_t)tbl8_group_index, },
-                       .valid = VALID,
-                       .ext_entry = 1,
+               struct rte_lpm_tbl_entry new_tbl24_entry = {
+                       .tbl8_gindex = (uint16_t)tbl8_group_index,
                        .depth = 0,
+                       .ext_valid = 1,
+                       .valid = VALID,
                };

                lpm->tbl24[tbl24_index] = new_tbl24_entry;

        }/* If valid entry but not extended calculate the index into Table8. */
-       else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
+       else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
                /* Search for free tbl8 group. */
                tbl8_group_index = tbl8_alloc(lpm->tbl8);

@@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
                        lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
                        lpm->tbl8[i].next_hop =
                                        lpm->tbl24[tbl24_index].next_hop;
+                       lpm->tbl8[i].fwd_class =
+                                       lpm->tbl24[tbl24_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       lpm->tbl8[i].as_num = lpm->tbl24[tbl24_index].as_num;
+#endif
                }

                tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
@@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
                                        lpm->tbl8[i].depth <= depth) {
                                lpm->tbl8[i].valid = VALID;
                                lpm->tbl8[i].depth = depth;
-                               lpm->tbl8[i].next_hop = next_hop;
+                               lpm->tbl8[i].next_hop = res->next_hop;
+                               lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                               lpm->tbl8[i].as_num = res->as_num;
+#endif

                                continue;
                        }
@@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
                 * so assign whole structure in one go.
                 */

-               struct rte_lpm_tbl24_entry new_tbl24_entry = {
-                               { .tbl8_gindex = (uint8_t)tbl8_group_index, },
-                               .valid = VALID,
-                               .ext_entry = 1,
+               struct rte_lpm_tbl_entry new_tbl24_entry = {
+                               .tbl8_gindex = (uint16_t)tbl8_group_index,
                                .depth = 0,
+                               .ext_valid = 1,
+                               .valid = VALID,
                };

                lpm->tbl24[tbl24_index] = new_tbl24_entry;
@@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,

                        if (!lpm->tbl8[i].valid ||
                                        lpm->tbl8[i].depth <= depth) {
-                               struct rte_lpm_tbl8_entry new_tbl8_entry = {
-                                       .valid = VALID,
+                               struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                                       .as_num = res->as_num,
+#endif
+                                       .next_hop = res->next_hop,
+                                       .fwd_class = res->fwd_class,
                                        .depth = depth,
-                                       .next_hop = next_hop,
-                                       .valid_group = lpm->tbl8[i].valid_group,
+                                       .ext_valid = lpm->tbl8[i].ext_valid,
+                                       .valid = VALID,
                                };

                                /*
@@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
  */
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-               uint8_t next_hop)
+               struct rte_lpm_res *res)
 {
        int32_t rule_index, status = 0;
        uint32_t ip_masked;

        /* Check user arguments. */
-       if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+       if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
                return -EINVAL;

        ip_masked = ip & depth_to_mask(depth);

        /* Add the rule to the rule table. */
-       rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+       rule_index = rule_add(lpm, ip_masked, depth, res);

        /* If the is no space available for new rule return error. */
        if (rule_index < 0) {
@@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
        }

        if (depth <= MAX_DEPTH_TBL24) {
-               status = add_depth_small(lpm, ip_masked, depth, next_hop);
+               status = add_depth_small(lpm, ip_masked, depth, res);
        }
        else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
-               status = add_depth_big(lpm, ip_masked, depth, next_hop);
+               status = add_depth_big(lpm, ip_masked, depth, res);

                /*
                 * If add fails due to exhaustion of tbl8 extensions delete
@@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+                       struct rte_lpm_res *res)
 {
        uint32_t ip_masked;
        int32_t rule_index;

        /* Check user arguments. */
        if ((lpm == NULL) ||
-               (next_hop == NULL) ||
+               (res == NULL) ||
                (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
                return -EINVAL;

@@ -681,7 +706,11 @@ uint8_t *next_hop)
        rule_index = rule_find(lpm, ip_masked, depth);

        if (rule_index >= 0) {
-               *next_hop = lpm->rules_tbl[rule_index].next_hop;
+               res->next_hop = lpm->rules_tbl[rule_index].next_hop;
+               res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+               res->as_num = lpm->rules_tbl[rule_index].as_num;
+#endif
                return 1;
        }

@@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
                 */
                for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {

-                       if (lpm->tbl24[i].ext_entry == 0 &&
+                       if (lpm->tbl24[i].ext_valid == 0 &&
                                        lpm->tbl24[i].depth <= depth ) {
                                lpm->tbl24[i].valid = INVALID;
                        }
@@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
                 * associated with this rule.
                 */

-               struct rte_lpm_tbl24_entry new_tbl24_entry = {
-                       {.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,},
-                       .valid = VALID,
-                       .ext_entry = 0,
+               struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+                       .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+                       .fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
                        .depth = sub_rule_depth,
+                       .ext_valid = 0,
+                       .valid = VALID,
                };

-               struct rte_lpm_tbl8_entry new_tbl8_entry = {
-                       .valid = VALID,
+               struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+                       .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+                       .fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
                        .depth = sub_rule_depth,
-                       .next_hop = lpm->rules_tbl
-                       [sub_rule_index].next_hop,
+                       .valid = VALID,
                };

                for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {

-                       if (lpm->tbl24[i].ext_entry == 0 &&
+                       if (lpm->tbl24[i].ext_valid == 0 &&
                                        lpm->tbl24[i].depth <= depth ) {
                                lpm->tbl24[i] = new_tbl24_entry;
                        }
@@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
  * thus can be recycled
  */
 static inline int32_t
-tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
 {
        uint32_t tbl8_group_end, i;
        tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
        }
        else {
                /* Set new tbl8 entry. */
-               struct rte_lpm_tbl8_entry new_tbl8_entry = {
-                       .valid = VALID,
-                       .depth = sub_rule_depth,
-                       .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
+               struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+                       .fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
                        .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+                       .depth = sub_rule_depth,
+                       .ext_valid = lpm->tbl8[tbl8_group_start].ext_valid,
+                       .valid = VALID,
                };

                /*
@@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
        }
        else if (tbl8_recycle_index > -1) {
                /* Update tbl24 entry. */
-               struct rte_lpm_tbl24_entry new_tbl24_entry = {
-                       { .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, },
-                       .valid = VALID,
-                       .ext_entry = 0,
+               struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
+#endif
+                       .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
+                       .fwd_class = lpm->tbl8[tbl8_recycle_index].fwd_class,
                        .depth = lpm->tbl8[tbl8_recycle_index].depth,
+                       .ext_valid = 0,
+                       .valid = VALID,
                };

                /* Set tbl24 before freeing tbl8 to avoid race condition. */
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..7c615bc 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -31,8 +31,8 @@
  *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */

-#ifndef _RTE_LPM_H_
-#define _RTE_LPM_H_
+#ifndef _RTE_LPM_EXT_H_
+#define _RTE_LPM_EXT_H_

 /**
  * @file
@@ -81,57 +81,58 @@ extern "C" {
 #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
 #endif

-/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+/** @internal bitmask with valid and ext_valid fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03

 /** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS          0x0100
+#define RTE_LPM_LOOKUP_SUCCESS          0x01
+
+struct rte_lpm_res {
+       uint16_t        next_hop;
+       uint8_t         fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint32_t        as_num;
+#endif
+};

 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-/** @internal Tbl24 entry structure. */
-struct rte_lpm_tbl24_entry {
-       /* Stores Next hop or group index (i.e. gindex)into tbl8. */
+struct rte_lpm_tbl_entry {
+       uint8_t valid           :1;
+       uint8_t ext_valid       :1;
+       uint8_t depth           :6;
+       uint8_t fwd_class;
        union {
-               uint8_t next_hop;
-               uint8_t tbl8_gindex;
+               uint16_t next_hop;
+               uint16_t tbl8_gindex;
        };
-       /* Using single uint8_t to store 3 values. */
-       uint8_t valid     :1; /**< Validation flag. */
-       uint8_t ext_entry :1; /**< External entry. */
-       uint8_t depth     :6; /**< Rule depth. */
-};
-
-/** @internal Tbl8 entry structure. */
-struct rte_lpm_tbl8_entry {
-       uint8_t next_hop; /**< next hop. */
-       /* Using single uint8_t to store 3 values. */
-       uint8_t valid       :1; /**< Validation flag. */
-       uint8_t valid_group :1; /**< Group validation flag. */
-       uint8_t depth       :6; /**< Rule depth. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint32_t as_num;
+#endif
 };
 #else
-struct rte_lpm_tbl24_entry {
-       uint8_t depth       :6;
-       uint8_t ext_entry   :1;
-       uint8_t valid       :1;
+struct rte_lpm_tbl_entry {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint32_t as_num;
+#endif
        union {
-               uint8_t tbl8_gindex;
-               uint8_t next_hop;
+               uint16_t tbl8_gindex;
+               uint16_t next_hop;
        };
-};
-
-struct rte_lpm_tbl8_entry {
-       uint8_t depth       :6;
-       uint8_t valid_group :1;
-       uint8_t valid       :1;
-       uint8_t next_hop;
+       uint8_t fwd_class;
+       uint8_t depth           :6;
+       uint8_t ext_valid       :1;
+       uint8_t valid           :1;
 };
 #endif

 /** @internal Rule structure. */
 struct rte_lpm_rule {
        uint32_t ip; /**< Rule IP address. */
-       uint8_t  next_hop; /**< Rule next hop. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint32_t as_num;
+#endif
+       uint16_t  next_hop; /**< Rule next hop. */
+       uint8_t fwd_class;
 };

 /** @internal Contains metadata about the rules table. */
@@ -148,9 +149,9 @@ struct rte_lpm {
        struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */

        /* LPM Tables. */
-       struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+       struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
                        __rte_cache_aligned; /**< LPM tbl24 table. */
-       struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+       struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
                        __rte_cache_aligned; /**< LPM tbl8 table. */
        struct rte_lpm_rule rules_tbl[0] \
                        __rte_cache_aligned; /**< LPM rules. */
@@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct rte_lpm_res *res);

 /**
  * Check if a rule is present in the LPM table,
@@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+                       struct rte_lpm_res *res);

 /**
  * Delete a rule from the LPM table.
@@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
 static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res *res)
 {
        unsigned tbl24_index = (ip >> 8);
-       uint16_t tbl_entry;
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint64_t tbl_entry;
+#else
+       uint32_t tbl_entry;
+#endif
        /* DEBUG: Check user input arguments. */
-       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -EINVAL);

        /* Copy tbl24 entry */
-       tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
+#else
+       tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
+#endif
        /* Copy tbl8 entry (only if needed) */
        if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
                        RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {

                unsigned tbl8_index = (uint8_t)ip +
-                               ((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+                               ((*(struct rte_lpm_tbl_entry *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);

-               tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+               tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
+#else
+               tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+#endif
        }
-
-       *next_hop = (uint8_t)tbl_entry;
+       res->next_hop  = ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
+       res->fwd_class = ((struct rte_lpm_tbl_entry *)&tbl_entry)->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       res->as_num    = ((struct rte_lpm_tbl_entry *)&tbl_entry)->as_num;
+#endif
        return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+
 }

 /**
@@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
  *  @return
  *   -EINVAL for incorrect arguments, otherwise 0
  */
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
-               rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
+               rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)

 static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
-               uint16_t * next_hops, const unsigned n)
+rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
+               struct rte_lpm_res *res_tbl, const unsigned n)
 {
        unsigned i;
+       int ret = 0;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+       uint64_t tbl_entry;
+#else
+       uint32_t tbl_entry;
+#endif
        unsigned tbl24_indexes[n];

        /* DEBUG: Check user input arguments. */
        RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
-                       (next_hops == NULL)), -EINVAL);
+                       (res_tbl == NULL)), -EINVAL);

        for (i = 0; i < n; i++) {
                tbl24_indexes[i] = ips[i] >> 8;
@@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,

        for (i = 0; i < n; i++) {
                /* Simply copy tbl24 entry to output */
-               next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+               tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_indexes[i]];
+#else
+               tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
+#endif
                /* Overwrite output with tbl8 entry if needed */
-               if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-                               RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+               if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+                       RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {

                        unsigned tbl8_index = (uint8_t)ips[i] +
-                                       ((uint8_t)next_hops[i] *
-                                        RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+                               ((*(struct rte_lpm_tbl_entry *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);

-                       next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+                       tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
+#else
+                       tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+#endif
                }
+               res_tbl[i].next_hop  = ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
+               res_tbl[i].fwd_class = ((struct rte_lpm_tbl_entry *)&tbl_entry)->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+               res_tbl[i].as_num    = ((struct rte_lpm_tbl_entry *)&tbl_entry)->as_num;
+#endif
+               ret |= 1 << i;
        }
-       return 0;
+       return ret;
 }

 /* Mask four results. */
@@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
 }
 #endif

-#endif /* _RTE_LPM_H_ */
+#endif /* _RTE_LPM_EXT_H_ */

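As a quick orientation for reviewers, a minimal caller sketch against the
extended API above. It is illustrative only: the table handle, addresses
and field values are hypothetical and not part of the patch.

#include <rte_ip.h>             /* IPv4() address macro */
#include <rte_lpm.h>

/* Hypothetical application-side usage of the extended LPM API. */
static int
example_add_and_lookup(struct rte_lpm *lpm)
{
        struct rte_lpm_res res = {
                .next_hop  = 1000,      /* next-hop IDs are now 16 bits wide */
                .fwd_class = 5,         /* new per-route forwarding class */
#ifdef RTE_LIBRTE_LPM_ASNUM
                .as_num    = 65001,     /* optional 4-byte value, e.g. origin AS */
#endif
        };
        struct rte_lpm_res out;

        /* Install 10.10.10.0/24 with the whole result struct attached. */
        if (rte_lpm_add(lpm, IPv4(10, 10, 10, 0), 24, &res) < 0)
                return -1;

        /* On a hit the full result struct is filled in, not just a next hop. */
        if (rte_lpm_lookup(lpm, IPv4(10, 10, 10, 10), &out) == 0)
                return out.next_hop;

        return -1;
}
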
2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:

> On 10/23/15 9:20 AM, Matthew Hall wrote:
>
>> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
>>
>>> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
>>>
>>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
>>> number of next hops to 256, as the next hop ID is an 8-bit long field.
>>> Proposed extension increase number of next hops for IPv4 to 2^24 and
>>> also allows 32-bits read/write operations.
>>>
>>> This patchset requires additional change to rte_table library to meet
>>> ABI compatibility requirements. A v2 will be sent next week.
>>>
>>
>> I also have a patchset for this.
>>
>> I will send it out as well so we could compare.
>>
>> Matthew.
>>
>
> Sorry about the delay; I only work on DPDK in personal time and not as
> part of a job. My patchset is attached to this email.
>
> One possible advantage with my patchset, compared to others, is that the
> space problem is fixed in both IPv4 and IPv6, to prevent asymmetry
> between these two standards, which is something I try to avoid as much as
> humanly possible.
>
> This is because my application code is green-field, so I absolutely don't
> want to put any ugly hacks or incompatibilities in this code if I can
> possibly avoid it.
>
> Otherwise, I am not necessarily as expert about rte_lpm as some of the
> full-time guys, but I think with four or five of us in the thread hammering
> out patches we will be able to create something amazing together and I am
> very very very very very happy about this.
>
> Matthew.
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
       [not found]       ` <20151026115519.GA7576@MKJASTRX-MOBL>
@ 2015-10-26 11:57         ` Jastrzebski, MichalX K
  2015-10-26 14:03           ` Vladimir Medvedkin
  0 siblings, 1 reply; 24+ messages in thread
From: Jastrzebski, MichalX K @ 2015-10-26 11:57 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev

> -----Original Message-----
> From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> Sent: Monday, October 26, 2015 12:55 PM
> To: Vladimir Medvedkin
> Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> for lpm (ipv4)
> 
> On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > Hi all,
> >
> > Here my implementation
> >
> > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > [full patch and follow-up quotes trimmed; they are identical to
> > Vladimir's message quoted in full earlier in this thread]
> 

Hi Vladimir,
Thanks for sharing your implementation.
Could you please clarify what the as_num and fwd_class fields represent?
The second issue is that your patch does not apply on top of the current
head. Could you check this, please?

Best regards
Michal

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-24  6:09   ` Matthew Hall
  2015-10-25 17:52     ` Vladimir Medvedkin
@ 2015-10-26 12:13     ` Jastrzebski, MichalX K
  2015-10-26 18:40       ` Matthew Hall
  1 sibling, 1 reply; 24+ messages in thread
From: Jastrzebski, MichalX K @ 2015-10-26 12:13 UTC (permalink / raw)
  To: Matthew Hall, Kobylinski, MichalX; +Cc: dev

> -----Original Message-----
> From: Matthew Hall [mailto:mhall@mhcomputing.net]
> Sent: Saturday, October 24, 2015 8:10 AM
> To: Jastrzebski, MichalX K; Kobylinski, MichalX
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> for lpm (ipv4)
> 
> [Matthew's message trimmed; it is quoted in full earlier in this thread]

Hi Matthew,
Thank you for the patch-set.
I can't apply patch 0001-..., could you check it, please?
I get the following error:

Checking patch lib/librte_lpm/rte_lpm.h...
error: while searching for:
#endif

/** @internal bitmask with valid and ext_entry/valid_group fields set */
#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300

/** Bitmask used to indicate successful lookup */
#define RTE_LPM_LOOKUP_SUCCESS          0x0100

#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
        /* Stores Next hop or group index (i.e. gindex)into tbl8. */
        union {
                uint8_t next_hop;
                uint8_t tbl8_gindex;
        };
        /* Using single uint8_t to store 3 values. */
        uint8_t valid     :1; /**< Validation flag. */
        uint8_t ext_entry :1; /**< External entry. */
        uint8_t depth     :6; /**< Rule depth. */
};

/** @internal Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
        uint8_t next_hop; /**< next hop. */
        /* Using single uint8_t to store 3 values. */
        uint8_t valid       :1; /**< Validation flag. */
        uint8_t valid_group :1; /**< Group validation flag. */
        uint8_t depth       :6; /**< Rule depth. */
};
#else
struct rte_lpm_tbl24_entry {

error: patch failed: lib/librte_lpm/rte_lpm.h:82
error: lib/librte_lpm/rte_lpm.h: patch does not apply

Best regards,
Michal

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 11:57         ` Jastrzebski, MichalX K
@ 2015-10-26 14:03           ` Vladimir Medvedkin
  2015-10-26 15:39             ` Michal Jastrzebski
  0 siblings, 1 reply; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-26 14:03 UTC (permalink / raw)
  To: Jastrzebski, MichalX K; +Cc: dev

Hi Michal,

The forwarding class lets us classify traffic based on the destination
prefix; it is similar to Juniper's DCU. For example, on a Juniper MX I can
define a policy that installs a prefix into the FIB with a given class and
then use that class on the dataplane, e.g. in an ACL.
On a Juniper MX it looks like this:
#show policy-options
policy-statement community-to-class {
term customer {
        from community originate-customer;
        then destination-class customer;
    }
}
community originate-customer members 12345:11111;
# show routing-options
forwarding-table {
    export community-to-class;
}
# show forwarding-options
forwarding-options {
    family inet {
        filter {
            output test-filter;
        }
    }
}
# show firewall family inet filter test-filter
term 1 {
    from {
        protocol icmp;
        destination-class customer;
    }
    then {
        discard;
    }
}
announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
After that, on the dataplane we have:
NPC1( vty)# show route ip lookup 10.10.10.10
Route Information (10.10.10.10):
 interface : xe-1/0/0.0 (328)
 Nexthop prefix : -
 Nexthop ID     : 1048574
 MTU            : 0
 Class ID       : 129 <- That is "forwarding class" in my implementation
This construction discards all ICMP traffic destined for prefixes that were
originated with community 12345:11111. With this mechanism we can build
various sophisticated control-plane policies to control traffic on the
dataplane.
The same goes for as_num: on the dataplane we can carry the AS number that
originated the prefix, or any other 4-byte value, e.g. a geo-id.
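To make the dataplane side concrete, here is a minimal sketch of the same
policy in application code. It assumes the struct rte_lpm_res API from my
patch; handle_ipv4() and CLASS_CUSTOMER are hypothetical names I made up for
illustration:

#include <netinet/in.h>		/* IPPROTO_ICMP */
#include <rte_lpm.h>

/* Hypothetical application-defined class ID (129 in the output above). */
#define CLASS_CUSTOMER 129

static int
handle_ipv4(struct rte_lpm *lpm, uint32_t dst_ip, uint8_t proto)
{
	struct rte_lpm_res res;

	/* With the patched lookup: 0 on hit, -ENOENT on miss. */
	if (rte_lpm_lookup(lpm, dst_ip, &res) != 0)
		return -1;	/* no route: drop */

	/* Dataplane equivalent of the Juniper filter above: discard
	 * ICMP destined to prefixes installed with the customer class. */
	if (proto == IPPROTO_ICMP && res.fwd_class == CLASS_CUSTOMER)
		return -1;	/* discard */

	return res.next_hop;	/* forward via the resolved next hop */
}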
What issue do you mean? I think it is because the table/pipeline/test
frameworks don't compile due to the changed API/ABI. You can turn them off
for LPM testing; if my patch is applied I will make the corresponding
changes in the above-mentioned frameworks.
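For reference, a minimal before/after sketch of what callers would change
(the new call assumes this patch; the prefix and values are made up):

#include <rte_lpm.h>

static int
add_route_example(struct rte_lpm *lpm)
{
	/* Old API: the next hop is a bare 8-bit value.
	 *
	 *	return rte_lpm_add(lpm, 0x0a0a0a00, 24, 5);
	 */

	/* New API: callers describe the whole lookup result. */
	struct rte_lpm_res res = {
		.next_hop  = 5,
		.fwd_class = 129,	/* made-up class ID */
#ifdef RTE_LIBRTE_LPM_ASNUM
		.as_num    = 12345,	/* originating AS number */
#endif
	};

	return rte_lpm_add(lpm, 0x0a0a0a00 /* 10.10.10.0 */, 24, &res);
}

Note that rte_lpm_lookup_bulk() also changes behaviour: it now returns a
bitmask of successful lookups instead of always 0, so return-value checks
need updating as well.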

Regards,
Vladimir

2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
michalx.k.jastrzebski@intel.com>:

> > -----Original Message-----
> > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > Sent: Monday, October 26, 2015 12:55 PM
> > To: Vladimir Medvedkin
> > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> > for lpm (ipv4)
> >
> > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > Hi all,
> > >
> > > Here my implementation
> > >
> > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > ---
> > >  config/common_bsdapp     |   1 +
> > >  config/common_linuxapp   |   1 +
> > >  lib/librte_lpm/rte_lpm.c | 194
> > > +++++++++++++++++++++++++++++------------------
> > >  lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
> > >  4 files changed, 219 insertions(+), 140 deletions(-)
> > >
> > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > index b37dcf4..408cc2c 100644
> > > --- a/config/common_bsdapp
> > > +++ b/config/common_bsdapp
> > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > >  #
> > >  CONFIG_RTE_LIBRTE_LPM=y
> > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > >
> > >  #
> > >  # Compile librte_acl
> > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > index 0de43d5..1c60e63 100644
> > > --- a/config/common_linuxapp
> > > +++ b/config/common_linuxapp
> > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > >  #
> > >  CONFIG_RTE_LIBRTE_LPM=y
> > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > >
> > >  #
> > >  # Compile librte_acl
> > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > index 163ba3c..363b400 100644
> > > --- a/lib/librte_lpm/rte_lpm.c
> > > +++ b/lib/librte_lpm/rte_lpm.c
> > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id,
> > int
> > > max_rules,
> > >
> > >         lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
> > >
> > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > +#else
> > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > +#endif
> > >         /* Check user arguments. */
> > >         if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> > >                 rte_errno = EINVAL;
> > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > >   */
> > >  static inline int32_t
> > >  rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > -       uint8_t next_hop)
> > > +       struct rte_lpm_res *res)
> > >  {
> > >         uint32_t rule_gindex, rule_index, last_rule;
> > >         int i;
> > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >
> > >                         /* If rule already exists update its next_hop
> and
> > > return. */
> > >                         if (lpm->rules_tbl[rule_index].ip ==
> ip_masked) {
> > > -                               lpm->rules_tbl[rule_index].next_hop =
> > > next_hop;
> > > -
> > > +                               lpm->rules_tbl[rule_index].next_hop =
> > > res->next_hop;
> > > +                               lpm->rules_tbl[rule_index].fwd_class =
> > > res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                               lpm->rules_tbl[rule_index].as_num =
> > > res->as_num;
> > > +#endif
> > >                                 return rule_index;
> > >                         }
> > >                 }
> > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >
> > >         /* Add the new rule. */
> > >         lpm->rules_tbl[rule_index].ip = ip_masked;
> > > -       lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > +       lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > +       lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > +#endif
> > >
> > >         /* Increment the used rules counter for this rule group. */
> > >         lpm->rule_info[depth - 1].used_rules++;
> > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth)
> > >   * Find, clean and allocate a tbl8.
> > >   */
> > >  static inline int32_t
> > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > >  {
> > >         uint32_t tbl8_gindex; /* tbl8 group index. */
> > > -       struct rte_lpm_tbl8_entry *tbl8_entry;
> > > +       struct rte_lpm_tbl_entry *tbl8_entry;
> > >
> > >         /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group.
> */
> > >         for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
> > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > >                 tbl8_entry = &tbl8[tbl8_gindex *
> > >                                    RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > >                 /* If a free tbl8 group is found clean it and set as
> VALID.
> > > */
> > > -               if (!tbl8_entry->valid_group) {
> > > +               if (!tbl8_entry->ext_valid) {
> > >                         memset(&tbl8_entry[0], 0,
> > >                                         RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> *
> > >                                         sizeof(tbl8_entry[0]));
> > >
> > > -                       tbl8_entry->valid_group = VALID;
> > > +                       tbl8_entry->ext_valid = VALID;
> > >
> > >                         /* Return group index for allocated tbl8
> group. */
> > >                         return tbl8_gindex;
> > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > >  }
> > >
> > >  static inline void
> > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
> > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> > >  {
> > >         /* Set tbl8 group invalid*/
> > > -       tbl8[tbl8_group_start].valid_group = INVALID;
> > > +       tbl8[tbl8_group_start].ext_valid = INVALID;
> > >  }
> > >
> > >  static inline int32_t
> > >  add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > -               uint8_t next_hop)
> > > +               struct rte_lpm_res *res)
> > >  {
> > >         uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end,
> i, j;
> > >
> > >         /* Calculate the index into Table24. */
> > >         tbl24_index = ip >> 8;
> > >         tbl24_range = depth_to_range(depth);
> > > +       struct rte_lpm_tbl_entry new_tbl_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +               .as_num = res->as_num,
> > > +#endif
> > > +               .next_hop = res->next_hop,
> > > +               .fwd_class  = res->fwd_class,
> > > +               .ext_valid = 0,
> > > +               .depth = depth,
> > > +               .valid = VALID,
> > > +       };
> > > +
> > >
> > >         for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
> > >                 /*
> > >                  * For invalid OR valid and non-extended tbl 24
> entries set
> > >                  * entry.
> > >                  */
> > > -               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry
> == 0 &&
> > > +               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid
> == 0 &&
> > >                                 lpm->tbl24[i].depth <= depth)) {
> > >
> > > -                       struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > -                               { .next_hop = next_hop, },
> > > -                               .valid = VALID,
> > > -                               .ext_entry = 0,
> > > -                               .depth = depth,
> > > -                       };
> > > -
> > >                         /* Setting tbl24 entry in one go to avoid race
> > >                          * conditions
> > >                          */
> > > -                       lpm->tbl24[i] = new_tbl24_entry;
> > > +                       lpm->tbl24[i] = new_tbl_entry;
> > >
> > >                         continue;
> > >                 }
> > >
> > > -               if (lpm->tbl24[i].ext_entry == 1) {
> > > +               if (lpm->tbl24[i].ext_valid == 1) {
> > >                         /* If tbl24 entry is valid and extended
> calculate
> > > the
> > >                          *  index into tbl8.
> > >                          */
> > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > ip,
> > > uint8_t depth,
> > >                         for (j = tbl8_index; j < tbl8_group_end; j++) {
> > >                                 if (!lpm->tbl8[j].valid ||
> > >                                                 lpm->tbl8[j].depth <=
> > > depth) {
> > > -                                       struct rte_lpm_tbl8_entry
> > > -                                               new_tbl8_entry = {
> > > -                                               .valid = VALID,
> > > -                                               .valid_group = VALID,
> > > -                                               .depth = depth,
> > > -                                               .next_hop = next_hop,
> > > -                                       };
> > > +
> > > +                                       new_tbl_entry.ext_valid =
> VALID;
> > >
> > >                                         /*
> > >                                          * Setting tbl8 entry in one
> go to
> > > avoid
> > >                                          * race conditions
> > >                                          */
> > > -                                       lpm->tbl8[j] = new_tbl8_entry;
> > > +                                       lpm->tbl8[j] = new_tbl_entry;
> > >
> > >                                         continue;
> > >                                 }
> > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t depth,
> > >
> > >  static inline int32_t
> > >  add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > -               uint8_t next_hop)
> > > +               struct rte_lpm_res *res)
> > >  {
> > >         uint32_t tbl24_index;
> > >         int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > tbl8_index,
> > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >                 /* Set tbl8 entry. */
> > >                 for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> i++) {
> > >                         lpm->tbl8[i].depth = depth;
> > > -                       lpm->tbl8[i].next_hop = next_hop;
> > > +                       lpm->tbl8[i].next_hop = res->next_hop;
> > > +                       lpm->tbl8[i].fwd_class = res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       lpm->tbl8[i].as_num = res->as_num;
> > > +#endif
> > >                         lpm->tbl8[i].valid = VALID;
> > >                 }
> > >
> > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > >                  * so assign whole structure in one go
> > >                  */
> > >
> > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > -                       { .tbl8_gindex = (uint8_t)tbl8_group_index, },
> > > -                       .valid = VALID,
> > > -                       .ext_entry = 1,
> > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +                       .tbl8_gindex = (uint16_t)tbl8_group_index,
> > >                         .depth = 0,
> > > +                       .ext_valid = 1,
> > > +                       .valid = VALID,
> > >                 };
> > >
> > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > >
> > >         }/* If valid entry but not extended calculate the index into
> > > Table8. */
> > > -       else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > +       else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > >                 /* Search for free tbl8 group. */
> > >                 tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > >
> > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >                         lpm->tbl8[i].depth =
> lpm->tbl24[tbl24_index].depth;
> > >                         lpm->tbl8[i].next_hop =
> > >
>  lpm->tbl24[tbl24_index].next_hop;
> > > +                       lpm->tbl8[i].fwd_class =
> > > +
>  lpm->tbl24[tbl24_index].fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       lpm->tbl8[i].as_num =
> > > lpm->tbl24[tbl24_index].as_num;
> > > +#endif
> > >                 }
> > >
> > >                 tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >                                         lpm->tbl8[i].depth <= depth) {
> > >                                 lpm->tbl8[i].valid = VALID;
> > >                                 lpm->tbl8[i].depth = depth;
> > > -                               lpm->tbl8[i].next_hop = next_hop;
> > > +                               lpm->tbl8[i].next_hop = res->next_hop;
> > > +                               lpm->tbl8[i].fwd_class =
> res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                               lpm->tbl8[i].as_num = res->as_num;
> > > +#endif
> > >
> > >                                 continue;
> > >                         }
> > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > >                  * so assign whole structure in one go.
> > >                  */
> > >
> > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > -                               { .tbl8_gindex =
> (uint8_t)tbl8_group_index,
> > > },
> > > -                               .valid = VALID,
> > > -                               .ext_entry = 1,
> > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +                               .tbl8_gindex =
> (uint16_t)tbl8_group_index,
> > >                                 .depth = 0,
> > > +                               .ext_valid = 1,
> > > +                               .valid = VALID,
> > >                 };
> > >
> > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > >
> > >                         if (!lpm->tbl8[i].valid ||
> > >                                         lpm->tbl8[i].depth <= depth) {
> > > -                               struct rte_lpm_tbl8_entry
> new_tbl8_entry = {
> > > -                                       .valid = VALID,
> > > +                               struct rte_lpm_tbl_entry
> new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                                       .as_num = res->as_num,
> > > +#endif
> > > +                                       .next_hop = res->next_hop,
> > > +                                       .fwd_class = res->fwd_class,
> > >                                         .depth = depth,
> > > -                                       .next_hop = next_hop,
> > > -                                       .valid_group =
> > > lpm->tbl8[i].valid_group,
> > > +                                       .ext_valid =
> lpm->tbl8[i].ext_valid,
> > > +                                       .valid = VALID,
> > >                                 };
> > >
> > >                                 /*
> > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > >   */
> > >  int
> > >  rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > -               uint8_t next_hop)
> > > +               struct rte_lpm_res *res)
> > >  {
> > >         int32_t rule_index, status = 0;
> > >         uint32_t ip_masked;
> > >
> > >         /* Check user arguments. */
> > > -       if ((lpm == NULL) || (depth < 1) || (depth >
> RTE_LPM_MAX_DEPTH))
> > > +       if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth >
> > > RTE_LPM_MAX_DEPTH))
> > >                 return -EINVAL;
> > >
> > >         ip_masked = ip & depth_to_mask(depth);
> > >
> > >         /* Add the rule to the rule table. */
> > > -       rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > +       rule_index = rule_add(lpm, ip_masked, depth, res);
> > >
> > >         /* If there is no space available for the new rule, return an error. */
> > >         if (rule_index < 0) {
> > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth,
> > >         }
> > >
> > >         if (depth <= MAX_DEPTH_TBL24) {
> > > -               status = add_depth_small(lpm, ip_masked, depth,
> next_hop);
> > > +               status = add_depth_small(lpm, ip_masked, depth, res);
> > >         }
> > >         else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > -               status = add_depth_big(lpm, ip_masked, depth,
> next_hop);
> > > +               status = add_depth_big(lpm, ip_masked, depth, res);
> > >
> > >                 /*
> > >                  * If add fails due to exhaustion of tbl8 extensions
> delete
> > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth,
> > >   */
> > >  int
> > >  rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> depth,
> > > -uint8_t *next_hop)
> > > +                       struct rte_lpm_res *res)
> > >  {
> > >         uint32_t ip_masked;
> > >         int32_t rule_index;
> > >
> > >         /* Check user arguments. */
> > >         if ((lpm == NULL) ||
> > > -               (next_hop == NULL) ||
> > > +               (res == NULL) ||
> > >                 (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > >                 return -EINVAL;
> > >
> > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > >         rule_index = rule_find(lpm, ip_masked, depth);
> > >
> > >         if (rule_index >= 0) {
> > > -               *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > +               res->next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > +               res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +               res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > +#endif
> > >                 return 1;
> > >         }
> > >
> > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > >                  */
> > >                 for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++)
> > > {
> > >
> > > -                       if (lpm->tbl24[i].ext_entry == 0 &&
> > > +                       if (lpm->tbl24[i].ext_valid == 0 &&
> > >                                         lpm->tbl24[i].depth <= depth )
> {
> > >                                 lpm->tbl24[i].valid = INVALID;
> > >                         }
> > > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> > uint32_t
> > > ip_masked,
> > >                  * associated with this rule.
> > >                  */
> > >
> > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > -                       {.next_hop =
> > > lpm->rules_tbl[sub_rule_index].next_hop,},
> > > -                       .valid = VALID,
> > > -                       .ext_entry = 0,
> > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > +                       .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > +                       .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > >                         .depth = sub_rule_depth,
> > > +                       .ext_valid = 0,
> > > +                       .valid = VALID,
> > >                 };
> > >
> > > -               struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > -                       .valid = VALID,
> > > +               struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > +                       .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > +                       .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > >                         .depth = sub_rule_depth,
> > > -                       .next_hop = lpm->rules_tbl
> > > -                       [sub_rule_index].next_hop,
> > > +                       .valid = VALID,
> > >                 };
> > >
> > >                 for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++)
> > > {
> > >
> > > -                       if (lpm->tbl24[i].ext_entry == 0 &&
> > > +                       if (lpm->tbl24[i].ext_valid == 0 &&
> > >                                         lpm->tbl24[i].depth <= depth )
> {
> > >                                 lpm->tbl24[i] = new_tbl24_entry;
> > >                         }
> > > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > >   * thus can be recycled
> > >   */
> > >  static inline int32_t
> > > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > > tbl8_group_start)
> > > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > tbl8_group_start)
> > >  {
> > >         uint32_t tbl8_group_end, i;
> > >         tbl8_group_end = tbl8_group_start +
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > >         }
> > >         else {
> > >                 /* Set new tbl8 entry. */
> > > -               struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > -                       .valid = VALID,
> > > -                       .depth = sub_rule_depth,
> > > -                       .valid_group =
> > > lpm->tbl8[tbl8_group_start].valid_group,
> > > +               struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > +                       .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > >                         .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > +                       .depth = sub_rule_depth,
> > > +                       .ext_valid =
> lpm->tbl8[tbl8_group_start].ext_valid,
> > > +                       .valid = VALID,
> > >                 };
> > >
> > >                 /*
> > > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > >         }
> > >         else if (tbl8_recycle_index > -1) {
> > >                 /* Update tbl24 entry. */
> > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > -                       { .next_hop =
> > > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > > -                       .valid = VALID,
> > > -                       .ext_entry = 0,
> > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
> > > +#endif
> > > +                       .next_hop =
> lpm->tbl8[tbl8_recycle_index].next_hop,
> > > +                       .fwd_class =
> > > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > >                         .depth = lpm->tbl8[tbl8_recycle_index].depth,
> > > +                       .ext_valid = 0,
> > > +                       .valid = VALID,
> > >                 };
> > >
> > >                 /* Set tbl24 before freeing tbl8 to avoid race
> condition. */
> > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > index c299ce2..7c615bc 100644
> > > --- a/lib/librte_lpm/rte_lpm.h
> > > +++ b/lib/librte_lpm/rte_lpm.h
> > > @@ -31,8 +31,8 @@
> > >   *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> > DAMAGE.
> > >   */
> > >
> > > -#ifndef _RTE_LPM_H_
> > > -#define _RTE_LPM_H_
> > > +#ifndef _RTE_LPM_EXT_H_
> > > +#define _RTE_LPM_EXT_H_
> > >
> > >  /**
> > >   * @file
> > > @@ -81,57 +81,58 @@ extern "C" {
> > >  #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > >  #endif
> > >
> > > -/** @internal bitmask with valid and ext_entry/valid_group fields set
> */
> > > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > > +/** @internal bitmask with valid and ext_valid/ext_valid fields set */
> > > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> > >
> > >  /** Bitmask used to indicate successful lookup */
> > > -#define RTE_LPM_LOOKUP_SUCCESS          0x0100
> > > +#define RTE_LPM_LOOKUP_SUCCESS          0x01
> > > +
> > > +struct rte_lpm_res {
> > > +       uint16_t        next_hop;
> > > +       uint8_t         fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint32_t        as_num;
> > > +#endif
> > > +};
> > >
> > >  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > > -/** @internal Tbl24 entry structure. */
> > > -struct rte_lpm_tbl24_entry {
> > > -       /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> > > +struct rte_lpm_tbl_entry {
> > > +       uint8_t valid           :1;
> > > +       uint8_t ext_valid       :1;
> > > +       uint8_t depth           :6;
> > > +       uint8_t fwd_class;
> > >         union {
> > > -               uint8_t next_hop;
> > > -               uint8_t tbl8_gindex;
> > > +               uint16_t next_hop;
> > > +               uint16_t tbl8_gindex;
> > >         };
> > > -       /* Using single uint8_t to store 3 values. */
> > > -       uint8_t valid     :1; /**< Validation flag. */
> > > -       uint8_t ext_entry :1; /**< External entry. */
> > > -       uint8_t depth     :6; /**< Rule depth. */
> > > -};
> > > -
> > > -/** @internal Tbl8 entry structure. */
> > > -struct rte_lpm_tbl8_entry {
> > > -       uint8_t next_hop; /**< next hop. */
> > > -       /* Using single uint8_t to store 3 values. */
> > > -       uint8_t valid       :1; /**< Validation flag. */
> > > -       uint8_t valid_group :1; /**< Group validation flag. */
> > > -       uint8_t depth       :6; /**< Rule depth. */
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint32_t as_num;
> > > +#endif
> > >  };
> > >  #else
> > > -struct rte_lpm_tbl24_entry {
> > > -       uint8_t depth       :6;
> > > -       uint8_t ext_entry   :1;
> > > -       uint8_t valid       :1;
> > > +struct rte_lpm_tbl_entry {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint32_t as_num;
> > > +#endif
> > >         union {
> > > -               uint8_t tbl8_gindex;
> > > -               uint8_t next_hop;
> > > +               uint16_t tbl8_gindex;
> > > +               uint16_t next_hop;
> > >         };
> > > -};
> > > -
> > > -struct rte_lpm_tbl8_entry {
> > > -       uint8_t depth       :6;
> > > -       uint8_t valid_group :1;
> > > -       uint8_t valid       :1;
> > > -       uint8_t next_hop;
> > > +       uint8_t fwd_class;
> > > +       uint8_t depth           :6;
> > > +       uint8_t ext_valid       :1;
> > > +       uint8_t valid           :1;
> > >  };
> > >  #endif
> > >
> > >  /** @internal Rule structure. */
> > >  struct rte_lpm_rule {
> > >         uint32_t ip; /**< Rule IP address. */
> > > -       uint8_t  next_hop; /**< Rule next hop. */
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint32_t as_num;
> > > +#endif
> > > +       uint16_t  next_hop; /**< Rule next hop. */
> > > +       uint8_t fwd_class;
> > >  };
> > >
> > >  /** @internal Contains metadata about the rules table. */
> > > @@ -148,9 +149,9 @@ struct rte_lpm {
> > >         struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> Rule
> > > info table. */
> > >
> > >         /* LPM Tables. */
> > > -       struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > +       struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > >                         __rte_cache_aligned; /**< LPM tbl24 table. */
> > > -       struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > +       struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > >                         __rte_cache_aligned; /**< LPM tbl8 table. */
> > >         struct rte_lpm_rule rules_tbl[0] \
> > >                         __rte_cache_aligned; /**< LPM rules. */
> > > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > >   *   0 on success, negative value otherwise
> > >   */
> > >  int
> > > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t
> > > next_hop);
> > > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct
> > > rte_lpm_res *res);
> > >
> > >  /**
> > >   * Check if a rule is present in the LPM table,
> > > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth, uint8_t next_hop);
> > >   */
> > >  int
> > >  rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> depth,
> > > -uint8_t *next_hop);
> > > +                       struct rte_lpm_res *res);
> > >
> > >  /**
> > >   * Delete a rule from the LPM table.
> > > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > >   *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on
> lookup
> > > hit
> > >   */
> > >  static inline int
> > > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> > > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res
> *res)
> > >  {
> > >         unsigned tbl24_index = (ip >> 8);
> > > -       uint16_t tbl_entry;
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint64_t tbl_entry;
> > > +#else
> > > +       uint32_t tbl_entry;
> > > +#endif
> > >         /* DEBUG: Check user input arguments. */
> > > -       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)),
> > > -EINVAL);
> > > +       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> > EINVAL);
> > >
> > >         /* Copy tbl24 entry */
> > > -       tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > > +#else
> > > +       tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > > +#endif
> > >         /* Copy tbl8 entry (only if needed) */
> > >         if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > >                         RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > >
> > >                 unsigned tbl8_index = (uint8_t)ip +
> > > -                               ((uint8_t)tbl_entry *
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > +                               ((*(struct rte_lpm_tbl_entry
> > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > >
> > > -               tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +               tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
> > > +#else
> > > +               tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
> > > +#endif
> > >         }
> > > -
> > > -       *next_hop = (uint8_t)tbl_entry;
> > > +       res->next_hop  = ((struct rte_lpm_tbl_entry
> *)&tbl_entry)->next_hop;
> > > +       res->fwd_class = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       res->as_num       = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->as_num;
> > > +#endif
> > >         return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > > +
> > >  }
> > >
> > >  /**
> > > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t *next_hop)
> > >   *  @return
> > >   *   -EINVAL for incorrect arguments, otherwise 0
> > >   */
> > > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > > -               rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > > +               rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> > >
> > >  static inline int
> > > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *
> ips,
> > > -               uint16_t * next_hops, const unsigned n)
> > > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t
> *ips,
> > > +               struct rte_lpm_res *res_tbl, const unsigned n)
> > >  {
> > >         unsigned i;
> > > +       int ret = 0;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +       uint64_t tbl_entry;
> > > +#else
> > > +       uint32_t tbl_entry;
> > > +#endif
> > >         unsigned tbl24_indexes[n];
> > >
> > >         /* DEBUG: Check user input arguments. */
> > >         RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > > -                       (next_hops == NULL)), -EINVAL);
> > > +                       (res_tbl == NULL)), -EINVAL);
> > >
> > >         for (i = 0; i < n; i++) {
> > >                 tbl24_indexes[i] = ips[i] >> 8;
> > > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> > *lpm,
> > > const uint32_t * ips,
> > >
> > >         for (i = 0; i < n; i++) {
> > >                 /* Simply copy tbl24 entry to output */
> > > -               next_hops[i] = *(const uint16_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +               tbl_entry = *(const uint64_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > +#else
> > > +               tbl_entry = *(const uint32_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > +#endif
> > >                 /* Overwrite output with tbl8 entry if needed */
> > > -               if (unlikely((next_hops[i] &
> > > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > -                               RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > +               if (unlikely((tbl_entry &
> RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > > ==
> > > +                       RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > >
> > >                         unsigned tbl8_index = (uint8_t)ips[i] +
> > > -                                       ((uint8_t)next_hops[i] *
> > > -
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > +                               ((*(struct rte_lpm_tbl_entry
> > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > >
> > > -                       next_hops[i] = *(const uint16_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +                       tbl_entry = *(const uint64_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#else
> > > +                       tbl_entry = *(const uint32_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#endif
> > >                 }
> > > +               res_tbl[i].next_hop     = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->next_hop;
> > > +               res_tbl[i].fwd_class    = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > +               res_tbl[i].as_num       = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->as_num;
> > > +#endif
> > > +               ret |= 1 << i;
> > >         }
> > > -       return 0;
> > > +       return ret;
> > >  }
> > >
> > >  /* Mask four results. */
> > > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > __m128i ip,
> > > uint16_t hop[4],
> > >  }
> > >  #endif
> > >
> > > -#endif /* _RTE_LPM_H_ */
> > > +#endif /* _RTE_LPM_EXT_H_ */
> > >
> > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > >
> > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > >
> > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > >>
> > > >>> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > > >>>
> > > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits
> the
> > > >>> number of next hops to 256, as the next hop ID is an 8-bit long
> field.
> > > >>> Proposed extension increase number of next hops for IPv4 to 2^24
> and
> > > >>> also allows 32-bits read/write operations.
> > > >>>
> > > >>> This patchset requires additional change to rte_table library to
> meet
> > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > >>>
> > > >>
> > > >> I also have a patchset for this.
> > > >>
> > > >> I will send it out as well so we could compare.
> > > >>
> > > >> Matthew.
> > > >>
> > > >
> > > > Sorry about the delay; I only work on DPDK in personal time and not as
> > > > part of a job. My patchset is attached to this email.
> > > >
> > > > One possible advantage with my patchset, compared to others, is that the
> > > > space problem is fixed in both IPv4 and IPv6, to prevent asymmetry
> > > > between these two standards, which is something I try to avoid as much as
> > > > humanly possible.
> > > >
> > > > This is because my application code is green-field, so I absolutely don't
> > > > want to put any ugly hacks or incompatibilities in this code if I can
> > > > possibly avoid it.
> > > >
> > > > Otherwise, I am not necessarily as expert about rte_lpm as some of the
> > > > full-time guys, but I think with four or five of us in the thread
> > > > hammering out patches we will be able to create something amazing
> > > > together and I am very very very very very happy about this.
> > > >
> > > > Matthew.
> > > >
> >
>
> Hi Vladimir,
> Thanks for sharing your implementation.
> Could you please clarify what the as_num and fwd_class fields represent?
> The second issue I have is that your patch doesn't apply on top of the
> current head. Could you check this, please?
>
> Best regards
> Michal
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 14:03           ` Vladimir Medvedkin
@ 2015-10-26 15:39             ` Michal Jastrzebski
  2015-10-26 16:59               ` Vladimir Medvedkin
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Jastrzebski @ 2015-10-26 15:39 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev

On Mon, Oct 26, 2015 at 05:03:31PM +0300, Vladimir Medvedkin wrote:
> Hi Michal,
> 
> Forwarding class can help us classify traffic based on the dst prefix; it's
> something like Juniper DCU. For example, on a Juniper MX I can define a policy
> that installs a prefix into the FIB with some class and then use that class on
> the dataplane, for example in an ACL.
> On a Juniper MX I can do something like this:
> #show policy-options
> policy-statement community-to-class {
> term customer {
>         from community originate-customer;
>         then destination-class customer;
>     }
> }
> community originate-customer members 12345:11111;
> # show routing-options
> forwarding-table {
>     export community-to-class;
> }
> # show forwarding-options
> forwarding-options {
>     family inet {
>         filter {
>             output test-filter;
>         }
>     }
> }
> # show firewall family inet filter test-filter
> term 1 {
>     from {
>         protocol icmp;
>         destination-class customer;
>     }
>     then {
>         discard;
>     }
> }
> announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
> After that, on the dataplane we have:
> NPC1( vty)# show route ip lookup 10.10.10.10
> Route Information (10.10.10.10):
>  interface : xe-1/0/0.0 (328)
>  Nexthop prefix : -
>  Nexthop ID     : 1048574
>  MTU            : 0
>  Class ID       : 129 <- That is "forwarding class" in my implementation
> This construction discards all ICMP traffic destined for prefixes that were
> originated with community 12345:11111. With this mechanism we can build
> various sophisticated control-plane policies to control traffic on the
> dataplane.
> The same goes for as_num: on the dataplane we can carry the AS number that
> originated the prefix, or any other 4-byte value, e.g. a geo-id.
> What issue do you mean? I think it is because the table/pipeline/test
> frameworks don't compile due to the changed API/ABI. You can turn them off
> for LPM testing; if my patch is applied I will make the corresponding
> changes in the above-mentioned frameworks.
> 
> Regards,
> Vladimir

Hi Vladimir,
The issue I have is with applying your patch, not with compilation.
This is the error I get:
Checking patch config/common_bsdapp...
Checking patch config/common_linuxapp...
Checking patch lib/librte_lpm/rte_lpm.c...
error: while searching for:

       lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);

       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);

       /* Check user arguments. */
       if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
               rte_errno = EINVAL;

error: patch failed: lib/librte_lpm/rte_lpm.c:159
error: lib/librte_lpm/rte_lpm.c: patch does not apply
Checking patch lib/librte_lpm/rte_lpm.h...
error: while searching for:
#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
#endif

/** @internal bitmask with valid and ext_entry/valid_group fields set */
#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300

/** Bitmask used to indicate successful lookup */
#define RTE_LPM_LOOKUP_SUCCESS          0x0100

#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
       /* Stores Next hop or group index (i.e. gindex)into tbl8. */
       union {
               uint8_t next_hop;
               uint8_t tbl8_gindex;
       };
       /* Using single uint8_t to store 3 values. */
       uint8_t valid     :1; /**< Validation flag. */
       uint8_t ext_entry :1; /**< External entry. */
       uint8_t depth     :6; /**< Rule depth. */
};

/** @internal Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
       uint8_t next_hop; /**< next hop. */
       /* Using single uint8_t to store 3 values. */
       uint8_t valid       :1; /**< Validation flag. */
       uint8_t valid_group :1; /**< Group validation flag. */
       uint8_t depth       :6; /**< Rule depth. */
};
#else
struct rte_lpm_tbl24_entry {
       uint8_t depth       :6;
       uint8_t ext_entry   :1;
       uint8_t valid       :1;
       union {
               uint8_t tbl8_gindex;
               uint8_t next_hop;
       };
};

struct rte_lpm_tbl8_entry {
       uint8_t depth       :6;
       uint8_t valid_group :1;
       uint8_t valid       :1;
       uint8_t next_hop;
};
#endif

/** @internal Rule structure. */
struct rte_lpm_rule {
       uint32_t ip; /**< Rule IP address. */
       uint8_t  next_hop; /**< Rule next hop. */
};

/** @internal Contains metadata about the rules table. */

error: patch failed: lib/librte_lpm/rte_lpm.h:81
error: lib/librte_lpm/rte_lpm.h: patch does not apply



> 2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
> michalx.k.jastrzebski@intel.com>:
> 
> > > -----Original Message-----
> > > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > > Sent: Monday, October 26, 2015 12:55 PM
> > > To: Vladimir Medvedkin
> > > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> > > for lpm (ipv4)
> > >
> > > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > > Hi all,
> > > >
> > > > Here my implementation
> > > >
> > > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > > ---
> > > >  config/common_bsdapp     |   1 +
> > > >  config/common_linuxapp   |   1 +
> > > >  lib/librte_lpm/rte_lpm.c | 194
> > > > +++++++++++++++++++++++++++++------------------
> > > >  lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
> > > >  4 files changed, 219 insertions(+), 140 deletions(-)
> > > >
> > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > index b37dcf4..408cc2c 100644
> > > > --- a/config/common_bsdapp
> > > > +++ b/config/common_bsdapp
> > > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > >  #
> > > >  CONFIG_RTE_LIBRTE_LPM=y
> > > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > >
> > > >  #
> > > >  # Compile librte_acl
> > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > index 0de43d5..1c60e63 100644
> > > > --- a/config/common_linuxapp
> > > > +++ b/config/common_linuxapp
> > > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > >  #
> > > >  CONFIG_RTE_LIBRTE_LPM=y
> > > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > >
> > > >  #
> > > >  # Compile librte_acl
> > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > index 163ba3c..363b400 100644
> > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id,
> > > int
> > > > max_rules,
> > > >
> > > >         lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
> > > >
> > > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > > +#else
> > > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > > +#endif
> > > >         /* Check user arguments. */
> > > >         if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> > > >                 rte_errno = EINVAL;
> > > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > > >   */
> > > >  static inline int32_t
> > > >  rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > -       uint8_t next_hop)
> > > > +       struct rte_lpm_res *res)
> > > >  {
> > > >         uint32_t rule_gindex, rule_index, last_rule;
> > > >         int i;
> > > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >
> > > >                         /* If rule already exists update its next_hop
> > and
> > > > return. */
> > > >                         if (lpm->rules_tbl[rule_index].ip ==
> > ip_masked) {
> > > > -                               lpm->rules_tbl[rule_index].next_hop =
> > > > next_hop;
> > > > -
> > > > +                               lpm->rules_tbl[rule_index].next_hop =
> > > > res->next_hop;
> > > > +                               lpm->rules_tbl[rule_index].fwd_class =
> > > > res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                               lpm->rules_tbl[rule_index].as_num =
> > > > res->as_num;
> > > > +#endif
> > > >                                 return rule_index;
> > > >                         }
> > > >                 }
> > > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >
> > > >         /* Add the new rule. */
> > > >         lpm->rules_tbl[rule_index].ip = ip_masked;
> > > > -       lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > > +       lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > > +       lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > > +#endif
> > > >
> > > >         /* Increment the used rules counter for this rule group. */
> > > >         lpm->rule_info[depth - 1].used_rules++;
> > > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth)
> > > >   * Find, clean and allocate a tbl8.
> > > >   */
> > > >  static inline int32_t
> > > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > > >  {
> > > >         uint32_t tbl8_gindex; /* tbl8 group index. */
> > > > -       struct rte_lpm_tbl8_entry *tbl8_entry;
> > > > +       struct rte_lpm_tbl_entry *tbl8_entry;
> > > >
> > > >         /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group.
> > */
> > > >         for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
> > > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > >                 tbl8_entry = &tbl8[tbl8_gindex *
> > > >                                    RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > >                 /* If a free tbl8 group is found clean it and set as
> > VALID.
> > > > */
> > > > -               if (!tbl8_entry->valid_group) {
> > > > +               if (!tbl8_entry->ext_valid) {
> > > >                         memset(&tbl8_entry[0], 0,
> > > >                                         RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> > *
> > > >                                         sizeof(tbl8_entry[0]));
> > > >
> > > > -                       tbl8_entry->valid_group = VALID;
> > > > +                       tbl8_entry->ext_valid = VALID;
> > > >
> > > >                         /* Return group index for allocated tbl8
> > group. */
> > > >                         return tbl8_gindex;
> > > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > >  }
> > > >
> > > >  static inline void
> > > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
> > > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> > > >  {
> > > >         /* Set tbl8 group invalid*/
> > > > -       tbl8[tbl8_group_start].valid_group = INVALID;
> > > > +       tbl8[tbl8_group_start].ext_valid = INVALID;
> > > >  }
> > > >
> > > >  static inline int32_t
> > > >  add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > -               uint8_t next_hop)
> > > > +               struct rte_lpm_res *res)
> > > >  {
> > > >         uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end,
> > i, j;
> > > >
> > > >         /* Calculate the index into Table24. */
> > > >         tbl24_index = ip >> 8;
> > > >         tbl24_range = depth_to_range(depth);
> > > > +       struct rte_lpm_tbl_entry new_tbl_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +               .as_num = res->as_num,
> > > > +#endif
> > > > +               .next_hop = res->next_hop,
> > > > +               .fwd_class  = res->fwd_class,
> > > > +               .ext_valid = 0,
> > > > +               .depth = depth,
> > > > +               .valid = VALID,
> > > > +       };
> > > > +
> > > >
> > > >         for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
> > > >                 /*
> > > >                  * For invalid OR valid and non-extended tbl 24
> > entries set
> > > >                  * entry.
> > > >                  */
> > > > -               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry
> > == 0 &&
> > > > +               if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid
> > == 0 &&
> > > >                                 lpm->tbl24[i].depth <= depth)) {
> > > >
> > > > -                       struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > -                               { .next_hop = next_hop, },
> > > > -                               .valid = VALID,
> > > > -                               .ext_entry = 0,
> > > > -                               .depth = depth,
> > > > -                       };
> > > > -
> > > >                         /* Setting tbl24 entry in one go to avoid race
> > > >                          * conditions
> > > >                          */
> > > > -                       lpm->tbl24[i] = new_tbl24_entry;
> > > > +                       lpm->tbl24[i] = new_tbl_entry;
> > > >
> > > >                         continue;
> > > >                 }
> > > >
> > > > -               if (lpm->tbl24[i].ext_entry == 1) {
> > > > +               if (lpm->tbl24[i].ext_valid == 1) {
> > > >                         /* If tbl24 entry is valid and extended
> > calculate
> > > > the
> > > >                          *  index into tbl8.
> > > >                          */
> > > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip,
> > > > uint8_t depth,
> > > >                         for (j = tbl8_index; j < tbl8_group_end; j++) {
> > > >                                 if (!lpm->tbl8[j].valid ||
> > > >                                                 lpm->tbl8[j].depth <=
> > > > depth) {
> > > > -                                       struct rte_lpm_tbl8_entry
> > > > -                                               new_tbl8_entry = {
> > > > -                                               .valid = VALID,
> > > > -                                               .valid_group = VALID,
> > > > -                                               .depth = depth,
> > > > -                                               .next_hop = next_hop,
> > > > -                                       };
> > > > +
> > > > +                                       new_tbl_entry.ext_valid =
> > VALID;
> > > >
> > > >                                         /*
> > > >                                          * Setting tbl8 entry in one
> > go to
> > > > avoid
> > > >                                          * race conditions
> > > >                                          */
> > > > -                                       lpm->tbl8[j] = new_tbl8_entry;
> > > > +                                       lpm->tbl8[j] = new_tbl_entry;
> > > >
> > > >                                         continue;
> > > >                                 }
> > > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t depth,
> > > >
> > > >  static inline int32_t
> > > >  add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > -               uint8_t next_hop)
> > > > +               struct rte_lpm_res *res)
> > > >  {
> > > >         uint32_t tbl24_index;
> > > >         int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > > tbl8_index,
> > > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >                 /* Set tbl8 entry. */
> > > >                 for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> > i++) {
> > > >                         lpm->tbl8[i].depth = depth;
> > > > -                       lpm->tbl8[i].next_hop = next_hop;
> > > > +                       lpm->tbl8[i].next_hop = res->next_hop;
> > > > +                       lpm->tbl8[i].fwd_class = res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       lpm->tbl8[i].as_num = res->as_num;
> > > > +#endif
> > > >                         lpm->tbl8[i].valid = VALID;
> > > >                 }
> > > >
> > > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > >                  * so assign whole structure in one go
> > > >                  */
> > > >
> > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > -                       { .tbl8_gindex = (uint8_t)tbl8_group_index, },
> > > > -                       .valid = VALID,
> > > > -                       .ext_entry = 1,
> > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +                       .tbl8_gindex = (uint16_t)tbl8_group_index,
> > > >                         .depth = 0,
> > > > +                       .ext_valid = 1,
> > > > +                       .valid = VALID,
> > > >                 };
> > > >
> > > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > >
> > > >         }/* If valid entry but not extended calculate the index into
> > > > Table8. */
> > > > -       else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > > +       else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > > >                 /* Search for free tbl8 group. */
> > > >                 tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > > >
> > > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >                         lpm->tbl8[i].depth =
> > lpm->tbl24[tbl24_index].depth;
> > > >                         lpm->tbl8[i].next_hop =
> > > >
> >  lpm->tbl24[tbl24_index].next_hop;
> > > > +                       lpm->tbl8[i].fwd_class =
> > > > +
> >  lpm->tbl24[tbl24_index].fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       lpm->tbl8[i].as_num =
> > > > lpm->tbl24[tbl24_index].as_num;
> > > > +#endif
> > > >                 }
> > > >
> > > >                 tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >                                         lpm->tbl8[i].depth <= depth) {
> > > >                                 lpm->tbl8[i].valid = VALID;
> > > >                                 lpm->tbl8[i].depth = depth;
> > > > -                               lpm->tbl8[i].next_hop = next_hop;
> > > > +                               lpm->tbl8[i].next_hop = res->next_hop;
> > > > +                               lpm->tbl8[i].fwd_class =
> > res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                               lpm->tbl8[i].as_num = res->as_num;
> > > > +#endif
> > > >
> > > >                                 continue;
> > > >                         }
> > > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > >                  * so assign whole structure in one go.
> > > >                  */
> > > >
> > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > -                               { .tbl8_gindex =
> > (uint8_t)tbl8_group_index,
> > > > },
> > > > -                               .valid = VALID,
> > > > -                               .ext_entry = 1,
> > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +                               .tbl8_gindex =
> > (uint16_t)tbl8_group_index,
> > > >                                 .depth = 0,
> > > > +                               .ext_valid = 1,
> > > > +                               .valid = VALID,
> > > >                 };
> > > >
> > > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > >
> > > >                         if (!lpm->tbl8[i].valid ||
> > > >                                         lpm->tbl8[i].depth <= depth) {
> > > > -                               struct rte_lpm_tbl8_entry
> > new_tbl8_entry = {
> > > > -                                       .valid = VALID,
> > > > +                               struct rte_lpm_tbl_entry
> > new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                                       .as_num = res->as_num,
> > > > +#endif
> > > > +                                       .next_hop = res->next_hop,
> > > > +                                       .fwd_class = res->fwd_class,
> > > >                                         .depth = depth,
> > > > -                                       .next_hop = next_hop,
> > > > -                                       .valid_group =
> > > > lpm->tbl8[i].valid_group,
> > > > +                                       .ext_valid =
> > lpm->tbl8[i].ext_valid,
> > > > +                                       .valid = VALID,
> > > >                                 };
> > > >
> > > >                                 /*
> > > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > >   */
> > > >  int
> > > >  rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > -               uint8_t next_hop)
> > > > +               struct rte_lpm_res *res)
> > > >  {
> > > >         int32_t rule_index, status = 0;
> > > >         uint32_t ip_masked;
> > > >
> > > >         /* Check user arguments. */
> > > > -       if ((lpm == NULL) || (depth < 1) || (depth >
> > RTE_LPM_MAX_DEPTH))
> > > > +       if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth >
> > > > RTE_LPM_MAX_DEPTH))
> > > >                 return -EINVAL;
> > > >
> > > >         ip_masked = ip & depth_to_mask(depth);
> > > >
> > > >         /* Add the rule to the rule table. */
> > > > -       rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > > +       rule_index = rule_add(lpm, ip_masked, depth, res);
> > > >
> > > >         /* If there is no space available for the new rule, return an error. */
> > > >         if (rule_index < 0) {
> > > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth,
> > > >         }
> > > >
> > > >         if (depth <= MAX_DEPTH_TBL24) {
> > > > -               status = add_depth_small(lpm, ip_masked, depth,
> > next_hop);
> > > > +               status = add_depth_small(lpm, ip_masked, depth, res);
> > > >         }
> > > >         else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > > -               status = add_depth_big(lpm, ip_masked, depth,
> > next_hop);
> > > > +               status = add_depth_big(lpm, ip_masked, depth, res);
> > > >
> > > >                 /*
> > > >                  * If add fails due to exhaustion of tbl8 extensions
> > delete
> > > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth,
> > > >   */
> > > >  int
> > > >  rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > depth,
> > > > -uint8_t *next_hop)
> > > > +                       struct rte_lpm_res *res)
> > > >  {
> > > >         uint32_t ip_masked;
> > > >         int32_t rule_index;
> > > >
> > > >         /* Check user arguments. */
> > > >         if ((lpm == NULL) ||
> > > > -               (next_hop == NULL) ||
> > > > +               (res == NULL) ||
> > > >                 (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > > >                 return -EINVAL;
> > > >
> > > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > > >         rule_index = rule_find(lpm, ip_masked, depth);
> > > >
> > > >         if (rule_index >= 0) {
> > > > -               *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > +               res->next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > +               res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +               res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > > +#endif
> > > >                 return 1;
> > > >         }
> > > >
> > > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > >                  */
> > > >                 for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> > i++)
> > > > {
> > > >
> > > > -                       if (lpm->tbl24[i].ext_entry == 0 &&
> > > > +                       if (lpm->tbl24[i].ext_valid == 0 &&
> > > >                                         lpm->tbl24[i].depth <= depth )
> > {
> > > >                                 lpm->tbl24[i].valid = INVALID;
> > > >                         }
> > > > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> > > uint32_t
> > > > ip_masked,
> > > >                  * associated with this rule.
> > > >                  */
> > > >
> > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > -                       {.next_hop =
> > > > lpm->rules_tbl[sub_rule_index].next_hop,},
> > > > -                       .valid = VALID,
> > > > -                       .ext_entry = 0,
> > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > +                       .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > +                       .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > >                         .depth = sub_rule_depth,
> > > > +                       .ext_valid = 0,
> > > > +                       .valid = VALID,
> > > >                 };
> > > >
> > > > -               struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > -                       .valid = VALID,
> > > > +               struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > +                       .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > +                       .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > >                         .depth = sub_rule_depth,
> > > > -                       .next_hop = lpm->rules_tbl
> > > > -                       [sub_rule_index].next_hop,
> > > > +                       .valid = VALID,
> > > >                 };
> > > >
> > > >                 for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> > i++)
> > > > {
> > > >
> > > > -                       if (lpm->tbl24[i].ext_entry == 0 &&
> > > > +                       if (lpm->tbl24[i].ext_valid == 0 &&
> > > >                                         lpm->tbl24[i].depth <= depth )
> > {
> > > >                                 lpm->tbl24[i] = new_tbl24_entry;
> > > >                         }
> > > > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > >   * thus can be recycled
> > > >   */
> > > >  static inline int32_t
> > > > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > > > tbl8_group_start)
> > > > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > > tbl8_group_start)
> > > >  {
> > > >         uint32_t tbl8_group_end, i;
> > > >         tbl8_group_end = tbl8_group_start +
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > > > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > >         }
> > > >         else {
> > > >                 /* Set new tbl8 entry. */
> > > > -               struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > -                       .valid = VALID,
> > > > -                       .depth = sub_rule_depth,
> > > > -                       .valid_group =
> > > > lpm->tbl8[tbl8_group_start].valid_group,
> > > > +               struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > +                       .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > >                         .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > +                       .depth = sub_rule_depth,
> > > > +                       .ext_valid =
> > lpm->tbl8[tbl8_group_start].ext_valid,
> > > > +                       .valid = VALID,
> > > >                 };
> > > >
> > > >                 /*
> > > > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > >         }
> > > >         else if (tbl8_recycle_index > -1) {
> > > >                 /* Update tbl24 entry. */
> > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > -                       { .next_hop =
> > > > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > > > -                       .valid = VALID,
> > > > -                       .ext_entry = 0,
> > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
> > > > +#endif
> > > > +                       .next_hop =
> > lpm->tbl8[tbl8_recycle_index].next_hop,
> > > > +                       .fwd_class =
> > > > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > > >                         .depth = lpm->tbl8[tbl8_recycle_index].depth,
> > > > +                       .ext_valid = 0,
> > > > +                       .valid = VALID,
> > > >                 };
> > > >
> > > >                 /* Set tbl24 before freeing tbl8 to avoid race
> > condition. */
> > > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > > index c299ce2..7c615bc 100644
> > > > --- a/lib/librte_lpm/rte_lpm.h
> > > > +++ b/lib/librte_lpm/rte_lpm.h
> > > > @@ -31,8 +31,8 @@
> > > >   *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> > > DAMAGE.
> > > >   */
> > > >
> > > > -#ifndef _RTE_LPM_H_
> > > > -#define _RTE_LPM_H_
> > > > +#ifndef _RTE_LPM_EXT_H_
> > > > +#define _RTE_LPM_EXT_H_
> > > >
> > > >  /**
> > > >   * @file
> > > > @@ -81,57 +81,58 @@ extern "C" {
> > > >  #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > > >  #endif
> > > >
> > > > -/** @internal bitmask with valid and ext_entry/valid_group fields set
> > */
> > > > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > > > +/** @internal bitmask with valid and ext_valid fields set */
> > > > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> > > >
> > > >  /** Bitmask used to indicate successful lookup */
> > > > -#define RTE_LPM_LOOKUP_SUCCESS          0x0100
> > > > +#define RTE_LPM_LOOKUP_SUCCESS          0x01
> > > > +
> > > > +struct rte_lpm_res {
> > > > +       uint16_t        next_hop;
> > > > +       uint8_t         fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint32_t        as_num;
> > > > +#endif
> > > > +};
> > > >
> > > >  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > > > -/** @internal Tbl24 entry structure. */
> > > > -struct rte_lpm_tbl24_entry {
> > > > -       /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> > > > +struct rte_lpm_tbl_entry {
> > > > +       uint8_t valid           :1;
> > > > +       uint8_t ext_valid       :1;
> > > > +       uint8_t depth           :6;
> > > > +       uint8_t fwd_class;
> > > >         union {
> > > > -               uint8_t next_hop;
> > > > -               uint8_t tbl8_gindex;
> > > > +               uint16_t next_hop;
> > > > +               uint16_t tbl8_gindex;
> > > >         };
> > > > -       /* Using single uint8_t to store 3 values. */
> > > > -       uint8_t valid     :1; /**< Validation flag. */
> > > > -       uint8_t ext_entry :1; /**< External entry. */
> > > > -       uint8_t depth     :6; /**< Rule depth. */
> > > > -};
> > > > -
> > > > -/** @internal Tbl8 entry structure. */
> > > > -struct rte_lpm_tbl8_entry {
> > > > -       uint8_t next_hop; /**< next hop. */
> > > > -       /* Using single uint8_t to store 3 values. */
> > > > -       uint8_t valid       :1; /**< Validation flag. */
> > > > -       uint8_t valid_group :1; /**< Group validation flag. */
> > > > -       uint8_t depth       :6; /**< Rule depth. */
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint32_t as_num;
> > > > +#endif
> > > >  };
> > > >  #else
> > > > -struct rte_lpm_tbl24_entry {
> > > > -       uint8_t depth       :6;
> > > > -       uint8_t ext_entry   :1;
> > > > -       uint8_t valid       :1;
> > > > +struct rte_lpm_tbl_entry {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint32_t as_num;
> > > > +#endif
> > > >         union {
> > > > -               uint8_t tbl8_gindex;
> > > > -               uint8_t next_hop;
> > > > +               uint16_t tbl8_gindex;
> > > > +               uint16_t next_hop;
> > > >         };
> > > > -};
> > > > -
> > > > -struct rte_lpm_tbl8_entry {
> > > > -       uint8_t depth       :6;
> > > > -       uint8_t valid_group :1;
> > > > -       uint8_t valid       :1;
> > > > -       uint8_t next_hop;
> > > > +       uint8_t fwd_class;
> > > > +       uint8_t depth           :6;
> > > > +       uint8_t ext_valid       :1;
> > > > +       uint8_t valid           :1;
> > > >  };
> > > >  #endif
> > > >
> > > >  /** @internal Rule structure. */
> > > >  struct rte_lpm_rule {
> > > >         uint32_t ip; /**< Rule IP address. */
> > > > -       uint8_t  next_hop; /**< Rule next hop. */
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint32_t as_num;
> > > > +#endif
> > > > +       uint16_t  next_hop; /**< Rule next hop. */
> > > > +       uint8_t fwd_class;
> > > >  };
> > > >
> > > >  /** @internal Contains metadata about the rules table. */
> > > > @@ -148,9 +149,9 @@ struct rte_lpm {
> > > >         struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> > Rule
> > > > info table. */
> > > >
> > > >         /* LPM Tables. */
> > > > -       struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > > +       struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > >                         __rte_cache_aligned; /**< LPM tbl24 table. */
> > > > -       struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > > +       struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > >                         __rte_cache_aligned; /**< LPM tbl8 table. */
> > > >         struct rte_lpm_rule rules_tbl[0] \
> > > >                         __rte_cache_aligned; /**< LPM rules. */
> > > > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > > >   *   0 on success, negative value otherwise
> > > >   */
> > > >  int
> > > > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t
> > > > next_hop);
> > > > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct
> > > > rte_lpm_res *res);
> > > >
> > > >  /**
> > > >   * Check if a rule is present in the LPM table,
> > > > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth, uint8_t next_hop);
> > > >   */
> > > >  int
> > > >  rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > depth,
> > > > -uint8_t *next_hop);
> > > > +                       struct rte_lpm_res *res);
> > > >
> > > >  /**
> > > >   * Delete a rule from the LPM table.
> > > > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > > >   *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on
> > lookup
> > > > hit
> > > >   */
> > > >  static inline int
> > > > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> > > > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res
> > *res)
> > > >  {
> > > >         unsigned tbl24_index = (ip >> 8);
> > > > -       uint16_t tbl_entry;
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint64_t tbl_entry;
> > > > +#else
> > > > +       uint32_t tbl_entry;
> > > > +#endif
> > > >         /* DEBUG: Check user input arguments. */
> > > > -       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)),
> > > > -EINVAL);
> > > > +       RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> > > EINVAL);
> > > >
> > > >         /* Copy tbl24 entry */
> > > > -       tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > > > +#else
> > > > +       tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > > > +#endif
> > > >         /* Copy tbl8 entry (only if needed) */
> > > >         if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > >                         RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > >
> > > >                 unsigned tbl8_index = (uint8_t)ip +
> > > > -                               ((uint8_t)tbl_entry *
> > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > +                               ((*(struct rte_lpm_tbl_entry
> > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > >
> > > > -               tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +               tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
> > > > +#else
> > > > +               tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
> > > > +#endif
> > > >         }
> > > > -
> > > > -       *next_hop = (uint8_t)tbl_entry;
> > > > +       res->next_hop  = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->next_hop;
> > > > +       res->fwd_class = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       res->as_num       = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->as_num;
> > > > +#endif
> > > >         return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > > > +
> > > >  }
> > > >
> > > >  /**
> > > > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t *next_hop)
> > > >   *  @return
> > > >   *   -EINVAL for incorrect arguments, otherwise 0
> > > >   */
> > > > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > > > -               rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > > > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > > > +               rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> > > >
> > > >  static inline int
> > > > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *
> > ips,
> > > > -               uint16_t * next_hops, const unsigned n)
> > > > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t
> > *ips,
> > > > +               struct rte_lpm_res *res_tbl, const unsigned n)
> > > >  {
> > > >         unsigned i;
> > > > +       int ret = 0;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +       uint64_t tbl_entry;
> > > > +#else
> > > > +       uint32_t tbl_entry;
> > > > +#endif
> > > >         unsigned tbl24_indexes[n];
> > > >
> > > >         /* DEBUG: Check user input arguments. */
> > > >         RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > > > -                       (next_hops == NULL)), -EINVAL);
> > > > +                       (res_tbl == NULL)), -EINVAL);
> > > >
> > > >         for (i = 0; i < n; i++) {
> > > >                 tbl24_indexes[i] = ips[i] >> 8;
> > > > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> > > *lpm,
> > > > const uint32_t * ips,
> > > >
> > > >         for (i = 0; i < n; i++) {
> > > >                 /* Simply copy tbl24 entry to output */
> > > > -               next_hops[i] = *(const uint16_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +               tbl_entry = *(const uint64_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > +#else
> > > > +               tbl_entry = *(const uint32_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > +#endif
> > > >                 /* Overwrite output with tbl8 entry if needed */
> > > > -               if (unlikely((next_hops[i] &
> > > > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > > -                               RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > > +               if (unlikely((tbl_entry &
> > RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > > > ==
> > > > +                       RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > >
> > > >                         unsigned tbl8_index = (uint8_t)ips[i] +
> > > > -                                       ((uint8_t)next_hops[i] *
> > > > -
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > +                               ((*(struct rte_lpm_tbl_entry
> > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > >
> > > > -                       next_hops[i] = *(const uint16_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +                       tbl_entry = *(const uint64_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#else
> > > > +                       tbl_entry = *(const uint32_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#endif
> > > >                 }
> > > > +               res_tbl[i].next_hop     = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->next_hop;
> > > > +               res_tbl[i].fwd_class    = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > +               res_tbl[i].as_num       = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->as_num;
> > > > +#endif
> > > > +               if (tbl_entry & RTE_LPM_LOOKUP_SUCCESS)
> > > > +                       ret |= 1 << i;
> > > >         }
> > > > -       return 0;
> > > > +       return ret;
> > > >  }
> > > >
> > > >  /* Mask four results. */
> > > > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > > __m128i ip,
> > > > uint16_t hop[4],
> > > >  }
> > > >  #endif
> > > >
> > > > -#endif /* _RTE_LPM_H_ */
> > > > +#endif /* _RTE_LPM_EXT_H_ */
> > > >
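One note on the bulk-lookup hunk above: rte_lpm_lookup_bulk_func() now
returns a per-entry hit bitmask instead of 0. A minimal sketch of how a
caller would consume it, assuming bits are set only for successful lookups
(as in the guarded form above) and bursts of at most 31 entries;
bulk_example() is an illustrative name, not part of the patch:

#include <stdio.h>
#include <rte_lpm.h>

/* Sketch: consume the bitmask returned by the reworked bulk lookup. */
static void
bulk_example(const struct rte_lpm *lpm, const uint32_t *ips, unsigned n)
{
    struct rte_lpm_res res[32];     /* assumes n <= 31 */
    int hits;
    unsigned i;

    hits = rte_lpm_lookup_bulk(lpm, ips, res, n);

    for (i = 0; i < n; i++) {
        if (hits & (1 << i))
            printf("ip[%u]: next_hop %u fwd_class %u\n",
                    i, res[i].next_hop, res[i].fwd_class);
        else
            printf("ip[%u]: no route\n", i);
    }
}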
> > > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > > >
> > > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > > >
> > > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > > >>
> > > > >>> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > > > >>>
> > > > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits
> > the
> > > > >>> number of next hops to 256, as the next hop ID is an 8-bit long
> > field.
> > > > >>> Proposed extension increase number of next hops for IPv4 to 2^24
> > and
> > > > >>> also allows 32-bits read/write operations.
> > > > >>>
> > > > >>> This patchset requires additional change to rte_table library to
> > meet
> > > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > > >>>
> > > > >>
> > > > >> I also have a patchset for this.
> > > > >>
> > > > >> I will send it out as well so we could compare.
> > > > >>
> > > > >> Matthew.
> > > > >>
> > > > >
> > > > > Sorry about the delay; I only work on DPDK in personal time and
> > > > > not as part of a job. My patchset is attached to this email.
> > > > >
> > > > > One possible advantage of my patchset, compared to the others, is
> > > > > that the space problem is fixed for both IPv4 and IPv6, preventing
> > > > > asymmetry between the two standards; that is something I try to
> > > > > avoid as much as humanly possible.
> > > > >
> > > > > This is because my application code is green-field, so I absolutely
> > > > > don't want to put any ugly hacks or incompatibilities in this code
> > > > > if I can possibly avoid it.
> > > > >
> > > > > Otherwise, I am not as much of an expert on rte_lpm as some of the
> > > > > full-time guys, but I think with four or five of us in the thread
> > > > > hammering out patches we will be able to create something amazing
> > > > > together, and I am very, very happy about this.
> > > > >
> > > > > Matthew.
> > > > >
> > >
> >
> > Hi Vladimir,
> > Thanks for sharing your implementation.
> > Could you please clarify what the as_num and fwd_class fields represent?
> > The second issue is that your patch does not apply on top of the current
> > head. Could you check this, please?
> >
> > Best regards
> > Michal
> >
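For context while reading the patch above, here is a minimal sketch of how
the reworked rte_lpm_res-based API would be exercised, assuming the patch
applies on top of a 2.x tree (rte_lpm_create() keeps its then-current
signature); lpm_res_example() and all values are illustrative:

#include <stdio.h>
#include <rte_ip.h>      /* IPv4() */
#include <rte_memory.h>  /* SOCKET_ID_ANY */
#include <rte_lpm.h>

static void
lpm_res_example(void)
{
    struct rte_lpm *lpm;
    struct rte_lpm_res res = {
        .next_hop  = 5,
        .fwd_class = 1,         /* illustrative class value */
#ifdef RTE_LIBRTE_LPM_ASNUM
        .as_num    = 12345,     /* illustrative originating AS */
#endif
    };
    struct rte_lpm_res hit;

    lpm = rte_lpm_create("example", SOCKET_ID_ANY, 1024, 0);
    if (lpm == NULL)
        return;

    /* Install 10.10.10.0/24 with the attached result data, then look up
     * an address inside that prefix. */
    if (rte_lpm_add(lpm, IPv4(10, 10, 10, 0), 24, &res) == 0 &&
            rte_lpm_lookup(lpm, IPv4(10, 10, 10, 10), &hit) == 0)
        printf("next_hop %u fwd_class %u\n", hit.next_hop, hit.fwd_class);

    rte_lpm_free(lpm);
}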

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 15:39             ` Michal Jastrzebski
@ 2015-10-26 16:59               ` Vladimir Medvedkin
  0 siblings, 0 replies; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-26 16:59 UTC (permalink / raw)
  To: Michal Jastrzebski; +Cc: dev

Michal,

That looks strange; you have:
error: while searching for:

       lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
...
error: patch failed: lib/librte_lpm/rte_lpm.c:159
but if we look at
http://dpdk.org/browse/dpdk/tree/lib/librte_lpm/rte_lpm.c#n159
the patch should apply fine.
The latest commit in my repo is 139debc42dc0a320dad40f5295b74d2e3ab8a7f9


2015-10-26 18:39 GMT+03:00 Michal Jastrzebski <
michalx.k.jastrzebski@intel.com>:

> On Mon, Oct 26, 2015 at 05:03:31PM +0300, Vladimir Medvedkin wrote:
> > Hi Michal,
> >
> > The forwarding class lets us classify traffic based on the destination
> > prefix; it's something like Juniper's DCU. For example, on a Juniper MX
> > I can make a policy that installs a prefix into the FIB with some class
> > and then use that class on the dataplane, for example in an ACL.
> > On a Juniper MX that looks like this:
> > #show policy-options
> > policy-statement community-to-class {
> > term customer {
> >         from community originate-customer;
> >         then destination-class customer;
> >     }
> > }
> > community originate-customer members 12345:11111;
> > # show routing-options
> > forwarding-table {
> >     export community-to-class;
> > }
> > # show forwarding-options
> > forwarding-options {
> >     family inet {
> >         filter {
> >             output test-filter;
> >         }
> >     }
> > }
> > # show firewall family inet filter test-filter
> > term 1 {
> >     from {
> >         protocol icmp;
> >         destination-class customer;
> >     }
> >     then {
> >         discard;
> >     }
> > }
> > announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
> > After than on dataplane we have
> > NPC1( vty)# show route ip lookup 10.10.10.10
> > Route Information (10.10.10.10):
> >  interface : xe-1/0/0.0 (328)
> >  Nexthop prefix : -
> >  Nexthop ID     : 1048574
> >  MTU            : 0
> >  Class ID       : 129 <- That is "forwarding class" in my implementation
> > This construction discards all ICMP traffic that goes to destination
> > prefixes originated with community 12345:11111. With this mechanism we
> > can build various sophisticated control-plane policies to steer traffic
> > on the dataplane (a rough DPDK-side equivalent is sketched just below).
> > The same goes for as_num: on the dataplane we can carry the AS number
> > that originated the prefix, or another 4-byte value, e.g. a geo-id.
> > What issue do you mean? I think it is the table/pipeline/test frameworks
> > that fail to compile due to the API/ABI change. You can turn them off
> > for LPM testing; if my patch is applied I will make the corresponding
> > changes in those frameworks.
> >
> > Regards,
> > Vladimir
>
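A rough DPDK-side equivalent of the destination-class filter Vladimir
describes above, as a hypothetical sketch; CLASS_CUSTOMER and
classify_and_filter() are illustrative names, not part of the patch:

#include <netinet/in.h>  /* IPPROTO_ICMP */
#include <rte_lpm.h>

/* Hypothetical class value installed by the control plane
 * (cf. "Class ID: 129" in the vty output above). */
#define CLASS_CUSTOMER 129

/* Fast-path check: discard ICMP destined to prefixes marked with the
 * "customer" class, otherwise return the next hop. */
static inline int
classify_and_filter(struct rte_lpm *lpm, uint32_t dst_ip, uint8_t proto)
{
    struct rte_lpm_res res;

    if (rte_lpm_lookup(lpm, dst_ip, &res) != 0)
        return -1;          /* no route: caller drops */

    if (proto == IPPROTO_ICMP && res.fwd_class == CLASS_CUSTOMER)
        return -1;          /* policy: discard ICMP to this class */

    return res.next_hop;    /* forward as usual */
}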
> Hi Vladimir,
> My issue is with applying your patch, not with compilation.
> This is the error I get:
> Checking patch config/common_bsdapp...
> Checking patch config/common_linuxapp...
> Checking patch lib/librte_lpm/rte_lpm.c...
> error: while searching for:
>
>        lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
>
>        RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
>        RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
>
>        /* Check user arguments. */
>        if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
>                rte_errno = EINVAL;
>
> error: patch failed: lib/librte_lpm/rte_lpm.c:159
> error: lib/librte_lpm/rte_lpm.c: patch does not apply
> Checking patch lib/librte_lpm/rte_lpm.h...
> error: while searching for:
> #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> #endif
>
> /** @internal bitmask with valid and ext_entry/valid_group fields set */
> #define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
>
> /** Bitmask used to indicate successful lookup */
> #define RTE_LPM_LOOKUP_SUCCESS          0x0100
>
> #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> /** @internal Tbl24 entry structure. */
> struct rte_lpm_tbl24_entry {
>        /* Stores Next hop or group index (i.e. gindex)into tbl8. */
>        union {
>                uint8_t next_hop;
>                uint8_t tbl8_gindex;
>        };
>        /* Using single uint8_t to store 3 values. */
>        uint8_t valid     :1; /**< Validation flag. */
>        uint8_t ext_entry :1; /**< External entry. */
>        uint8_t depth     :6; /**< Rule depth. */
> };
>
> /** @internal Tbl8 entry structure. */
> struct rte_lpm_tbl8_entry {
>        uint8_t next_hop; /**< next hop. */
>        /* Using single uint8_t to store 3 values. */
>        uint8_t valid       :1; /**< Validation flag. */
>        uint8_t valid_group :1; /**< Group validation flag. */
>        uint8_t depth       :6; /**< Rule depth. */
> };
> #else
> struct rte_lpm_tbl24_entry {
>        uint8_t depth       :6;
>        uint8_t ext_entry   :1;
>        uint8_t valid       :1;
>        union {
>                uint8_t tbl8_gindex;
>                uint8_t next_hop;
>        };
> };
>
> struct rte_lpm_tbl8_entry {
>        uint8_t depth       :6;
>        uint8_t valid_group :1;
>        uint8_t valid       :1;
>        uint8_t next_hop;
> };
> #endif
>
> /** @internal Rule structure. */
> struct rte_lpm_rule {
>        uint32_t ip; /**< Rule IP address. */
>        uint8_t  next_hop; /**< Rule next hop. */
> };
>
> /** @internal Contains metadata about the rules table. */
>
> error: patch failed: lib/librte_lpm/rte_lpm.h:81
> error: lib/librte_lpm/rte_lpm.h: patch does not apply
>
>
>
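The failing rte_lpm.h hunk above replaces these two 2-byte entry structures
with the unified entry from Vladimir's patch. For reference, a standalone
sketch of the new little-endian layout and its size invariant (an
illustrative copy; the patch itself enforces the sizes with
RTE_BUILD_BUG_ON in rte_lpm_create()):

#include <stdint.h>

/* Little-endian variant of the patch's unified table entry. */
struct rte_lpm_tbl_entry {
    uint8_t valid     :1;
    uint8_t ext_valid :1;
    uint8_t depth     :6;
    uint8_t fwd_class;
    union {
        uint16_t next_hop;
        uint16_t tbl8_gindex;
    };
#ifdef RTE_LIBRTE_LPM_ASNUM
    uint32_t as_num;
#endif
};

/* 4 bytes without the AS number, 8 bytes with it, so the lookup path can
 * copy an entry with a single 32- or 64-bit read. */
#ifdef RTE_LIBRTE_LPM_ASNUM
_Static_assert(sizeof(struct rte_lpm_tbl_entry) == 8, "entry must be 8 bytes");
#else
_Static_assert(sizeof(struct rte_lpm_tbl_entry) == 4, "entry must be 4 bytes");
#endif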
> > 2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
> > michalx.k.jastrzebski@intel.com>:
> >
> > > > -----Original Message-----
> > > > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > > > Sent: Monday, October 26, 2015 12:55 PM
> > > > To: Vladimir Medvedkin
> > > > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next
> hops
> > > > for lpm (ipv4)
> > > >
> > > > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > > > Hi all,
> > > > >
> > > > > Here is my implementation:
> > > > >
> > > > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > > > ---
> > > > >  config/common_bsdapp     |   1 +
> > > > >  config/common_linuxapp   |   1 +
> > > > >  lib/librte_lpm/rte_lpm.c | 194
> > > > > +++++++++++++++++++++++++++++------------------
> > > > >  lib/librte_lpm/rte_lpm.h | 163
> +++++++++++++++++++++++----------------
> > > > >  4 files changed, 219 insertions(+), 140 deletions(-)
> > > > >
> > > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > > index b37dcf4..408cc2c 100644
> > > > > --- a/config/common_bsdapp
> > > > > +++ b/config/common_bsdapp
> > > > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > >  #
> > > > >  CONFIG_RTE_LIBRTE_LPM=y
> > > > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > > >
> > > > >  #
> > > > >  # Compile librte_acl
> > > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > > index 0de43d5..1c60e63 100644
> > > > > --- a/config/common_linuxapp
> > > > > +++ b/config/common_linuxapp
> > > > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > >  #
> > > > >  CONFIG_RTE_LIBRTE_LPM=y
> > > > >  CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > > >
> > > > >  #
> > > > >  # Compile librte_acl
> > > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > > index 163ba3c..363b400 100644
> > > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int
> socket_id,
> > > > int
> > > > > max_rules,
> > > > >
> > > > >         lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head,
> rte_lpm_list);
> > > > >
> > > > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > > > -       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > > > -
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > > > +#else
> > > > > +       RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > > > +#endif
> > > > >         /* Check user arguments. */
> > > > >         if ((name == NULL) || (socket_id < -1) || (max_rules ==
> 0)){
> > > > >                 rte_errno = EINVAL;
> > > > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > > > >   */
> > > > >  static inline int32_t
> > > > >  rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > > -       uint8_t next_hop)
> > > > > +       struct rte_lpm_res *res)
> > > > >  {
> > > > >         uint32_t rule_gindex, rule_index, last_rule;
> > > > >         int i;
> > > > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >
> > > > >                         /* If rule already exists update its
> next_hop
> > > and
> > > > > return. */
> > > > >                         if (lpm->rules_tbl[rule_index].ip ==
> > > ip_masked) {
> > > > > -
>  lpm->rules_tbl[rule_index].next_hop =
> > > > > next_hop;
> > > > > -
> > > > > +
>  lpm->rules_tbl[rule_index].next_hop =
> > > > > res->next_hop;
> > > > > +
>  lpm->rules_tbl[rule_index].fwd_class =
> > > > > res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +                               lpm->rules_tbl[rule_index].as_num =
> > > > > res->as_num;
> > > > > +#endif
> > > > >                                 return rule_index;
> > > > >                         }
> > > > >                 }
> > > > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >
> > > > >         /* Add the new rule. */
> > > > >         lpm->rules_tbl[rule_index].ip = ip_masked;
> > > > > -       lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > > > +       lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > > > +       lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +       lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > > > +#endif
> > > > >
> > > > >         /* Increment the used rules counter for this rule group. */
> > > > >         lpm->rule_info[depth - 1].used_rules++;
> > > > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth)
> > > > >   * Find, clean and allocate a tbl8.
> > > > >   */
> > > > >  static inline int32_t
> > > > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > > > >  {
> > > > >         uint32_t tbl8_gindex; /* tbl8 group index. */
> > > > > -       struct rte_lpm_tbl8_entry *tbl8_entry;
> > > > > +       struct rte_lpm_tbl_entry *tbl8_entry;
> > > > >
> > > > >         /* Scan through tbl8 to find a free (i.e. INVALID) tbl8
> group.
> > > */
> > > > >         for (tbl8_gindex = 0; tbl8_gindex <
> RTE_LPM_TBL8_NUM_GROUPS;
> > > > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > >                 tbl8_entry = &tbl8[tbl8_gindex *
> > > > >                                    RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > > >                 /* If a free tbl8 group is found clean it and set
> as
> > > VALID.
> > > > > */
> > > > > -               if (!tbl8_entry->valid_group) {
> > > > > +               if (!tbl8_entry->ext_valid) {
> > > > >                         memset(&tbl8_entry[0], 0,
> > > > >
>  RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> > > *
> > > > >                                         sizeof(tbl8_entry[0]));
> > > > >
> > > > > -                       tbl8_entry->valid_group = VALID;
> > > > > +                       tbl8_entry->ext_valid = VALID;
> > > > >
> > > > >                         /* Return group index for allocated tbl8
> > > group. */
> > > > >                         return tbl8_gindex;
> > > > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > >  }
> > > > >
> > > > >  static inline void
> > > > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> tbl8_group_start)
> > > > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t
> tbl8_group_start)
> > > > >  {
> > > > >         /* Set tbl8 group invalid*/
> > > > > -       tbl8[tbl8_group_start].valid_group = INVALID;
> > > > > +       tbl8[tbl8_group_start].ext_valid = INVALID;
> > > > >  }
> > > > >
> > > > >  static inline int32_t
> > > > >  add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > > -               uint8_t next_hop)
> > > > > +               struct rte_lpm_res *res)
> > > > >  {
> > > > >         uint32_t tbl24_index, tbl24_range, tbl8_index,
> tbl8_group_end,
> > > i, j;
> > > > >
> > > > >         /* Calculate the index into Table24. */
> > > > >         tbl24_index = ip >> 8;
> > > > >         tbl24_range = depth_to_range(depth);
> > > > > +       struct rte_lpm_tbl_entry new_tbl_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +               .as_num = res->as_num,
> > > > > +#endif
> > > > > +               .next_hop = res->next_hop,
> > > > > +               .fwd_class  = res->fwd_class,
> > > > > +               .ext_valid = 0,
> > > > > +               .depth = depth,
> > > > > +               .valid = VALID,
> > > > > +       };
> > > > > +
> > > > >
> > > > >         for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++) {
> > > > >                 /*
> > > > >                  * For invalid OR valid and non-extended tbl 24
> > > entries set
> > > > >                  * entry.
> > > > >                  */
> > > > > -               if (!lpm->tbl24[i].valid ||
> (lpm->tbl24[i].ext_entry
> > > == 0 &&
> > > > > +               if (!lpm->tbl24[i].valid ||
> (lpm->tbl24[i].ext_valid
> > > == 0 &&
> > > > >                                 lpm->tbl24[i].depth <= depth)) {
> > > > >
> > > > > -                       struct rte_lpm_tbl24_entry new_tbl24_entry
> = {
> > > > > -                               { .next_hop = next_hop, },
> > > > > -                               .valid = VALID,
> > > > > -                               .ext_entry = 0,
> > > > > -                               .depth = depth,
> > > > > -                       };
> > > > > -
> > > > >                         /* Setting tbl24 entry in one go to avoid
> race
> > > > >                          * conditions
> > > > >                          */
> > > > > -                       lpm->tbl24[i] = new_tbl24_entry;
> > > > > +                       lpm->tbl24[i] = new_tbl_entry;
> > > > >
> > > > >                         continue;
> > > > >                 }
> > > > >
> > > > > -               if (lpm->tbl24[i].ext_entry == 1) {
> > > > > +               if (lpm->tbl24[i].ext_valid == 1) {
> > > > >                         /* If tbl24 entry is valid and extended
> > > calculate
> > > > > the
> > > > >                          *  index into tbl8.
> > > > >                          */
> > > > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip,
> > > > > uint8_t depth,
> > > > >                         for (j = tbl8_index; j < tbl8_group_end;
> j++) {
> > > > >                                 if (!lpm->tbl8[j].valid ||
> > > > >                                                 lpm->tbl8[j].depth
> <=
> > > > > depth) {
> > > > > -                                       struct rte_lpm_tbl8_entry
> > > > > -                                               new_tbl8_entry = {
> > > > > -                                               .valid = VALID,
> > > > > -                                               .valid_group =
> VALID,
> > > > > -                                               .depth = depth,
> > > > > -                                               .next_hop =
> next_hop,
> > > > > -                                       };
> > > > > +
> > > > > +                                       new_tbl_entry.ext_valid =
> > > VALID;
> > > > >
> > > > >                                         /*
> > > > >                                          * Setting tbl8 entry in
> one
> > > go to
> > > > > avoid
> > > > >                                          * race conditions
> > > > >                                          */
> > > > > -                                       lpm->tbl8[j] =
> new_tbl8_entry;
> > > > > +                                       lpm->tbl8[j] =
> new_tbl_entry;
> > > > >
> > > > >                                         continue;
> > > > >                                 }
> > > > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> ip,
> > > > > uint8_t depth,
> > > > >
> > > > >  static inline int32_t
> > > > >  add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t
> depth,
> > > > > -               uint8_t next_hop)
> > > > > +               struct rte_lpm_res *res)
> > > > >  {
> > > > >         uint32_t tbl24_index;
> > > > >         int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > > > tbl8_index,
> > > > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >                 /* Set tbl8 entry. */
> > > > >                 for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> > > i++) {
> > > > >                         lpm->tbl8[i].depth = depth;
> > > > > -                       lpm->tbl8[i].next_hop = next_hop;
> > > > > +                       lpm->tbl8[i].next_hop = res->next_hop;
> > > > > +                       lpm->tbl8[i].fwd_class = res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +                       lpm->tbl8[i].as_num = res->as_num;
> > > > > +#endif
> > > > >                         lpm->tbl8[i].valid = VALID;
> > > > >                 }
> > > > >
> > > > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > >                  * so assign whole structure in one go
> > > > >                  */
> > > > >
> > > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > -                       { .tbl8_gindex =
> (uint8_t)tbl8_group_index, },
> > > > > -                       .valid = VALID,
> > > > > -                       .ext_entry = 1,
> > > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > +                       .tbl8_gindex = (uint16_t)tbl8_group_index,
> > > > >                         .depth = 0,
> > > > > +                       .ext_valid = 1,
> > > > > +                       .valid = VALID,
> > > > >                 };
> > > > >
> > > > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > >
> > > > >         }/* If valid entry but not extended calculate the index
> into
> > > > > Table8. */
> > > > > -       else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > > > +       else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > > > >                 /* Search for free tbl8 group. */
> > > > >                 tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > > > >
> > > > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >                         lpm->tbl8[i].depth =
> > > lpm->tbl24[tbl24_index].depth;
> > > > >                         lpm->tbl8[i].next_hop =
> > > > >
> > >  lpm->tbl24[tbl24_index].next_hop;
> > > > > +                       lpm->tbl8[i].fwd_class =
> > > > > +
> > >  lpm->tbl24[tbl24_index].fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +                       lpm->tbl8[i].as_num =
> > > > > lpm->tbl24[tbl24_index].as_num;
> > > > > +#endif
> > > > >                 }
> > > > >
> > > > >                 tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >                                         lpm->tbl8[i].depth <=
> depth) {
> > > > >                                 lpm->tbl8[i].valid = VALID;
> > > > >                                 lpm->tbl8[i].depth = depth;
> > > > > -                               lpm->tbl8[i].next_hop = next_hop;
> > > > > +                               lpm->tbl8[i].next_hop =
> res->next_hop;
> > > > > +                               lpm->tbl8[i].fwd_class =
> > > res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +                               lpm->tbl8[i].as_num = res->as_num;
> > > > > +#endif
> > > > >
> > > > >                                 continue;
> > > > >                         }
> > > > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > >                  * so assign whole structure in one go.
> > > > >                  */
> > > > >
> > > > > -               struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > -                               { .tbl8_gindex =
> > > (uint8_t)tbl8_group_index,
> > > > > },
> > > > > -                               .valid = VALID,
> > > > > -                               .ext_entry = 1,
> > > > > +               struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > +                               .tbl8_gindex =
> > > (uint16_t)tbl8_group_index,
> > > > >                                 .depth = 0,
> > > > > +                               .ext_valid = 1,
> > > > > +                               .valid = VALID,
> > > > >                 };
> > > > >
> > > > >                 lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > >
> > > > >                         if (!lpm->tbl8[i].valid ||
> > > > >                                         lpm->tbl8[i].depth <=
> depth) {
> > > > > -                               struct rte_lpm_tbl8_entry
> > > new_tbl8_entry = {
> > > > > -                                       .valid = VALID,
> > > > > +                               struct rte_lpm_tbl_entry
> > > new_tbl8_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +                                       .as_num = res->as_num,
> > > > > +#endif
> > > > > +                                       .next_hop = res->next_hop,
> > > > > +                                       .fwd_class =
> res->fwd_class,
> > > > >                                         .depth = depth,
> > > > > -                                       .next_hop = next_hop,
> > > > > -                                       .valid_group =
> > > > > lpm->tbl8[i].valid_group,
> > > > > +                                       .ext_valid =
> > > lpm->tbl8[i].ext_valid,
> > > > > +                                       .valid = VALID,
> > > > >                                 };
> > > > >
> > > > >                                 /*
> > > > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > >   */
> > > > >  int
> > > > >  rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > > -               uint8_t next_hop)
> > > > > +               struct rte_lpm_res *res)
> > > > >  {
> > > > >         int32_t rule_index, status = 0;
> > > > >         uint32_t ip_masked;
> > > > >
> > > > >         /* Check user arguments. */
> > > > > -       if ((lpm == NULL) || (depth < 1) || (depth >
> > > RTE_LPM_MAX_DEPTH))
> > > > > +       if ((lpm == NULL) || (res == NULL) || (depth < 1) ||
> (depth >
> > > > > RTE_LPM_MAX_DEPTH))
> > > > >                 return -EINVAL;
> > > > >
> > > > >         ip_masked = ip & depth_to_mask(depth);
> > > > >
> > > > >         /* Add the rule to the rule table. */
> > > > > -       rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > > > +       rule_index = rule_add(lpm, ip_masked, depth, res);
> > > > >
> > > > >         /* If there is no space available for the new rule, return an error. */
> > > > >         if (rule_index < 0) {
> > > > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t
> > > > > depth,
> > > > >         }
> > > > >
> > > > >         if (depth <= MAX_DEPTH_TBL24) {
> > > > > -               status = add_depth_small(lpm, ip_masked, depth,
> > > next_hop);
> > > > > +               status = add_depth_small(lpm, ip_masked, depth,
> res);
> > > > >         }
> > > > >         else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > > > -               status = add_depth_big(lpm, ip_masked, depth,
> > > next_hop);
> > > > > +               status = add_depth_big(lpm, ip_masked, depth, res);
> > > > >
> > > > >                 /*
> > > > >                  * If add fails due to exhaustion of tbl8
> extensions
> > > delete
> > > > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t
> > > > > depth,
> > > > >   */
> > > > >  int
> > > > >  rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > > depth,
> > > > > -uint8_t *next_hop)
> > > > > +                       struct rte_lpm_res *res)
> > > > >  {
> > > > >         uint32_t ip_masked;
> > > > >         int32_t rule_index;
> > > > >
> > > > >         /* Check user arguments. */
> > > > >         if ((lpm == NULL) ||
> > > > > -               (next_hop == NULL) ||
> > > > > +               (res == NULL) ||
> > > > >                 (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > > > >                 return -EINVAL;
> > > > >
> > > > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > > > >         rule_index = rule_find(lpm, ip_masked, depth);
> > > > >
> > > > >         if (rule_index >= 0) {
> > > > > -               *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > > +               res->next_hop =
> lpm->rules_tbl[rule_index].next_hop;
> > > > > +               res->fwd_class =
> lpm->rules_tbl[rule_index].fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > +               res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > > > +#endif
> > > > >                 return 1;
> > > > >         }
> > > > >
> > > > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm,
> uint32_t
> > > > > ip_masked,
> > > > >                  */
> > > > >                 for (i = tbl24_index; i < (tbl24_index +
> tbl24_range);
> > > i++)
> > > > > {
> > > > >
> > > > > -                       if (lpm->tbl24[i].ext_entry == 0 &&
> > > > > +                       if (lpm->tbl24[i].ext_valid == 0 &&
> > > > >                                         lpm->tbl24[i].depth <=
> depth )
> > > {
> > > > >                                 lpm->tbl24[i].valid = INVALID;
> > > > >                         }
> > > > > [remainder of quoted patch snipped; Vladimir resends the full patch
> > > > > in his 2015-10-27 message below]
> > > > >
> > > > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > > > >
> > > > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > > > >
> > > > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > > > >>
> > > > > >>> From: Michal Kobylinski  <michalx.kobylinski@intel.com>
> > > > > >>>
> > > > > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > > > > >>> number of next hops to 256, as the next hop ID is an 8-bit long field.
> > > > > >>> Proposed extension increase number of next hops for IPv4 to 2^24 and
> > > > > >>> also allows 32-bits read/write operations.
> > > > > >>>
> > > > > >>> This patchset requires additional change to rte_table library to meet
> > > > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > > > >>>
> > > > > >>
> > > > > >> I also have a patchset for this.
> > > > > >>
> > > > > >> I will send it out as well so we could compare.
> > > > > >>
> > > > > >> Matthew.
> > > > > >>
> > > > > >
> > > > > > Sorry about the delay; I only work on DPDK in personal time and not as
> > > > > > part of a job. My patchset is attached to this email.
> > > > > >
> > > > > > One possible advantage with my patchset, compared to others, is that the
> > > > > > space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> > > > > > between these two standards, which is something I try to avoid as much
> > > > > > as humanly possible.
> > > > > >
> > > > > > This is because my application code is green-field, so I absolutely
> > > > > > don't want to put any ugly hacks or incompatibilities in this code if I
> > > > > > can possibly avoid it.
> > > > > >
> > > > > > Otherwise, I am not necessarily as expert about rte_lpm as some of the
> > > > > > full-time guys, but I think with four or five of us in the thread
> > > > > > hammering out patches we will be able to create something amazing
> > > > > > together and I am very very very very very happy about this.
> > > > > >
> > > > > > Matthew.
> > > > > >
> > >
> > > Hi Vladimir,
> > > Thanks for sharing your implementation.
> > > Could you please clarify what the as_num and fwd_class fields represent?
> > > The second issue I have is that your patch doesn't apply on top of the
> > > current head. Could you check this, please?
> > >
> > > Best regards
> > > Michal

^ permalink raw reply	[flat|nested] 24+ messages in thread
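
A minimal caller-side sketch of the API being discussed: struct rte_lpm_res
and the reworked rte_lpm_add()/rte_lpm_lookup() signatures are taken from
Vladimir's patch below; reading fwd_class as a forwarding/QoS class and
as_num as a BGP origin AS number is an assumption based on the field names,
since the patch itself does not document them.

#include <rte_lpm.h>
#include <rte_ip.h>
#include <stdio.h>

/* Illustrative only: route 192.0.2.0/24 to next hop 5. */
static void
lpm_res_example(struct rte_lpm *lpm)
{
	struct rte_lpm_res res = {
		.next_hop  = 5,		/* 16 bits wide in this proposal */
		.fwd_class = 1,		/* assumed: forwarding/QoS class */
#ifdef RTE_LIBRTE_LPM_ASNUM
		.as_num    = 65001,	/* assumed: origin AS number */
#endif
	};
	struct rte_lpm_res out;

	if (rte_lpm_add(lpm, IPv4(192, 0, 2, 0), 24, &res) < 0)
		return;

	if (rte_lpm_lookup(lpm, IPv4(192, 0, 2, 1), &out) == 0)
		printf("next_hop=%u fwd_class=%u\n",
				out.next_hop, out.fwd_class);
}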

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 12:13     ` Jastrzebski, MichalX K
@ 2015-10-26 18:40       ` Matthew Hall
  2015-10-27 10:35         ` Vladimir Medvedkin
  2015-10-30  7:17         ` Matthew Hall
  0 siblings, 2 replies; 24+ messages in thread
From: Matthew Hall @ 2015-10-26 18:40 UTC (permalink / raw)
  To: Jastrzebski, MichalX K; +Cc: dev

> I can't apply patch 0001-... , could You check it please? 

I generated it from a rebase of my own copy of DPDK against DPDK upstream 
master.

So I'm not sure why it would not apply against latest DPDK master.

But I will try it and see what could be the reason.

Matthew.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-27 10:35         ` Vladimir Medvedkin
@ 2015-10-27 10:33           ` Vladimir Medvedkin
  0 siblings, 0 replies; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-27 10:33 UTC (permalink / raw)
  To: dev

Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
---
 config/common_bsdapp     |   1 +
 config/common_linuxapp   |   1 +
 lib/librte_lpm/rte_lpm.c | 194 +++++++++++++++++++++++++++++------------------
 lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
 4 files changed, 219 insertions(+), 140 deletions(-)

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..408cc2c 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
 #
 CONFIG_RTE_LIBRTE_LPM=y
 CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n
 
 #
 # Compile librte_acl
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..1c60e63 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
 #
 CONFIG_RTE_LIBRTE_LPM=y
 CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n
 
 #
 # Compile librte_acl
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..363b400 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
 
 	lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
 
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
-	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
+#else
+	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
+#endif
 	/* Check user arguments. */
 	if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
 		rte_errno = EINVAL;
@@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
  */
 static inline int32_t
 rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-	uint8_t next_hop)
+	struct rte_lpm_res *res)
 {
 	uint32_t rule_gindex, rule_index, last_rule;
 	int i;
@@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 
 			/* If rule already exists update its next_hop and return. */
 			if (lpm->rules_tbl[rule_index].ip == ip_masked) {
-				lpm->rules_tbl[rule_index].next_hop = next_hop;
-
+				lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+				lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+				lpm->rules_tbl[rule_index].as_num = res->as_num;
+#endif
 				return rule_index;
 			}
 		}
@@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 
 	/* Add the new rule. */
 	lpm->rules_tbl[rule_index].ip = ip_masked;
-	lpm->rules_tbl[rule_index].next_hop = next_hop;
+	lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+	lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	lpm->rules_tbl[rule_index].as_num = res->as_num;
+#endif
 
 	/* Increment the used rules counter for this rule group. */
 	lpm->rule_info[depth - 1].used_rules++;
@@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
  * Find, clean and allocate a tbl8.
  */
 static inline int32_t
-tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
 {
 	uint32_t tbl8_gindex; /* tbl8 group index. */
-	struct rte_lpm_tbl8_entry *tbl8_entry;
+	struct rte_lpm_tbl_entry *tbl8_entry;
 
 	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
 	for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
@@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
 		tbl8_entry = &tbl8[tbl8_gindex *
 		                   RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
 		/* If a free tbl8 group is found clean it and set as VALID. */
-		if (!tbl8_entry->valid_group) {
+		if (!tbl8_entry->ext_valid) {
 			memset(&tbl8_entry[0], 0,
 					RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
 					sizeof(tbl8_entry[0]));
 
-			tbl8_entry->valid_group = VALID;
+			tbl8_entry->ext_valid = VALID;
 
 			/* Return group index for allocated tbl8 group. */
 			return tbl8_gindex;
@@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
 }
 
 static inline void
-tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
 {
 	/* Set tbl8 group invalid*/
-	tbl8[tbl8_group_start].valid_group = INVALID;
+	tbl8[tbl8_group_start].ext_valid = INVALID;
 }
 
 static inline int32_t
 add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+		struct rte_lpm_res *res)
 {
 	uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
 
 	/* Calculate the index into Table24. */
 	tbl24_index = ip >> 8;
 	tbl24_range = depth_to_range(depth);
+	struct rte_lpm_tbl_entry new_tbl_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+		.as_num	= res->as_num,
+#endif
+		.next_hop = res->next_hop,
+		.fwd_class  = res->fwd_class,
+		.ext_valid = 0,
+		.depth = depth,
+		.valid = VALID,
+	};
+
 
 	for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
 		/*
 		 * For invalid OR valid and non-extended tbl 24 entries set
 		 * entry.
 		 */
-		if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
+		if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid == 0 &&
 				lpm->tbl24[i].depth <= depth)) {
 
-			struct rte_lpm_tbl24_entry new_tbl24_entry = {
-				{ .next_hop = next_hop, },
-				.valid = VALID,
-				.ext_entry = 0,
-				.depth = depth,
-			};
-
 			/* Setting tbl24 entry in one go to avoid race
 			 * conditions
 			 */
-			lpm->tbl24[i] = new_tbl24_entry;
+			lpm->tbl24[i] = new_tbl_entry;
 
 			continue;
 		}
 
-		if (lpm->tbl24[i].ext_entry == 1) {
+		if (lpm->tbl24[i].ext_valid == 1) {
 			/* If tbl24 entry is valid and extended calculate the
 			 *  index into tbl8.
 			 */
@@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 			for (j = tbl8_index; j < tbl8_group_end; j++) {
 				if (!lpm->tbl8[j].valid ||
 						lpm->tbl8[j].depth <= depth) {
-					struct rte_lpm_tbl8_entry
-						new_tbl8_entry = {
-						.valid = VALID,
-						.valid_group = VALID,
-						.depth = depth,
-						.next_hop = next_hop,
-					};
+
+					/* Use a local copy so the tbl8
+					 * group-valid flag does not leak into
+					 * later tbl24 writes of new_tbl_entry.
+					 */
+					struct rte_lpm_tbl_entry
+						new_tbl8_entry = new_tbl_entry;
+					new_tbl8_entry.ext_valid = VALID;
 
 					/*
 					 * Setting tbl8 entry in one go to avoid
 					 * race conditions
 					 */
 					lpm->tbl8[j] = new_tbl8_entry;
 
 					continue;
 				}
@@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 
 static inline int32_t
 add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
-		uint8_t next_hop)
+		struct rte_lpm_res *res)
 {
 	uint32_t tbl24_index;
 	int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 		/* Set tbl8 entry. */
 		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
 			lpm->tbl8[i].depth = depth;
-			lpm->tbl8[i].next_hop = next_hop;
+			lpm->tbl8[i].next_hop = res->next_hop;
+			lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			lpm->tbl8[i].as_num = res->as_num;
+#endif
 			lpm->tbl8[i].valid = VALID;
 		}
 
@@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 		 * so assign whole structure in one go
 		 */
 
-		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{ .tbl8_gindex = (uint8_t)tbl8_group_index, },
-			.valid = VALID,
-			.ext_entry = 1,
+		struct rte_lpm_tbl_entry new_tbl24_entry = {
+			.tbl8_gindex = (uint16_t)tbl8_group_index,
 			.depth = 0,
+			.ext_valid = 1,
+			.valid = VALID,
 		};
 
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
 
 	}/* If valid entry but not extended calculate the index into Table8. */
-	else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
+	else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
 		/* Search for free tbl8 group. */
 		tbl8_group_index = tbl8_alloc(lpm->tbl8);
 
@@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 			lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
 			lpm->tbl8[i].next_hop =
 					lpm->tbl24[tbl24_index].next_hop;
+			lpm->tbl8[i].fwd_class =
+					lpm->tbl24[tbl24_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			lpm->tbl8[i].as_num = lpm->tbl24[tbl24_index].as_num;
+#endif
 		}
 
 		tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
@@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 					lpm->tbl8[i].depth <= depth) {
 				lpm->tbl8[i].valid = VALID;
 				lpm->tbl8[i].depth = depth;
-				lpm->tbl8[i].next_hop = next_hop;
+				lpm->tbl8[i].next_hop = res->next_hop;
+				lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+				lpm->tbl8[i].as_num = res->as_num;
+#endif
 
 				continue;
 			}
@@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 		 * so assign whole structure in one go.
 		 */
 
-		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-				{ .tbl8_gindex = (uint8_t)tbl8_group_index, },
-				.valid = VALID,
-				.ext_entry = 1,
+		struct rte_lpm_tbl_entry new_tbl24_entry = {
+				.tbl8_gindex = (uint16_t)tbl8_group_index,
 				.depth = 0,
+				.ext_valid = 1,
+				.valid = VALID,
 		};
 
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
@@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 
 			if (!lpm->tbl8[i].valid ||
 					lpm->tbl8[i].depth <= depth) {
-				struct rte_lpm_tbl8_entry new_tbl8_entry = {
-					.valid = VALID,
+				struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+					.as_num = res->as_num,
+#endif
+					.next_hop = res->next_hop,
+					.fwd_class = res->fwd_class,
 					.depth = depth,
-					.next_hop = next_hop,
-					.valid_group = lpm->tbl8[i].valid_group,
+					.ext_valid = lpm->tbl8[i].ext_valid,
+					.valid = VALID,
 				};
 
 				/*
@@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
  */
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-		uint8_t next_hop)
+		struct rte_lpm_res *res)
 {
 	int32_t rule_index, status = 0;
 	uint32_t ip_masked;
 
 	/* Check user arguments. */
-	if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+	if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
 		return -EINVAL;
 
 	ip_masked = ip & depth_to_mask(depth);
 
 	/* Add the rule to the rule table. */
-	rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+	rule_index = rule_add(lpm, ip_masked, depth, res);
 
 	/* If the is no space available for new rule return error. */
 	if (rule_index < 0) {
@@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 	}
 
 	if (depth <= MAX_DEPTH_TBL24) {
-		status = add_depth_small(lpm, ip_masked, depth, next_hop);
+		status = add_depth_small(lpm, ip_masked, depth, res);
 	}
 	else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
-		status = add_depth_big(lpm, ip_masked, depth, next_hop);
+		status = add_depth_big(lpm, ip_masked, depth, res);
 
 		/*
 		 * If add fails due to exhaustion of tbl8 extensions delete
@@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+			struct rte_lpm_res *res)
 {
 	uint32_t ip_masked;
 	int32_t rule_index;
 
 	/* Check user arguments. */
 	if ((lpm == NULL) ||
-		(next_hop == NULL) ||
+		(res == NULL) ||
 		(depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
 		return -EINVAL;
 
@@ -681,7 +706,11 @@ uint8_t *next_hop)
 	rule_index = rule_find(lpm, ip_masked, depth);
 
 	if (rule_index >= 0) {
-		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+		res->next_hop = lpm->rules_tbl[rule_index].next_hop;
+		res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+		res->as_num = lpm->rules_tbl[rule_index].as_num;
+#endif
 		return 1;
 	}
 
@@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 		 */
 		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
 
-			if (lpm->tbl24[i].ext_entry == 0 &&
+			if (lpm->tbl24[i].ext_valid == 0 &&
 					lpm->tbl24[i].depth <= depth ) {
 				lpm->tbl24[i].valid = INVALID;
 			}
@@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
 		 * associated with this rule.
 		 */
 
-		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,},
-			.valid = VALID,
-			.ext_entry = 0,
+		struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			.as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+			.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+			.fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
 			.depth = sub_rule_depth,
+			.ext_valid = 0,
+			.valid = VALID,
 		};
 
-		struct rte_lpm_tbl8_entry new_tbl8_entry = {
-			.valid = VALID,
+		struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			.as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+			.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+			.fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
 			.depth = sub_rule_depth,
-			.next_hop = lpm->rules_tbl
-			[sub_rule_index].next_hop,
+			.valid = VALID,
 		};
 
 		for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
 
-			if (lpm->tbl24[i].ext_entry == 0 &&
+			if (lpm->tbl24[i].ext_valid == 0 &&
 					lpm->tbl24[i].depth <= depth ) {
 				lpm->tbl24[i] = new_tbl24_entry;
 			}
@@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
  * thus can be recycled
  */
 static inline int32_t
-tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
 {
 	uint32_t tbl8_group_end, i;
 	tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	}
 	else {
 		/* Set new tbl8 entry. */
-		struct rte_lpm_tbl8_entry new_tbl8_entry = {
-			.valid = VALID,
-			.depth = sub_rule_depth,
-			.valid_group = lpm->tbl8[tbl8_group_start].valid_group,
+		struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			.as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+			.fwd_class = lpm->rules_tbl[sub_rule_index].fwd_class,
 			.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+			.depth = sub_rule_depth,
+			.ext_valid = lpm->tbl8[tbl8_group_start].ext_valid,
+			.valid = VALID,
 		};
 
 		/*
@@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	}
 	else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
-		struct rte_lpm_tbl24_entry new_tbl24_entry = {
-			{ .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, },
-			.valid = VALID,
-			.ext_entry = 0,
+		struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			.as_num = lpm->tbl8[tbl8_recycle_index].as_num,
+#endif
+			.next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
+			.fwd_class = lpm->tbl8[tbl8_recycle_index].fwd_class,
 			.depth = lpm->tbl8[tbl8_recycle_index].depth,
+			.ext_valid = 0,
+			.valid = VALID,
 		};
 
 		/* Set tbl24 before freeing tbl8 to avoid race condition. */
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..7c615bc 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -31,8 +31,8 @@
  *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-#ifndef _RTE_LPM_H_
-#define _RTE_LPM_H_
+#ifndef _RTE_LPM_EXT_H_
+#define _RTE_LPM_EXT_H_
 
 /**
  * @file
@@ -81,57 +81,58 @@ extern "C" {
 #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
 #endif
 
-/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+/** @internal bitmask with valid and ext_valid fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
 
 /** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS          0x0100
+#define RTE_LPM_LOOKUP_SUCCESS          0x01
+
+struct rte_lpm_res {
+	uint16_t	next_hop;
+	uint8_t		fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint32_t	as_num;
+#endif
+};
 
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-/** @internal Tbl24 entry structure. */
-struct rte_lpm_tbl24_entry {
-	/* Stores Next hop or group index (i.e. gindex)into tbl8. */
+struct rte_lpm_tbl_entry {
+	uint8_t valid		:1;
+	uint8_t ext_valid	:1;
+	uint8_t depth		:6;
+	uint8_t fwd_class;
 	union {
-		uint8_t next_hop;
-		uint8_t tbl8_gindex;
+		uint16_t next_hop;
+		uint16_t tbl8_gindex;
 	};
-	/* Using single uint8_t to store 3 values. */
-	uint8_t valid     :1; /**< Validation flag. */
-	uint8_t ext_entry :1; /**< External entry. */
-	uint8_t depth     :6; /**< Rule depth. */
-};
-
-/** @internal Tbl8 entry structure. */
-struct rte_lpm_tbl8_entry {
-	uint8_t next_hop; /**< next hop. */
-	/* Using single uint8_t to store 3 values. */
-	uint8_t valid       :1; /**< Validation flag. */
-	uint8_t valid_group :1; /**< Group validation flag. */
-	uint8_t depth       :6; /**< Rule depth. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint32_t as_num;
+#endif
 };
 #else
-struct rte_lpm_tbl24_entry {
-	uint8_t depth       :6;
-	uint8_t ext_entry   :1;
-	uint8_t valid       :1;
+struct rte_lpm_tbl_entry {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint32_t as_num;
+#endif
 	union {
-		uint8_t tbl8_gindex;
-		uint8_t next_hop;
+		uint16_t tbl8_gindex;
+		uint16_t next_hop;
 	};
-};
-
-struct rte_lpm_tbl8_entry {
-	uint8_t depth       :6;
-	uint8_t valid_group :1;
-	uint8_t valid       :1;
-	uint8_t next_hop;
+	uint8_t fwd_class;
+	uint8_t	depth		:6;
+	uint8_t ext_valid	:1;
+	uint8_t	valid		:1;
 };
 #endif
 
 /** @internal Rule structure. */
 struct rte_lpm_rule {
 	uint32_t ip; /**< Rule IP address. */
-	uint8_t  next_hop; /**< Rule next hop. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint32_t as_num;
+#endif
+	uint16_t  next_hop; /**< Rule next hop. */
+	uint8_t fwd_class;
 };
 
 /** @internal Contains metadata about the rules table. */
@@ -148,9 +149,9 @@ struct rte_lpm {
 	struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
 
 	/* LPM Tables. */
-	struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+	struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
 			__rte_cache_aligned; /**< LPM tbl24 table. */
-	struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+	struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
 			__rte_cache_aligned; /**< LPM tbl8 table. */
 	struct rte_lpm_rule rules_tbl[0] \
 			__rte_cache_aligned; /**< LPM rules. */
@@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
  *   0 on success, negative value otherwise
  */
 int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct rte_lpm_res *res);
 
 /**
  * Check if a rule is present in the LPM table,
@@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
  */
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+			struct rte_lpm_res *res);
 
 /**
  * Delete a rule from the LPM table.
@@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
  *   -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
  */
 static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res *res)
 {
 	unsigned tbl24_index = (ip >> 8);
-	uint16_t tbl_entry;
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint64_t tbl_entry;
+#else
+	uint32_t tbl_entry;
+#endif
 	/* DEBUG: Check user input arguments. */
-	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -EINVAL);
 
 	/* Copy tbl24 entry */
-	tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
+#else
+	tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
+#endif
 	/* Copy tbl8 entry (only if needed) */
 	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 
 		unsigned tbl8_index = (uint8_t)ip +
-				((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+				((*(struct rte_lpm_tbl_entry *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
 
-		tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+		tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
+#else
+		tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+#endif
 	}
-
-	*next_hop = (uint8_t)tbl_entry;
+	res->next_hop  = ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
+	res->fwd_class = ((struct rte_lpm_tbl_entry *)&tbl_entry)->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	res->as_num	  = ((struct rte_lpm_tbl_entry *)&tbl_entry)->as_num;
+#endif
 	return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+
 }
 
 /**
@@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
  *  @return
- *   -EINVAL for incorrect arguments, otherwise a bitmask with bit i set
+ *   for each lookup that hit
  */
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
-		rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
+		rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
 
 static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
-		uint16_t * next_hops, const unsigned n)
+rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
+		struct rte_lpm_res *res_tbl, const unsigned n)
 {
 	unsigned i;
+	int ret = 0;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+	uint64_t tbl_entry;
+#else
+	uint32_t tbl_entry;
+#endif
 	unsigned tbl24_indexes[n];
 
 	/* DEBUG: Check user input arguments. */
 	RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
-			(next_hops == NULL)), -EINVAL);
+			(res_tbl == NULL)), -EINVAL);
 
 	for (i = 0; i < n; i++) {
 		tbl24_indexes[i] = ips[i] >> 8;
@@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
 
 	for (i = 0; i < n; i++) {
 		/* Simply copy tbl24 entry to output */
-		next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+		tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_indexes[i]];
+#else
+		tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
+#endif
 		/* Overwrite output with tbl8 entry if needed */
-		if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
-				RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+		if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 
 			unsigned tbl8_index = (uint8_t)ips[i] +
-					((uint8_t)next_hops[i] *
-					 RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+				((*(struct rte_lpm_tbl_entry *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
 
-			next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+			tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
+#else
+			tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+#endif
 		}
+		res_tbl[i].next_hop	= ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
+		res_tbl[i].fwd_class	= ((struct rte_lpm_tbl_entry *)&tbl_entry)->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+		res_tbl[i].as_num	= ((struct rte_lpm_tbl_entry *)&tbl_entry)->as_num;
+#endif
+		if (tbl_entry & RTE_LPM_LOOKUP_SUCCESS)
+			ret |= 1 << i;
 	}
-	return 0;
+	return ret;
 }
 
 /* Mask four results. */
@@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
 }
 #endif
 
-#endif /* _RTE_LPM_H_ */
+#endif /* _RTE_LPM_EXT_H_ */
-- 
1.8.3.2
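
A side note on the layout: the RTE_BUILD_BUG_ON() checks in rte_lpm_create()
are what make the lookup path work, because each entry is copied with a
single 32-bit load (64-bit with RTE_LIBRTE_LPM_ASNUM) and the 0x01/0x03
masks assume valid and ext_valid sit in the two least significant bits of
that load. A standalone little-endian sketch of the same invariants (struct
name illustrative):

#include <stdint.h>

struct tbl_entry_sketch {
	uint8_t valid		:1;	/* bit 0 of the raw load */
	uint8_t ext_valid	:1;	/* bit 1 of the raw load */
	uint8_t depth		:6;
	uint8_t fwd_class;
	union {
		uint16_t next_hop;
		uint16_t tbl8_gindex;
	};
#ifdef RTE_LIBRTE_LPM_ASNUM
	uint32_t as_num;
#endif
};

#ifdef RTE_LIBRTE_LPM_ASNUM
_Static_assert(sizeof(struct tbl_entry_sketch) == 8,
		"entry must stay a single 64-bit load");
#else
_Static_assert(sizeof(struct tbl_entry_sketch) == 4,
		"entry must stay a single 32-bit load");
#endif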

^ permalink raw reply	[flat|nested] 24+ messages in thread
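
One behavioural change worth calling out: rte_lpm_lookup_bulk() in this
version returns a hit bitmask rather than 0, so callers must test one bit
per lookup, and the int-wide mask limits a useful burst to 32 addresses.
A consumption sketch, with forward() and drop() as hypothetical
application handlers:

#include <rte_lpm.h>

void forward(uint32_t ip, uint16_t next_hop, uint8_t fwd_class);
void drop(uint32_t ip);

static void
bulk_example(struct rte_lpm *lpm, const uint32_t *ips, unsigned n)
{
	struct rte_lpm_res res[32];
	unsigned hits, i;

	if (n > 32)	/* only 32 hit bits fit in the return value */
		n = 32;

	hits = (unsigned)rte_lpm_lookup_bulk(lpm, ips, res, n);

	for (i = 0; i < n; i++) {
		if (hits & (1u << i))
			forward(ips[i], res[i].next_hop, res[i].fwd_class);
		else
			drop(ips[i]);	/* lookup miss */
	}
}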

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 18:40       ` Matthew Hall
@ 2015-10-27 10:35         ` Vladimir Medvedkin
  2015-10-27 10:33           ` Vladimir Medvedkin
  2015-10-30  7:17         ` Matthew Hall
  1 sibling, 1 reply; 24+ messages in thread
From: Vladimir Medvedkin @ 2015-10-27 10:35 UTC (permalink / raw)
  To: Matthew Hall; +Cc: dev

Hi Michal,

Try the patch below. I will send it via git.

Regards,
Vladimir

2015-10-26 21:40 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:

> > I can't apply patch 0001-... , could You check it please?
>
> I generated it from a rebase of my own copy of DPDK against DPDK upstream
> master.
>
> So I'm not sure why it would not apply against latest DPDK master.
>
> But I will try it and see what could be the reason.
>
> Matthew.
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
  2015-10-26 18:40       ` Matthew Hall
  2015-10-27 10:35         ` Vladimir Medvedkin
@ 2015-10-30  7:17         ` Matthew Hall
  1 sibling, 0 replies; 24+ messages in thread
From: Matthew Hall @ 2015-10-30  7:17 UTC (permalink / raw)
  To: Jastrzebski, MichalX K; +Cc: dev

On Mon, Oct 26, 2015 at 11:40:46AM -0700, Matthew Hall wrote:
> > I can't apply patch 0001-... , could You check it please? 
> 
> I generated it from a rebase of my own copy of DPDK against DPDK upstream 
> master.
> 
> So I'm not sure why it would not apply against latest DPDK master.
> 
> But I will try it and see what could be the reason.
> 
> Matthew.

Hello Michal,

I rechecked it.

The patch does apply perfectly to the latest master branch from 
git://dpdk.org/dpdk using git apply.

Can you take a second look? I compile my DPDK with the clang compiler BTW.

Matthew.

^ permalink raw reply	[flat|nested] 24+ messages in thread
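
For reference, a dry run shows exactly which hunks reject; the patch file
name below is illustrative:

  git clone git://dpdk.org/dpdk && cd dpdk
  git apply --check -v 0001-lpm-increase-number-of-next-hops.patch
  git am -3 0001-lpm-increase-number-of-next-hops.patch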

end of thread, other threads:[~2015-10-30  7:19 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-23 13:51 [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
2015-10-23 14:38   ` Bruce Richardson
2015-10-23 14:59     ` Jastrzebski, MichalX K
2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 2/3] examples: update of apps using librte_lpm (ipv4) Michal Jastrzebski
2015-10-23 13:51 ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
2015-10-23 14:21   ` Bruce Richardson
2015-10-23 14:33     ` Jastrzebski, MichalX K
2015-10-23 16:20 ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
2015-10-23 16:33   ` Stephen Hemminger
2015-10-23 18:38     ` Matthew Hall
2015-10-23 19:13       ` Vladimir Medvedkin
2015-10-23 19:59       ` Stephen Hemminger
2015-10-24  6:09   ` Matthew Hall
2015-10-25 17:52     ` Vladimir Medvedkin
     [not found]       ` <20151026115519.GA7576@MKJASTRX-MOBL>
2015-10-26 11:57         ` Jastrzebski, MichalX K
2015-10-26 14:03           ` Vladimir Medvedkin
2015-10-26 15:39             ` Michal Jastrzebski
2015-10-26 16:59               ` Vladimir Medvedkin
2015-10-26 12:13     ` Jastrzebski, MichalX K
2015-10-26 18:40       ` Matthew Hall
2015-10-27 10:35         ` Vladimir Medvedkin
2015-10-27 10:33           ` Vladimir Medvedkin
2015-10-30  7:17         ` Matthew Hall
