DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes
@ 2021-01-08  8:21 Ruifeng Wang
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-08  8:21 UTC (permalink / raw)
  Cc: dev, nd, vladimir.medvedkin, jerinj, drc, honnappa.nagarahalli,
	Ruifeng Wang

This series fixes a bug in the lpm4 vector lookup implementations.
When more than 256 tbl8 groups are created, lookupx4 could
retrieve next hop data from the wrong group.
The bug has been present since the next_hop field was expanded from
8 bits to 24 bits, and was inherited by the other implementations.
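The failure mode can be shown in isolation. In the sketch below, the helper names and the sample entry value are invented for illustration; only the (uint8_t) cast and the 0x00FFFFFF mask come from the patches, and 0x03000000 is assumed to represent the entry's valid/ext flag bits:

```c
#include <stdint.h>

/* Old, buggy extraction: the cast keeps only the low 8 bits of the
 * 24-bit tbl8 group index stored in the tbl24 entry. */
uint32_t group_idx_buggy(uint32_t tbl24_entry)
{
	return (uint8_t)tbl24_entry;
}

/* Fixed extraction: mask off the flag bits but keep all 24 index bits. */
uint32_t group_idx_fixed(uint32_t tbl24_entry)
{
	return tbl24_entry & 0x00FFFFFF;
}
```

For a group index below 256 both helpers agree, which is why the bug only shows up once more than 256 tbl8 groups exist; for group 300 (0x12C) the cast silently selects group 44.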

The test case is also updated to improve coverage so that such
failures are detected.

Ruifeng Wang (4):
  lpm: fix vector lookup for Arm
  lpm: fix vector lookup for x86
  lpm: fix vector lookup for ppc64
  test/lpm: improve coverage on tbl8

 app/test/test_lpm.c              | 22 ++++++++++++++--------
 lib/librte_lpm/rte_lpm_altivec.h |  8 ++++----
 lib/librte_lpm/rte_lpm_neon.h    |  8 ++++----
 lib/librte_lpm/rte_lpm_sse.h     |  8 ++++----
 4 files changed, 26 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
@ 2021-01-08  8:21 ` Ruifeng Wang
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-08  8:21 UTC (permalink / raw)
  To: Jerin Jacob, Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin,
	Jianbo Liu
  Cc: dev, nd, drc, honnappa.nagarahalli, stable

rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
groups are created. This is caused by incorrect type casting of the tbl8
group index stored in the tbl24 entry. The cast truncated the group
index, so the wrong tbl8 group was searched.

Fix the issue by applying the proper mask to the tbl24 entry to get the
tbl8 group index.
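For illustration, the per-lane index computation after the fix can be sketched standalone (variable names are invented; the 256-entry group size mirrors RTE_LPM_TBL8_GROUP_NUM_ENTRIES):

```c
#include <stdint.h>

#define TBL8_GROUP_NUM_ENTRIES 256 /* mirrors RTE_LPM_TBL8_GROUP_NUM_ENTRIES */

/* Sketch of one lane's tbl8 index computation after the fix: the lane
 * starts from the last byte of the IP, and the masked 24-bit group index
 * from the tbl24 entry selects which 256-entry group to read. */
uint32_t tbl8_index(uint32_t tbl24_entry, uint8_t ip_last_byte)
{
	return (uint32_t)ip_last_byte +
	       (tbl24_entry & 0x00FFFFFF) * TBL8_GROUP_NUM_ENTRIES;
}
```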

Fixes: cbc2f1dccfba ("lpm/arm: support NEON")
Cc: jerinj@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/librte_lpm/rte_lpm_neon.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_neon.h b/lib/librte_lpm/rte_lpm_neon.h
index 6c131d312..4642a866f 100644
--- a/lib/librte_lpm/rte_lpm_neon.h
+++ b/lib/librte_lpm/rte_lpm_neon.h
@@ -81,28 +81,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1



* [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
@ 2021-01-08  8:21 ` Ruifeng Wang
  2021-01-13 18:46   ` Medvedkin, Vladimir
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-08  8:21 UTC (permalink / raw)
  To: Bruce Richardson, Konstantin Ananyev, Vladimir Medvedkin,
	Michal Kobylinski, David Hunt
  Cc: dev, nd, jerinj, drc, honnappa.nagarahalli, Ruifeng Wang, stable

rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
groups are created. This is caused by incorrect type casting of the tbl8
group index stored in the tbl24 entry. The cast truncated the group
index, so the wrong tbl8 group was searched.

Fix the issue by applying the proper mask to the tbl24 entry to get the
tbl8 group index.

Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
Cc: michalx.kobylinski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/librte_lpm/rte_lpm_sse.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_sse.h b/lib/librte_lpm/rte_lpm_sse.h
index 44770b6ff..eaa863c52 100644
--- a/lib/librte_lpm/rte_lpm_sse.h
+++ b/lib/librte_lpm/rte_lpm_sse.h
@@ -82,28 +82,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1



* [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
@ 2021-01-08  8:21 ` Ruifeng Wang
  2021-01-11 21:29   ` David Christensen
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-08  8:21 UTC (permalink / raw)
  To: David Christensen, Bruce Richardson, Vladimir Medvedkin,
	Chao Zhu, Gowrishankar Muthukrishnan
  Cc: dev, nd, jerinj, honnappa.nagarahalli, Ruifeng Wang, stable

rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
groups are created. This is caused by incorrect type casting of the tbl8
group index stored in the tbl24 entry. The cast truncated the group
index, so the wrong tbl8 group was searched.

Fix the issue by applying the proper mask to the tbl24 entry to get the
tbl8 group index.

Fixes: d2cc7959342b ("lpm: add AltiVec for ppc64")
Cc: gowrishankar.m@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/librte_lpm/rte_lpm_altivec.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_altivec.h b/lib/librte_lpm/rte_lpm_altivec.h
index 228c41b38..4fbc1b595 100644
--- a/lib/librte_lpm/rte_lpm_altivec.h
+++ b/lib/librte_lpm/rte_lpm_altivec.h
@@ -88,28 +88,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1



* [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
                   ` (2 preceding siblings ...)
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
@ 2021-01-08  8:21 ` Ruifeng Wang
  2021-01-11 21:29   ` David Christensen
  2021-01-13 18:51   ` Medvedkin, Vladimir
  2021-01-13 14:52 ` [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes David Marchand
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-08  8:21 UTC (permalink / raw)
  To: Bruce Richardson, Vladimir Medvedkin
  Cc: dev, nd, jerinj, drc, honnappa.nagarahalli, Ruifeng Wang

The existing test case creates 256 tbl8 groups, a number that covers
only an 8-bit next_hop/group field. Since the next_hop/group field has
been extended to 24 bits, creating more than 256 groups in the test
improves the coverage.

Coverage was not expanded to reach the maximum supported group number,
because that would take too long to run as a fast-test.
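As a sanity check on the new bound, the iteration arithmetic can be reproduced standalone (the IPV4 macro below is re-derived for illustration rather than taken from the DPDK headers):

```c
#include <stdint.h>

/* Re-derived for illustration: builds a host-order IPv4 address the same
 * way the test constructs its loop bounds. */
#define IPV4(a, b, c, d) \
	((uint32_t)(a) << 24 | (uint32_t)(b) << 16 | \
	 (uint32_t)(c) << 8 | (uint32_t)(d))

/* Count how many /32 rules the updated test loop adds. */
uint32_t count_rules(void)
{
	uint32_t ip, n = 0;

	for (ip = IPV4(0, 0, 0, 0); ip <= IPV4(0, 1, 255, 0); ip += 256)
		n++;
	return n;
}
```

Each of the 512 rules lands in a distinct /24 and hence allocates its own tbl8 group, so group indices run up to 511 and no longer fit in 8 bits.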

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 app/test/test_lpm.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 258b2f67c..ee6c4280b 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -993,7 +993,7 @@ test13(void)
 }
 
 /*
- * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
+ * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
  * No more tbl8 extensions will be allowed. Now add one more rule that required
  * a tbl8 extension and get fail.
  * */
@@ -1008,28 +1008,34 @@ test14(void)
 	struct rte_lpm_config config;
 
 	config.max_rules = 256 * 32;
-	config.number_tbl8s = NUMBER_TBL8S;
+	config.number_tbl8s = 512;
 	config.flags = 0;
-	uint32_t ip, next_hop_add, next_hop_return;
+	uint32_t ip, next_hop_base, next_hop_return;
 	uint8_t depth;
 	int32_t status = 0;
+	xmm_t ipx4;
+	uint32_t hop[4];
 
 	/* Add enough space for 256 rules for every depth */
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
 	TEST_LPM_ASSERT(lpm != NULL);
 
 	depth = 32;
-	next_hop_add = 100;
+	next_hop_base = 100;
 	ip = RTE_IPV4(0, 0, 0, 0);
 
 	/* Add 256 rules that require a tbl8 extension */
-	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
-		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 		TEST_LPM_ASSERT(status == 0);
 
 		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
-				(next_hop_return == next_hop_add));
+				(next_hop_return == next_hop_base + ip));
+
+		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
 	}
 
 	/* All tbl8 extensions have been used above. Try to add one more and
@@ -1037,7 +1043,7 @@ test14(void)
 	ip = RTE_IPV4(1, 0, 0, 0);
 	depth = 32;
 
-	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 	TEST_LPM_ASSERT(status < 0);
 
 	rte_lpm_free(lpm);
-- 
2.25.1



* Re: [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
@ 2021-01-11 21:29   ` David Christensen
  0 siblings, 0 replies; 20+ messages in thread
From: David Christensen @ 2021-01-11 21:29 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin, Chao Zhu,
	Gowrishankar Muthukrishnan
  Cc: dev, nd, jerinj, honnappa.nagarahalli, stable



On 1/8/21 12:21 AM, Ruifeng Wang wrote:
> rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
> groups are created. This is caused by incorrect type casting of the tbl8
> group index stored in the tbl24 entry. The cast truncated the group
> index, so the wrong tbl8 group was searched.
> 
> Fix the issue by applying the proper mask to the tbl24 entry to get the
> tbl8 group index.
> 
> Fixes: d2cc7959342b ("lpm: add AltiVec for ppc64")
> Cc: gowrishankar.m@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>   lib/librte_lpm/rte_lpm_altivec.h | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/librte_lpm/rte_lpm_altivec.h b/lib/librte_lpm/rte_lpm_altivec.h
> index 228c41b38..4fbc1b595 100644
> --- a/lib/librte_lpm/rte_lpm_altivec.h
> +++ b/lib/librte_lpm/rte_lpm_altivec.h
> @@ -88,28 +88,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
>   	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[0] = i8.u32[0] +
> -			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
>   		tbl[0] = *ptbl;
>   	}
>   	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[1] = i8.u32[1] +
> -			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
>   		tbl[1] = *ptbl;
>   	}
>   	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[2] = i8.u32[2] +
> -			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
>   		tbl[2] = *ptbl;
>   	}
>   	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[3] = i8.u32[3] +
> -			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
>   		tbl[3] = *ptbl;
>   	}
> 

Tested-by: David Christensen <drc@linux.vnet.ibm.com>


* Re: [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
@ 2021-01-11 21:29   ` David Christensen
  2021-01-13 18:51   ` Medvedkin, Vladimir
  1 sibling, 0 replies; 20+ messages in thread
From: David Christensen @ 2021-01-11 21:29 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin
  Cc: dev, nd, jerinj, honnappa.nagarahalli



On 1/8/21 12:21 AM, Ruifeng Wang wrote:
> The existing test case creates 256 tbl8 groups, a number that covers
> only an 8-bit next_hop/group field. Since the next_hop/group field has
> been extended to 24 bits, creating more than 256 groups in the test
> improves the coverage.
> 
> Coverage was not expanded to reach the maximum supported group number,
> because that would take too long to run as a fast-test.
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>   app/test/test_lpm.c | 22 ++++++++++++++--------
>   1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
> index 258b2f67c..ee6c4280b 100644
> --- a/app/test/test_lpm.c
> +++ b/app/test/test_lpm.c
> @@ -993,7 +993,7 @@ test13(void)
>   }
> 
>   /*
> - * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
> + * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
>    * No more tbl8 extensions will be allowed. Now add one more rule that required
>    * a tbl8 extension and get fail.
>    * */
> @@ -1008,28 +1008,34 @@ test14(void)
>   	struct rte_lpm_config config;
> 
>   	config.max_rules = 256 * 32;
> -	config.number_tbl8s = NUMBER_TBL8S;
> +	config.number_tbl8s = 512;
>   	config.flags = 0;
> -	uint32_t ip, next_hop_add, next_hop_return;
> +	uint32_t ip, next_hop_base, next_hop_return;
>   	uint8_t depth;
>   	int32_t status = 0;
> +	xmm_t ipx4;
> +	uint32_t hop[4];
> 
>   	/* Add enough space for 256 rules for every depth */
>   	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
>   	TEST_LPM_ASSERT(lpm != NULL);
> 
>   	depth = 32;
> -	next_hop_add = 100;
> +	next_hop_base = 100;
>   	ip = RTE_IPV4(0, 0, 0, 0);
> 
>   	/* Add 256 rules that require a tbl8 extension */
> -	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
> -		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
>   		TEST_LPM_ASSERT(status == 0);
> 
>   		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
>   		TEST_LPM_ASSERT((status == 0) &&
> -				(next_hop_return == next_hop_add));
> +				(next_hop_return == next_hop_base + ip));
> +
> +		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
>   	}
> 
>   	/* All tbl8 extensions have been used above. Try to add one more and
> @@ -1037,7 +1043,7 @@ test14(void)
>   	ip = RTE_IPV4(1, 0, 0, 0);
>   	depth = 32;
> 
> -	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
>   	TEST_LPM_ASSERT(status < 0);
> 
>   	rte_lpm_free(lpm);
> 

Tested-by: David Christensen <drc@linux.vnet.ibm.com>


* Re: [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
                   ` (3 preceding siblings ...)
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
@ 2021-01-13 14:52 ` David Marchand
  2021-01-14  6:54   ` Ruifeng Wang
  2021-01-13 18:46 ` Medvedkin, Vladimir
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
  6 siblings, 1 reply; 20+ messages in thread
From: David Marchand @ 2021-01-13 14:52 UTC (permalink / raw)
  To: Ruifeng Wang, Vladimir Medvedkin, Bruce Richardson
  Cc: dev, nd, Jerin Jacob Kollanukkaran, David Christensen,
	Honnappa Nagarahalli

On Fri, Jan 8, 2021 at 9:22 AM Ruifeng Wang <ruifeng.wang@arm.com> wrote:
>
> This series fixes a bug in the lpm4 vector lookup implementations.
> When more than 256 tbl8 groups are created, lookupx4 could
> retrieve next hop data from the wrong group.
> The bug has been present since the next_hop field was expanded from
> 8 bits to 24 bits, and was inherited by the other implementations.

This is a single issue: I would squash those 3 patches as a single
patch (with 3 Fixes: tags).

>
> The test case is also updated to improve coverage so that such
> failures are detected.
>
> Ruifeng Wang (4):
>   lpm: fix vector lookup for Arm
>   lpm: fix vector lookup for x86
>   lpm: fix vector lookup for ppc64
>   test/lpm: improve coverage on tbl8

Reviews please?


-- 
David Marchand



* Re: [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
                   ` (4 preceding siblings ...)
  2021-01-13 14:52 ` [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes David Marchand
@ 2021-01-13 18:46 ` Medvedkin, Vladimir
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
  6 siblings, 0 replies; 20+ messages in thread
From: Medvedkin, Vladimir @ 2021-01-13 18:46 UTC (permalink / raw)
  To: Ruifeng Wang; +Cc: dev, nd, jerinj, drc, honnappa.nagarahalli

Hi Ruifeng,

LGTM, Thanks!

On 08/01/2021 08:21, Ruifeng Wang wrote:
> This series fixes a bug in the lpm4 vector lookup implementations.
> When more than 256 tbl8 groups are created, lookupx4 could
> retrieve next hop data from the wrong group.
> The bug has been present since the next_hop field was expanded from
> 8 bits to 24 bits, and was inherited by the other implementations.
> 
> The test case is also updated to improve coverage so that such
> failures are detected.
> 
> Ruifeng Wang (4):
>    lpm: fix vector lookup for Arm
>    lpm: fix vector lookup for x86
>    lpm: fix vector lookup for ppc64
>    test/lpm: improve coverage on tbl8
> 
>   app/test/test_lpm.c              | 22 ++++++++++++++--------
>   lib/librte_lpm/rte_lpm_altivec.h |  8 ++++----
>   lib/librte_lpm/rte_lpm_neon.h    |  8 ++++----
>   lib/librte_lpm/rte_lpm_sse.h     |  8 ++++----
>   4 files changed, 26 insertions(+), 20 deletions(-)
> 

Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>

-- 
Regards,
Vladimir


* Re: [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
@ 2021-01-13 18:46   ` Medvedkin, Vladimir
  0 siblings, 0 replies; 20+ messages in thread
From: Medvedkin, Vladimir @ 2021-01-13 18:46 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson, Konstantin Ananyev,
	Michal Kobylinski, David Hunt
  Cc: dev, nd, jerinj, drc, honnappa.nagarahalli, stable



On 08/01/2021 08:21, Ruifeng Wang wrote:
> rte_lpm_lookupx4 could return the wrong next hop when more than 256 tbl8
> groups are created. This is caused by incorrect type casting of the tbl8
> group index stored in the tbl24 entry. The cast truncated the group
> index, so the wrong tbl8 group was searched.
> 
> Fix the issue by applying the proper mask to the tbl24 entry to get the
> tbl8 group index.
> 
> Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
> Cc: michalx.kobylinski@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>   lib/librte_lpm/rte_lpm_sse.h | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/librte_lpm/rte_lpm_sse.h b/lib/librte_lpm/rte_lpm_sse.h
> index 44770b6ff..eaa863c52 100644
> --- a/lib/librte_lpm/rte_lpm_sse.h
> +++ b/lib/librte_lpm/rte_lpm_sse.h
> @@ -82,28 +82,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
>   	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[0] = i8.u32[0] +
> -			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
>   		tbl[0] = *ptbl;
>   	}
>   	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[1] = i8.u32[1] +
> -			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
>   		tbl[1] = *ptbl;
>   	}
>   	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[2] = i8.u32[2] +
> -			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
>   		tbl[2] = *ptbl;
>   	}
>   	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
>   			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
>   		i8.u32[3] = i8.u32[3] +
> -			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
>   		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
>   		tbl[3] = *ptbl;
>   	}
> 

Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>

-- 
Regards,
Vladimir


* Re: [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8
  2021-01-08  8:21 ` [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
  2021-01-11 21:29   ` David Christensen
@ 2021-01-13 18:51   ` Medvedkin, Vladimir
  2021-01-14  6:38     ` Ruifeng Wang
  1 sibling, 1 reply; 20+ messages in thread
From: Medvedkin, Vladimir @ 2021-01-13 18:51 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson; +Cc: dev, nd, jerinj, drc, honnappa.nagarahalli

Hi Ruifeng,

Please find my comment inline. Apart from that, it looks good.

On 08/01/2021 08:21, Ruifeng Wang wrote:
> The existing test case creates 256 tbl8 groups, a number that covers
> only an 8-bit next_hop/group field. Since the next_hop/group field has
> been extended to 24 bits, creating more than 256 groups in the test
> improves the coverage.
> 
> Coverage was not expanded to reach the maximum supported group number,
> because that would take too long to run as a fast-test.
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>   app/test/test_lpm.c | 22 ++++++++++++++--------
>   1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
> index 258b2f67c..ee6c4280b 100644
> --- a/app/test/test_lpm.c
> +++ b/app/test/test_lpm.c
> @@ -993,7 +993,7 @@ test13(void)
>   }
>   
>   /*
> - * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
> + * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
>    * No more tbl8 extensions will be allowed. Now add one more rule that required
>    * a tbl8 extension and get fail.
>    * */
> @@ -1008,28 +1008,34 @@ test14(void)
>   	struct rte_lpm_config config;
>   
>   	config.max_rules = 256 * 32;
> -	config.number_tbl8s = NUMBER_TBL8S;
> +	config.number_tbl8s = 512;
>   	config.flags = 0;
> -	uint32_t ip, next_hop_add, next_hop_return;
> +	uint32_t ip, next_hop_base, next_hop_return;
>   	uint8_t depth;
>   	int32_t status = 0;
> +	xmm_t ipx4;
> +	uint32_t hop[4];
>   
>   	/* Add enough space for 256 rules for every depth */
>   	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
>   	TEST_LPM_ASSERT(lpm != NULL);
>   
>   	depth = 32;
> -	next_hop_add = 100;
> +	next_hop_base = 100;
>   	ip = RTE_IPV4(0, 0, 0, 0);
>   
>   	/* Add 256 rules that require a tbl8 extension */
> -	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
> -		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
> +		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
>   		TEST_LPM_ASSERT(status == 0);
>   
>   		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
>   		TEST_LPM_ASSERT((status == 0) &&
> -				(next_hop_return == next_hop_add));
> +				(next_hop_return == next_hop_base + ip));
> +
> +		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
> +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> +		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);

I think it is worth checking all 4 returned next hops here.
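A possible shape for that check, sketched with an invented helper (assuming only lane 0 has a matching /32 rule installed, so the remaining lanes should fall back to the lookup's default value):

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented helper, not from the patch: validate all four lanes returned
 * by rte_lpm_lookupx4 for the batch {ip, ip+1, ip+2, ip+3}, where only
 * ip itself has a /32 rule and defv is the default next hop that was
 * passed to the lookup. */
bool lanes_ok(const uint32_t hop[4], uint32_t expected_hop0, uint32_t defv)
{
	return hop[0] == expected_hop0 &&
	       hop[1] == defv && hop[2] == defv && hop[3] == defv;
}
```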

>   	}
>   
>   	/* All tbl8 extensions have been used above. Try to add one more and
> @@ -1037,7 +1043,7 @@ test14(void)
>   	ip = RTE_IPV4(1, 0, 0, 0);
>   	depth = 32;
>   
> -	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> +	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
>   	TEST_LPM_ASSERT(status < 0);
>   
>   	rte_lpm_free(lpm);
> 


-- 
Regards,
Vladimir


* Re: [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8
  2021-01-13 18:51   ` Medvedkin, Vladimir
@ 2021-01-14  6:38     ` Ruifeng Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:38 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Bruce Richardson
  Cc: dev, nd, jerinj, drc, Honnappa Nagarahalli, nd


> -----Original Message-----
> From: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Sent: Thursday, January 14, 2021 2:52 AM
> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Bruce Richardson
> <bruce.richardson@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; jerinj@marvell.com;
> drc@linux.vnet.ibm.com; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: Re: [PATCH 4/4] test/lpm: improve coverage on tbl8
> 
> Hi Ruifeng,
> 
> Please find comment inlined. Apart from that looks good.
> 
> On 08/01/2021 08:21, Ruifeng Wang wrote:
> > The existing test case creates 256 tbl8 groups, a number that covers
> > only an 8-bit next_hop/group field. Since the next_hop/group field
> > has been extended to 24 bits, creating more than 256 groups in the
> > test improves the coverage.
> >
> > Coverage was not expanded to reach the maximum supported group
> > number, because that would take too long to run as a fast-test.
> >
> > Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> >   app/test/test_lpm.c | 22 ++++++++++++++--------
> >   1 file changed, 14 insertions(+), 8 deletions(-)
> >
> > diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c index
> > 258b2f67c..ee6c4280b 100644
> > --- a/app/test/test_lpm.c
> > +++ b/app/test/test_lpm.c
> > @@ -993,7 +993,7 @@ test13(void)
> >   }
> >
> >   /*
> > - * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8
> extension.
> > + * For TBL8 extension exhaustion. Add 512 rules that require a tbl8
> extension.
> >    * No more tbl8 extensions will be allowed. Now add one more rule that
> required
> >    * a tbl8 extension and get fail.
> >    * */
> > @@ -1008,28 +1008,34 @@ test14(void)
> >   	struct rte_lpm_config config;
> >
> >   	config.max_rules = 256 * 32;
> > -	config.number_tbl8s = NUMBER_TBL8S;
> > +	config.number_tbl8s = 512;
> >   	config.flags = 0;
> > -	uint32_t ip, next_hop_add, next_hop_return;
> > +	uint32_t ip, next_hop_base, next_hop_return;
> >   	uint8_t depth;
> >   	int32_t status = 0;
> > +	xmm_t ipx4;
> > +	uint32_t hop[4];
> >
> >   	/* Add enough space for 256 rules for every depth */
> >   	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
> >   	TEST_LPM_ASSERT(lpm != NULL);
> >
> >   	depth = 32;
> > -	next_hop_add = 100;
> > +	next_hop_base = 100;
> >   	ip = RTE_IPV4(0, 0, 0, 0);
> >
> >   	/* Add 256 rules that require a tbl8 extension */
> > -	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
> > -		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> > +	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
> > +		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
> >   		TEST_LPM_ASSERT(status == 0);
> >
> >   		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
> >   		TEST_LPM_ASSERT((status == 0) &&
> > -				(next_hop_return == next_hop_add));
> > +				(next_hop_return == next_hop_base + ip));
> > +
> > +		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
> > +		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
> > +		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
> 
> I think it is worth to check all 4 returned next hops here.

Agree. I will send out v2.

> 
> >   	}
> >
> >   	/* All tbl8 extensions have been used above. Try to add one more
> > and @@ -1037,7 +1043,7 @@ test14(void)
> >   	ip = RTE_IPV4(1, 0, 0, 0);
> >   	depth = 32;
> >
> > -	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
> > +	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
> >   	TEST_LPM_ASSERT(status < 0);
> >
> >   	rte_lpm_free(lpm);
> >
> 
> 
> --
> Regards,
> Vladimir

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes
  2021-01-13 14:52 ` [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes David Marchand
@ 2021-01-14  6:54   ` Ruifeng Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:54 UTC (permalink / raw)
  To: David Marchand, Vladimir Medvedkin, Bruce Richardson
  Cc: dev, nd, jerinj, David Christensen, Honnappa Nagarahalli, nd


> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Wednesday, January 13, 2021 10:53 PM
> To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Vladimir Medvedkin
> <vladimir.medvedkin@intel.com>; Bruce Richardson
> <bruce.richardson@intel.com>
> Cc: dev <dev@dpdk.org>; nd <nd@arm.com>; jerinj@marvell.com; David
> Christensen <drc@linux.vnet.ibm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: Re: [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes
> 
> On Fri, Jan 8, 2021 at 9:22 AM Ruifeng Wang <ruifeng.wang@arm.com>
> wrote:
> >
> > This series fixed bug in lpm4 vector lookup implementations.
> > When more than 256 tbl8 groups are created, lookupx4 could retrieve
> > next hop data from wrong group.
> > The bug is there since next_hop field was expanded from 8-bit to
> > 24-bit, and inherited by other implementations.
> 
> This is a single issue: I would squash those 3 patches as a single patch (with 3
> Fixes: tags).

I split the patch for review purposes.
It is OK to squash them into a single one on merge.
Thank you.

> 
> >
> > Also updated test case to improve coverage to detect such failure.
> >
> > Ruifeng Wang (4):
> >   lpm: fix vector lookup for Arm
> >   lpm: fix vector lookup for x86
> >   lpm: fix vector lookup for ppc64
> >   test/lpm: improve coverage on tbl8
> 
> Reviews please?
> 
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 0/4] lpm lookupx4 fixes
  2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
                   ` (5 preceding siblings ...)
  2021-01-13 18:46 ` Medvedkin, Vladimir
@ 2021-01-14  6:59 ` Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
                     ` (4 more replies)
  6 siblings, 5 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:59 UTC (permalink / raw)
  Cc: dev, nd, vladimir.medvedkin, jerinj, drc, honnappa.nagarahalli,
	Ruifeng Wang

This series fixes a bug in the lpm4 vector lookup implementations.
When more than 256 tbl8 groups are created, lookupx4 could
retrieve next hop data from the wrong group.
The bug has been present since the next_hop field was expanded
from 8 bits to 24 bits, and was inherited by the other
implementations.

Also updated the test case to improve coverage and detect such
failures.

Ruifeng Wang (4):
  lpm: fix vector lookup for Arm
  lpm: fix vector lookup for x86
  lpm: fix vector lookup for ppc64
  test/lpm: improve coverage on tbl8

 app/test/test_lpm.c              | 25 +++++++++++++++++--------
 lib/librte_lpm/rte_lpm_altivec.h |  8 ++++----
 lib/librte_lpm/rte_lpm_neon.h    |  8 ++++----
 lib/librte_lpm/rte_lpm_sse.h     |  8 ++++----
 4 files changed, 29 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 1/4] lpm: fix vector lookup for Arm
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
@ 2021-01-14  6:59   ` Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:59 UTC (permalink / raw)
  To: Jerin Jacob, Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin,
	Jianbo Liu
  Cc: dev, nd, drc, honnappa.nagarahalli, stable

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. This is caused by an incorrect type cast of the tbl8
group index that is stored in the tbl24 entry. The cast truncates the
group index and hence the wrong tbl8 group is searched.

The issue is fixed by applying the proper mask to the tbl24 entry to get
the tbl8 group index.
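The faulty and corrected index computations can be isolated in a small
standalone sketch (the macro value is assumed to match DPDK's 256-entry
tbl8 groups; the helper names are hypothetical, not DPDK API):

```c
#include <stdint.h>

#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES 256 /* assumed to match DPDK */

/* Buggy variant: the uint8_t cast keeps only the low 8 bits of the
 * 24-bit tbl8 group index stored in the tbl24 entry. */
static uint32_t tbl8_index_buggy(uint32_t tbl24_entry)
{
	return (uint8_t)tbl24_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
}

/* Fixed variant: the mask keeps the full 24-bit group index. */
static uint32_t tbl8_index_fixed(uint32_t tbl24_entry)
{
	return (tbl24_entry & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
}
```

For group indexes up to 255 both variants agree, which is why tests with
only 256 groups never caught the bug; for group 300 the cast truncates
the index to 44 and the lookup lands in the wrong group.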

Fixes: cbc2f1dccfba ("lpm/arm: support NEON")
Cc: jerinj@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_lpm/rte_lpm_neon.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_neon.h b/lib/librte_lpm/rte_lpm_neon.h
index 6c131d312..4642a866f 100644
--- a/lib/librte_lpm/rte_lpm_neon.h
+++ b/lib/librte_lpm/rte_lpm_neon.h
@@ -81,28 +81,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 2/4] lpm: fix vector lookup for x86
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
@ 2021-01-14  6:59   ` Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:59 UTC (permalink / raw)
  To: Bruce Richardson, Konstantin Ananyev, Vladimir Medvedkin,
	David Hunt, Michal Kobylinski
  Cc: dev, nd, jerinj, drc, honnappa.nagarahalli, Ruifeng Wang, stable

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. This is caused by an incorrect type cast of the tbl8
group index that is stored in the tbl24 entry. The cast truncates the
group index and hence the wrong tbl8 group is searched.

The issue is fixed by applying the proper mask to the tbl24 entry to get
the tbl8 group index.

Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
Cc: michalx.kobylinski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_lpm/rte_lpm_sse.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_sse.h b/lib/librte_lpm/rte_lpm_sse.h
index 44770b6ff..eaa863c52 100644
--- a/lib/librte_lpm/rte_lpm_sse.h
+++ b/lib/librte_lpm/rte_lpm_sse.h
@@ -82,28 +82,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 3/4] lpm: fix vector lookup for ppc64
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
@ 2021-01-14  6:59   ` Ruifeng Wang
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
  2021-01-14 15:25   ` [dpdk-dev] [PATCH v2 0/4] lpm lookupx4 fixes David Marchand
  4 siblings, 0 replies; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:59 UTC (permalink / raw)
  To: David Christensen, Bruce Richardson, Vladimir Medvedkin,
	Gowrishankar Muthukrishnan, Chao Zhu
  Cc: dev, nd, jerinj, honnappa.nagarahalli, Ruifeng Wang, stable

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. This is caused by an incorrect type cast of the tbl8
group index that is stored in the tbl24 entry. The cast truncates the
group index and hence the wrong tbl8 group is searched.

The issue is fixed by applying the proper mask to the tbl24 entry to get
the tbl8 group index.

Fixes: d2cc7959342b ("lpm: add AltiVec for ppc64")
Cc: gowrishankar.m@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_lpm/rte_lpm_altivec.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_altivec.h b/lib/librte_lpm/rte_lpm_altivec.h
index 228c41b38..4fbc1b595 100644
--- a/lib/librte_lpm/rte_lpm_altivec.h
+++ b/lib/librte_lpm/rte_lpm_altivec.h
@@ -88,28 +88,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
                     ` (2 preceding siblings ...)
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
@ 2021-01-14  6:59   ` Ruifeng Wang
  2021-01-14 11:14     ` Medvedkin, Vladimir
  2021-01-14 15:25   ` [dpdk-dev] [PATCH v2 0/4] lpm lookupx4 fixes David Marchand
  4 siblings, 1 reply; 20+ messages in thread
From: Ruifeng Wang @ 2021-01-14  6:59 UTC (permalink / raw)
  To: Bruce Richardson, Vladimir Medvedkin
  Cc: dev, nd, jerinj, drc, honnappa.nagarahalli, Ruifeng Wang

Existing test cases create 256 tbl8 groups for testing, which covers
only an 8-bit next_hop/group field. Since the next_hop/group field has
been extended to 24 bits, creating more than 256 groups in the tests
improves the coverage.

Coverage was not expanded to the maximum supported group number, because
that would take too long to run for this fast-test.
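Why 256 groups were never enough can be shown with a toy model (a sketch
only; the flat `tbl8` array and `model_lookup` are stand-ins, not the
DPDK structures): under the buggy uint8_t cast, every group index up to
255 still maps to itself, so the first observable mismatch requires a
257th group.

```c
#include <stdint.h>

#define GROUPS        512 /* matches the test's number_tbl8s */
#define GROUP_ENTRIES 256

static uint32_t tbl8[GROUPS * GROUP_ENTRIES];

/* Model lookup: here the tbl24 entry is simply the group index. */
static uint32_t model_lookup(uint32_t group, int buggy)
{
	uint32_t idx = buggy ? (uint8_t)group : (group & 0x00FFFFFF);
	return tbl8[idx * GROUP_ENTRIES];
}

/* Returns the first group whose buggy lookup disagrees with the
 * fixed one, or -1 if they always agree. */
static int first_mismatch(void)
{
	for (uint32_t g = 0; g < GROUPS; g++)
		for (uint32_t e = 0; e < GROUP_ENTRIES; e++)
			tbl8[g * GROUP_ENTRIES + e] = g; /* group g stores hop g */

	for (uint32_t g = 0; g < GROUPS; g++)
		if (model_lookup(g, 1) != model_lookup(g, 0))
			return (int)g;
	return -1;
}
```

With 512 groups the first divergence appears exactly at group 256, which
is why the test now iterates over twice the old address range.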

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
v2:
Check all 4 returned next hops. (Vladimir)

 app/test/test_lpm.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 258b2f67c..556f5a67b 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -993,7 +993,7 @@ test13(void)
 }
 
 /*
- * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
+ * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
  * No more tbl8 extensions will be allowed. Now add one more rule that required
  * a tbl8 extension and get fail.
  * */
@@ -1008,28 +1008,37 @@ test14(void)
 	struct rte_lpm_config config;
 
 	config.max_rules = 256 * 32;
-	config.number_tbl8s = NUMBER_TBL8S;
+	config.number_tbl8s = 512;
 	config.flags = 0;
-	uint32_t ip, next_hop_add, next_hop_return;
+	uint32_t ip, next_hop_base, next_hop_return;
 	uint8_t depth;
 	int32_t status = 0;
+	xmm_t ipx4;
+	uint32_t hop[4];
 
 	/* Add enough space for 256 rules for every depth */
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
 	TEST_LPM_ASSERT(lpm != NULL);
 
 	depth = 32;
-	next_hop_add = 100;
+	next_hop_base = 100;
 	ip = RTE_IPV4(0, 0, 0, 0);
 
 	/* Add 256 rules that require a tbl8 extension */
-	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
-		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 		TEST_LPM_ASSERT(status == 0);
 
 		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
-				(next_hop_return == next_hop_add));
+				(next_hop_return == next_hop_base + ip));
+
+		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
+		TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
 	}
 
 	/* All tbl8 extensions have been used above. Try to add one more and
@@ -1037,7 +1046,7 @@ test14(void)
 	ip = RTE_IPV4(1, 0, 0, 0);
 	depth = 32;
 
-	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 	TEST_LPM_ASSERT(status < 0);
 
 	rte_lpm_free(lpm);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
@ 2021-01-14 11:14     ` Medvedkin, Vladimir
  0 siblings, 0 replies; 20+ messages in thread
From: Medvedkin, Vladimir @ 2021-01-14 11:14 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson; +Cc: dev, nd, jerinj, drc, honnappa.nagarahalli



On 14/01/2021 06:59, Ruifeng Wang wrote:
> Existing test cases create 256 tbl8 groups for testing. The number covers
> only 8 bit next_hop/group field. Since the next_hop/group field had been
> extended to 24-bits, creating more than 256 groups in tests can improve
> the coverage.
> 
> Coverage was not expanded to reach the max supported group number, because
> it would take too much time to run for this fast-test.
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Tested-by: David Christensen <drc@linux.vnet.ibm.com>
> Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
> v2:
> Check all 4 returned next hops. (Vladimir)
> 
> ...

Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/4] lpm lookupx4 fixes
  2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
                     ` (3 preceding siblings ...)
  2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
@ 2021-01-14 15:25   ` David Marchand
  4 siblings, 0 replies; 20+ messages in thread
From: David Marchand @ 2021-01-14 15:25 UTC (permalink / raw)
  To: Ruifeng Wang
  Cc: dev, nd, Vladimir Medvedkin, Jerin Jacob Kollanukkaran,
	David Christensen, Honnappa Nagarahalli

On Thu, Jan 14, 2021 at 7:59 AM Ruifeng Wang <ruifeng.wang@arm.com> wrote:
>
> This series fixed bug in lpm4 vector lookup implementations.
> When more than 256 tbl8 groups are created, lookupx4 could
> retrieve next hop data from wrong group.
> The bug is there since next_hop field was expanded from
> 8-bit to 24-bit, and inherited by other implementations.
>
> Also updated test case to improve coverage to detect such
> failure.
>
> Ruifeng Wang (4):
>   lpm: fix vector lookup for Arm
>   lpm: fix vector lookup for x86
>   lpm: fix vector lookup for ppc64
>   test/lpm: improve coverage on tbl8
>
>  app/test/test_lpm.c              | 25 +++++++++++++++++--------
>  lib/librte_lpm/rte_lpm_altivec.h |  8 ++++----
>  lib/librte_lpm/rte_lpm_neon.h    |  8 ++++----
>  lib/librte_lpm/rte_lpm_sse.h     |  8 ++++----
>  4 files changed, 29 insertions(+), 20 deletions(-)

Squashed patches 1-3 into one and applied the series, thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-01-14 15:25 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-08  8:21 [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes Ruifeng Wang
2021-01-08  8:21 ` [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
2021-01-08  8:21 ` [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
2021-01-13 18:46   ` Medvedkin, Vladimir
2021-01-08  8:21 ` [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
2021-01-11 21:29   ` David Christensen
2021-01-08  8:21 ` [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
2021-01-11 21:29   ` David Christensen
2021-01-13 18:51   ` Medvedkin, Vladimir
2021-01-14  6:38     ` Ruifeng Wang
2021-01-13 14:52 ` [dpdk-dev] [PATCH 0/4] lpm lookupx4 fixes David Marchand
2021-01-14  6:54   ` Ruifeng Wang
2021-01-13 18:46 ` Medvedkin, Vladimir
2021-01-14  6:59 ` [dpdk-dev] [PATCH v2 " Ruifeng Wang
2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 1/4] lpm: fix vector lookup for Arm Ruifeng Wang
2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 2/4] lpm: fix vector lookup for x86 Ruifeng Wang
2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 3/4] lpm: fix vector lookup for ppc64 Ruifeng Wang
2021-01-14  6:59   ` [dpdk-dev] [PATCH v2 4/4] test/lpm: improve coverage on tbl8 Ruifeng Wang
2021-01-14 11:14     ` Medvedkin, Vladimir
2021-01-14 15:25   ` [dpdk-dev] [PATCH v2 0/4] lpm lookupx4 fixes David Marchand
