DPDK patches and discussions
* [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup
@ 2020-03-09 12:43 Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
                   ` (13 more replies)
  0 siblings, 14 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

This patch series implements vectorized lookups using AVX512 for the
IPv4 dir24_8 and IPv6 trie algorithms.
It also introduces rte_fib_set_lookup_fn() to change the lookup function
type at runtime, and adds an option to select the lookup function type in
the testfib application.
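
For illustration, a minimal usage sketch (not part of the series; the FIB
creation and the ips/next_hops buffers are assumed to be set up by the
caller) of how an application could opt in to the AVX512 lookup at runtime
and fall back to the default scalar one:

#include <stdio.h>
#include <rte_fib.h>

/* Switch a DIR24_8 FIB to the AVX512 bulk lookup if possible, then run a
 * bulk lookup; the implementation selected here is used by all subsequent
 * rte_fib_lookup_bulk() calls on this FIB. */
static void
lookup_with_avx512(struct rte_fib *fib, uint32_t *ips,
        uint64_t *next_hops, int n)
{
        if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR) != 0)
                /* -EINVAL: vector lookup not available for this build or
                 * FIB type, keep the scalar lookup set at create time */
                printf("AVX512 lookup unavailable, using scalar\n");

        rte_fib_lookup_bulk(fib, ips, next_hops, n);
}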

Vladimir Medvedkin (6):
  eal: introduce zmm type for AVX 512-bit
  fib: make lookup function type configurable
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                               |  58 +++++-
 lib/librte_eal/common/include/arch/x86/rte_vect.h |  20 ++
 lib/librte_fib/dir24_8.c                          | 103 ++++++++--
 lib/librte_fib/dir24_8.h                          |   2 +-
 lib/librte_fib/dir24_8_avx512.h                   | 116 +++++++++++
 lib/librte_fib/rte_fib.c                          |  20 +-
 lib/librte_fib/rte_fib.h                          |  23 +++
 lib/librte_fib/rte_fib6.c                         |  19 +-
 lib/librte_fib/rte_fib6.h                         |  21 ++
 lib/librte_fib/rte_fib_version.map                |   2 +
 lib/librte_fib/trie.c                             |  83 ++++++--
 lib/librte_fib/trie.h                             |   2 +-
 lib/librte_fib/trie_avx512.h                      | 231 ++++++++++++++++++++++
 13 files changed, 670 insertions(+), 30 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


* [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-03-09 16:39   ` Jerin Jacob
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable Vladimir Medvedkin
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a new data type for manipulating 512-bit AVX values.
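
A minimal sketch (illustrative only, assumes an AVX512F-capable compiler
and CPU) of the lane-granular access the union adds on top of the raw
__m512i type:

#include <stdint.h>
#include <rte_vect.h>

/* Sum the sixteen 32-bit lanes of a 512-bit value through the union,
 * without any extract/reduce intrinsics. */
static inline uint32_t
zmm_sum_u32(__m512i v)
{
        rte_zmm_t z = { .z = v };
        uint32_t i, sum = 0;

        for (i = 0; i < ZMM_SIZE / sizeof(uint32_t); i++)
                sum += z.u32[i];
        return sum;
}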

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_eal/common/include/arch/x86/rte_vect.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/common/include/arch/x86/rte_vect.h b/lib/librte_eal/common/include/arch/x86/rte_vect.h
index df5a607..09f30e6 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_vect.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_vect.h
@@ -90,6 +90,26 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+typedef __m512i zmm_t;
+
+#define	ZMM_SIZE	(sizeof(zmm_t))
+#define	ZMM_MASK	(ZMM_SIZE - 1)
+
+typedef union rte_zmm {
+	zmm_t	 z;
+	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[ZMM_SIZE / sizeof(double)];
+} rte_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


* [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-04-01  5:47   ` Ray Kinsella
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add a new rte_fib_set_lookup_fn() so the user can change the lookup
function type at runtime.
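
For illustration, a hedged sketch (not part of the patch; "fib" is assumed
to be a DIR24_8 FIB created earlier) of how an application might cycle
through the three scalar implementations, e.g. to compare them on the same
routing table:

#include <rte_common.h>
#include <rte_fib.h>

static const enum rte_fib_dir24_8_lookup_type scalar_types[] = {
        RTE_FIB_DIR24_8_SCALAR_MACRO,   /* default installed at create time */
        RTE_FIB_DIR24_8_SCALAR_INLINE,
        RTE_FIB_DIR24_8_SCALAR_UNI,
};

/* rte_fib_set_lookup_fn() returns 0 on success and -EINVAL for an unknown
 * type or a non-DIR24_8 FIB; on failure the previous function stays set. */
static void
cycle_scalar_lookups(struct rte_fib *fib)
{
        unsigned int i;

        for (i = 0; i < RTE_DIM(scalar_types); i++) {
                if (rte_fib_set_lookup_fn(fib, scalar_types[i]) != 0)
                        continue;
                /* ... run rte_fib_lookup_bulk() here and measure ... */
        }
}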

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 22 ++++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 63 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..825d061 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..59120b5 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,20 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index d06c5ef..0e98775 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -47,6 +47,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -185,4 +191,20 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type);
+
 #endif /* _RTE_FIB_H_ */
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


* [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-04-01  5:54   ` Ray Kinsella
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.
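
For orientation, a scalar sketch (illustrative only, 4-byte next-hop
entries assumed) of what every AVX512 lane in the code below computes: the
least significant bit of an entry flags a tbl8 extension and the remaining
bits hold either the next hop or the tbl8 group index.

#include <stdint.h>

static inline uint64_t
dir24_8_lookup_one_4b(const uint32_t *tbl24, const uint32_t *tbl8,
        uint32_t ip)
{
        uint32_t e = tbl24[ip >> 8];    /* index by 24 most significant bits */

        if (e & 1)                      /* extended entry: use the last IP byte */
                e = tbl8[((e >> 1) << 8) + (ip & 0xff)];
        return e >> 1;                  /* strip the "extended" flag bit */
}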

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c        |  71 ++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h | 116 ++++++++++++++++++++++++++++++++++++++++
 lib/librte_fib/rte_fib.h        |   3 +-
 3 files changed, 189 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061..9f51dfc 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -245,6 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+#ifdef __AVX512F__
+
+#include "dir24_8_avx512.h"
+
+static void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
+
+#endif /* __AVX512F__ */
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
@@ -285,6 +341,21 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef __AVX512F__
+	case RTE_FIB_DIR24_8_VECTOR:
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..3b6680c
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+#include <rte_vect.h>
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiller happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 0e98775..89d0f12 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -50,7 +50,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR
 };
 
 /** FIB configuration structure */
-- 
2.7.4


* [dpdk-dev] [PATCH 4/6] fib6: make lookup function type configurable
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (2 preceding siblings ...)
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to trie_get_lookup_fn().
It now only supports RTE_FIB6_TRIE_SCALAR.

Add a new rte_fib6_set_lookup_fn() so the user can change the lookup
function type at runtime.
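
Usage mirrors the IPv4 API; a minimal sketch (illustrative only, with this
patch only the scalar type exists, the vector type is added in the next
patch):

#include <rte_fib6.h>

/* Returns 0 on success, -EINVAL for an unsupported type or a non-TRIE FIB;
 * on failure the previously installed lookup function is kept. */
static int
use_scalar_trie_lookup(struct rte_fib6 *fib6)
{
        return rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR);
}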

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 19 ++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 20 ++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..9eff712 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,20 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 4268704..7d2fd9f 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -48,6 +48,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -190,4 +194,20 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type);
+
 #endif /* _RTE_FIB6_H_ */
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..63c519a 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


* [dpdk-dev] [PATCH 5/6] fib6: introduce AVX512 lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (3 preceding siblings ...)
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a new lookup implementation for the FIB6 trie algorithm using the
AVX512 instruction set.
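
For orientation, a scalar sketch (illustrative only, 2-byte next-hop
entries assumed) of the per-address traversal that the vector code below
performs in lock-step across lanes: the first three address bytes index
tbl24, and while the entry's least significant bit is set the next address
byte selects a deeper 256-entry tbl8 group.

#include <stdint.h>
#include <rte_fib6.h>

static inline uint64_t
trie_lookup_one_2b(const uint16_t *tbl24, const uint16_t *tbl8,
        const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE])
{
        uint32_t e = tbl24[(ip[0] << 16) | (ip[1] << 8) | ip[2]];
        int i = 3;

        while ((e & 1) && i < RTE_FIB6_IPV6_ADDR_SIZE)
                e = tbl8[((e >> 1) << 8) + ip[i++]];
        return e >> 1;                  /* strip the "extended" flag bit */
}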

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  58 +++++++++++
 lib/librte_fib/trie_avx512.h | 231 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 291 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 7d2fd9f..f1f8f38 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -49,7 +49,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a..b983b3a 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -146,6 +146,51 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+#ifdef __AVX512F__
+
+#include "trie_avx512.h"
+
+static void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+static void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+static void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
+
+#endif /* __AVX512F__ */
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
@@ -169,6 +214,19 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef __AVX512F__
+	case RTE_FIB6_TRIE_VECTOR:
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif /* __AVX512F__ */
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..f60bbd2
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+#include <rte_vect.h>
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const rte_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const rte_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const rte_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiller happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const rte_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


* [dpdk-dev] [PATCH 6/6] app/testfib: add support for different lookup functions
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (4 preceding siblings ...)
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-03-09 12:43 ` Vladimir Medvedkin
  2020-04-16  9:55 ` [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Thomas Monjalon
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-03-09 12:43 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a -v option to switch between the different lookup implementations
in order to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 5fb67f3..926b59a 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -636,7 +638,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of loookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -679,7 +685,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -767,6 +773,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -844,6 +866,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1023,6 +1063,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


* Re: [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-03-09 16:39   ` Jerin Jacob
  2020-03-10 14:44     ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Jerin Jacob @ 2020-03-09 16:39 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dpdk-dev, Ananyev, Konstantin, Richardson, Bruce, Gavin Hu

On Mon, Mar 9, 2020 at 6:14 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> New data type to manipulate 512 bit AVX values.
>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_eal/common/include/arch/x86/rte_vect.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_vect.h b/lib/librte_eal/common/include/arch/x86/rte_vect.h
> index df5a607..09f30e6 100644
> --- a/lib/librte_eal/common/include/arch/x86/rte_vect.h
> +++ b/lib/librte_eal/common/include/arch/x86/rte_vect.h
> @@ -90,6 +90,26 @@ __extension__ ({                 \
>  })
>  #endif /* (defined(__ICC) && __ICC < 1210) */
>
> +#ifdef __AVX512F__
> +
> +typedef __m512i zmm_t;
> +
> +#define        ZMM_SIZE        (sizeof(zmm_t))
> +#define        ZMM_MASK        (ZMM_SIZE - 1)
> +
> +typedef union rte_zmm {
> +       zmm_t    z;
> +       ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
> +       xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
> +       uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
> +       uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
> +       uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
> +       uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
> +       double   pd[ZMM_SIZE / sizeof(double)];

Are we missing __attribute__((aligned(64))) here?

> +} rte_zmm_t;

IMO, due to legacy reasons, we have selected rte_xmm_t, rte_ymm_t for
128 and 256 bit operations in public APIs [1].

# Not sure where xmm_t and ymm_t and the new zmm_t come from? Is this name
x86 arch-specific? If so,
why not give it a more generic name, rte_512i_t or something?
# Currently, in every arch file we are repeating the definition for
rte_xmm_t. Why not make this a generic definition
in a common file, i.e. the rte_zmm_t or rte_512i_t definition in
./lib/librte_eal/common/include/generic/rte_vect.h?
# Currently ./lib/librte_eal/common/include/generic/rte_vect.h has a
definition for rte_vXsY_t for vector representation; would that
be enough for the public API? Do we need a new type?


[1]
rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
uint32_t defv)


> +
> +#endif /* __AVX512F__ */
> +
>  #ifdef __cplusplus
>  }
>  #endif
> --
> 2.7.4
>

* Re: [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit
  2020-03-09 16:39   ` Jerin Jacob
@ 2020-03-10 14:44     ` Medvedkin, Vladimir
  2020-03-20  8:23       ` Jerin Jacob
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-03-10 14:44 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dpdk-dev, Ananyev, Konstantin, Richardson, Bruce, Gavin Hu

Hi Jerin,

On 09/03/2020 16:39, Jerin Jacob wrote:
> On Mon, Mar 9, 2020 at 6:14 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>> New data type to manipulate 512 bit AVX values.
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
>>   lib/librte_eal/common/include/arch/x86/rte_vect.h | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/lib/librte_eal/common/include/arch/x86/rte_vect.h b/lib/librte_eal/common/include/arch/x86/rte_vect.h
>> index df5a607..09f30e6 100644
>> --- a/lib/librte_eal/common/include/arch/x86/rte_vect.h
>> +++ b/lib/librte_eal/common/include/arch/x86/rte_vect.h
>> @@ -90,6 +90,26 @@ __extension__ ({                 \
>>   })
>>   #endif /* (defined(__ICC) && __ICC < 1210) */
>>
>> +#ifdef __AVX512F__
>> +
>> +typedef __m512i zmm_t;
>> +
>> +#define        ZMM_SIZE        (sizeof(zmm_t))
>> +#define        ZMM_MASK        (ZMM_SIZE - 1)
>> +
>> +typedef union rte_zmm {
>> +       zmm_t    z;
>> +       ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
>> +       xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
>> +       uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
>> +       uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
>> +       uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
>> +       uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
>> +       double   pd[ZMM_SIZE / sizeof(double)];
> Are we missing __attribute__((aligned(64))) here?
Agree. While modern compilers align __m512i by default, some old ones
could fail to align it. Please correct me if I'm wrong.
>
>> +} rte_zmm_t;
> IMO, Due to legacy reason, we have selected  rte_xmm_t, rte_ymm_t for
> 128 and 256 operations in public APIs[1]
As for me, since these functions are inlined, the prototype should be
changed to take uint32_t ip[4] instead of passing a vector type as an argument.
> # Not sure where xmm_t and ymm_t and new zmm_t come from? Is this name
> x86 arch-specific?
Yes, that's why they are in arch/x86/rte_vect.h
> If so,
> why not give the more generic name rte_512i_t or something?
> # Currently, In every arch file, we are repeating the definition for
> rte_xmm_t, Why not make, this generic definition
> in common file. ie.  rte_zmm_t or rte_512i_t definition in common
> file(./lib/librte_eal/common/include/generic/rte_vect.h)
I think there could be some arch specific thing that prevents it from 
being generic.
> # Currently ./lib/librte_eal/common/include/generic/rte_vect.h has
> defintion for rte_vXsY_t for vector representation, would that
> be enough for public API? Do we need to new type?

Definitions for rte_vXsY_t are almost the same as the compiler's
__m[128,256,512]i apart from alignment.
Union types such as rte_zmm_t are very useful because of the ability to
access parts of a wide vector register with an arbitrary granularity.
For example, some old compilers don't support the
_mm512_set_epi8()/_mm512_set_epi16() intrinsics, so accessing ".u8[]" or
".u16[]" solves the problem.
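
For instance, a minimal sketch of that pattern (the constant and its use
are illustrative; patch 5 builds its bswap shuffle constants the same way
through the .u8 member):

/* 16-bit-granular constant built through the union; works even where the
 * compiler lacks _mm512_set_epi16(). seq16.z is a ready-to-use __m512i. */
static const rte_zmm_t seq16 = {
        .u16 = {  0,  1,  2,  3,  4,  5,  6,  7,
                  8,  9, 10, 11, 12, 13, 14, 15,
                 16, 17, 18, 19, 20, 21, 22, 23,
                 24, 25, 26, 27, 28, 29, 30, 31 },
};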

>
>
> [1]
> rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> uint32_t defv)
>
>
>> +
>> +#endif /* __AVX512F__ */
>> +
>>   #ifdef __cplusplus
>>   }
>>   #endif
>> --
>> 2.7.4
>>
-- 
Regards,
Vladimir


* Re: [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit
  2020-03-10 14:44     ` Medvedkin, Vladimir
@ 2020-03-20  8:23       ` Jerin Jacob
  0 siblings, 0 replies; 199+ messages in thread
From: Jerin Jacob @ 2020-03-20  8:23 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: dpdk-dev, Ananyev, Konstantin, Richardson, Bruce, Gavin Hu

On Tue, Mar 10, 2020 at 8:14 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
>
> Hi Jerin,

Hi Vladimir,


>
> Are we missing __attribute__((aligned(64))) here?
>
> Agree. While modern compilers align __m512i by default, some old could failure to align. Please correct me if I'm wrong.

Yes.

>
> +} rte_zmm_t;
>
> IMO, Due to legacy reason, we have selected  rte_xmm_t, rte_ymm_t for
> 128 and 256 operations in public APIs[1]
>
> As for me, since these functions are inlined, prototype should be changed to uint32_t ip[4] instead of passing vector type as an argument.

OK. Makes sense.

> # Not sure where xmm_t and ymm_t and new zmm_t come from? Is this name
> x86 arch-specific?
>
> Yes, that's why they are in arch/x86/rte_vect.h

See the last comment.

>
> If so,
> why not give the more generic name rte_512i_t or something?
> # Currently, In every arch file, we are repeating the definition for
> rte_xmm_t, Why not make, this generic definition
> in common file. ie.  rte_zmm_t or rte_512i_t definition in common
> file(./lib/librte_eal/common/include/generic/rte_vect.h)
>
> I think there could be some arch specific thing that prevents it from being generic.
>
> # Currently ./lib/librte_eal/common/include/generic/rte_vect.h has
> defintion for rte_vXsY_t for vector representation, would that
> be enough for public API? Do we need to new type?
>
> Definitions for rte_vXsY_tare almost the same as compiler's __m[128,256,512]i apart from alignment.
> Union types such as rte_zmm_t are very useful because of the ability to access parts of a wide vector register with an arbitrary granularity. For example, some old compiler don't support _mm512_set_epi8()/_mm512_set_epi16() intrinsics, so accessing ".u8[]" of ".u16[]" solves the problem.

Yes. We are on the same page.

I think the only difference in thought is that the name of the x86-specific
definition (rte_zmm_t) should reflect
that it is internal or arch-specific. Earlier APIs
such as rte_lpm_lookupx4 have leaked
the xmm_t definition into the public API.
To avoid that danger, please make rte_zmm_t internal/arch-specific,
something like __rte_x86_zmm_t,
so that the name denotes it is not a public symbol.

* Re: [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-04-01  5:47   ` Ray Kinsella
  2020-04-01 18:48     ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Ray Kinsella @ 2020-04-01  5:47 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev; +Cc: konstantin.ananyev, bruce.richardson

Hi Vladimir,

On 09/03/2020 12:43, Vladimir Medvedkin wrote:
> Add type argument to dir24_8_get_lookup_fn()
> Now it supports 3 different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
> 
> Add new rte_fib_set_lookup_fn() - user can change lookup
> function type runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
>  lib/librte_fib/dir24_8.h           |  2 +-
>  lib/librte_fib/rte_fib.c           | 20 +++++++++++++++++++-
>  lib/librte_fib/rte_fib.h           | 22 ++++++++++++++++++++++
>  lib/librte_fib/rte_fib_version.map |  1 +
>  5 files changed, 63 insertions(+), 14 deletions(-)
> 
> diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
> index c9dce3c..825d061 100644
> --- a/lib/librte_fib/dir24_8.c
> +++ b/lib/librte_fib/dir24_8.c
> @@ -45,13 +45,6 @@ struct dir24_8_tbl {
>  
>  #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
>  
> -enum lookup_type {
> -	MACRO,
> -	INLINE,
> -	UNI
> -};
> -enum lookup_type test_lookup = MACRO;
> -
>  static inline void *
>  get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
>  {
> @@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
>  }
>  
>  rte_fib_lookup_fn_t
> -dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
> +dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>  {
> -	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
> +	enum rte_fib_dir24_8_nh_sz nh_sz;
> +	struct dir24_8_tbl *dp = p;
>  
> -	if (test_lookup == MACRO) {
> +	if (dp == NULL)
> +		return NULL;
> +
> +	nh_sz = dp->nh_sz;
> +
> +	switch (type) {
> +	case RTE_FIB_DIR24_8_SCALAR_MACRO:
>  		switch (nh_sz) {
>  		case RTE_FIB_DIR24_8_1B:
>  			return dir24_8_lookup_bulk_1b;
> @@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
>  			return dir24_8_lookup_bulk_4b;
>  		case RTE_FIB_DIR24_8_8B:
>  			return dir24_8_lookup_bulk_8b;
> +		default:
> +			return NULL;
>  		}
> -	} else if (test_lookup == INLINE) {
> +	case RTE_FIB_DIR24_8_SCALAR_INLINE:
>  		switch (nh_sz) {
>  		case RTE_FIB_DIR24_8_1B:
>  			return dir24_8_lookup_bulk_0;
> @@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
>  			return dir24_8_lookup_bulk_2;
>  		case RTE_FIB_DIR24_8_8B:
>  			return dir24_8_lookup_bulk_3;
> +		default:
> +			return NULL;
>  		}
> -	} else
> +	case RTE_FIB_DIR24_8_SCALAR_UNI:
>  		return dir24_8_lookup_bulk_uni;
> +	default:
> +		return NULL;
> +	}
> +
>  	return NULL;
>  }
>  
> diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
> index 1ec437c..53c5dd2 100644
> --- a/lib/librte_fib/dir24_8.h
> +++ b/lib/librte_fib/dir24_8.h
> @@ -22,7 +22,7 @@ void
>  dir24_8_free(void *p);
>  
>  rte_fib_lookup_fn_t
> -dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
> +dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
>  
>  int
>  dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
> diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
> index e090808..59120b5 100644
> --- a/lib/librte_fib/rte_fib.c
> +++ b/lib/librte_fib/rte_fib.c
> @@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
>  		fib->dp = dir24_8_create(dp_name, socket_id, conf);
>  		if (fib->dp == NULL)
>  			return -rte_errno;
> -		fib->lookup = dir24_8_get_lookup_fn(conf);
> +		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
> +			RTE_FIB_DIR24_8_SCALAR_MACRO);
>  		fib->modify = dir24_8_modify;
>  		return 0;
>  	default:
> @@ -317,3 +318,20 @@ rte_fib_get_rib(struct rte_fib *fib)
>  {
>  	return (fib == NULL) ? NULL : fib->rib;
>  }
> +
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib, int type)
> +{
> +	rte_fib_lookup_fn_t fn;
> +
> +	switch (fib->type) {
> +	case RTE_FIB_DIR24_8:
> +		fn = dir24_8_get_lookup_fn(fib->dp, type);
> +		if (fn == NULL)
> +			return -EINVAL;
> +		fib->lookup = fn;
> +		return 0;
> +	default:
> +		return -EINVAL;
> +	}
> +}
> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
> index d06c5ef..0e98775 100644
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> @@ -47,6 +47,12 @@ enum rte_fib_dir24_8_nh_sz {
>  	RTE_FIB_DIR24_8_8B
>  };
Do we provide the user guidance anywhere on the merits/advantages of each option?

> +enum rte_fib_dir24_8_lookup_type {
> +	RTE_FIB_DIR24_8_SCALAR_MACRO,
> +	RTE_FIB_DIR24_8_SCALAR_INLINE,
> +	RTE_FIB_DIR24_8_SCALAR_UNI
> +};
> +
>  /** FIB configuration structure */
>  struct rte_fib_conf {
>  	enum rte_fib_type type; /**< Type of FIB struct */
> @@ -185,4 +191,20 @@ __rte_experimental
>  struct rte_rib *
>  rte_fib_get_rib(struct rte_fib *fib);
>  
> +/**
> + * Set lookup function based on type
> + *
> + * @param fib
> + *   FIB object handle
> + * @param type
> + *   type of lookup function
> + *
> + * @return
> + *    -EINVAL on failure
> + *    0 on success
> + */
> +__rte_experimental
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib, int type);
> +
>  #endif /* _RTE_FIB_H_ */
> diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
> index 9527417..216af66 100644
> --- a/lib/librte_fib/rte_fib_version.map
> +++ b/lib/librte_fib/rte_fib_version.map
> @@ -9,6 +9,7 @@ EXPERIMENTAL {
>  	rte_fib_lookup_bulk;
>  	rte_fib_get_dp;
>  	rte_fib_get_rib;
> +	rte_fib_set_lookup_fn;
>  
>  	rte_fib6_add;
>  	rte_fib6_create;
> 

* Re: [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-04-01  5:54   ` Ray Kinsella
  0 siblings, 0 replies; 199+ messages in thread
From: Ray Kinsella @ 2020-04-01  5:54 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev; +Cc: konstantin.ananyev, bruce.richardson



On 09/03/2020 12:43, Vladimir Medvedkin wrote:
> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/dir24_8.c        |  71 ++++++++++++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h | 116 ++++++++++++++++++++++++++++++++++++++++
>  lib/librte_fib/rte_fib.h        |   3 +-
>  3 files changed, 189 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
> 
> diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
> index 825d061..9f51dfc 100644
> --- a/lib/librte_fib/dir24_8.c
> +++ b/lib/librte_fib/dir24_8.c
> @@ -245,6 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
>  	}
>  }
>  
> +#ifdef __AVX512F__
> +
> +#include "dir24_8_avx512.h"
> +
> +static void
> +rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint8_t));
> +
> +	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +static void
> +rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint16_t));
> +
> +	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +static void
> +rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint32_t));
> +
> +	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +static void
> +rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 8); i++)
> +		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
> +
> +	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
> +}
> +
> +#endif /* __AVX512F__ */
> +
>  rte_fib_lookup_fn_t
>  dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>  {
> @@ -285,6 +341,21 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>  		}
>  	case RTE_FIB_DIR24_8_SCALAR_UNI:
>  		return dir24_8_lookup_bulk_uni;
> +#ifdef __AVX512F__
> +	case RTE_FIB_DIR24_8_VECTOR:
> +		switch (nh_sz) {
> +		case RTE_FIB_DIR24_8_1B:
> +			return rte_dir24_8_vec_lookup_bulk_1b;
> +		case RTE_FIB_DIR24_8_2B:
> +			return rte_dir24_8_vec_lookup_bulk_2b;
> +		case RTE_FIB_DIR24_8_4B:
> +			return rte_dir24_8_vec_lookup_bulk_4b;
> +		case RTE_FIB_DIR24_8_8B:
> +			return rte_dir24_8_vec_lookup_bulk_8b;
> +		default:
> +			return NULL;
> +		}
> +#endif
>  	default:
>  		return NULL;
>  	}
> diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
> new file mode 100644
> index 0000000..3b6680c
> --- /dev/null
> +++ b/lib/librte_fib/dir24_8_avx512.h
> @@ -0,0 +1,116 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#ifndef _DIR248_AVX512_H_
> +#define _DIR248_AVX512_H_
> +
> +#include <rte_vect.h>
> +
> +static __rte_always_inline void
> +dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, int size)
> +{
> +	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
> +	__mmask16 msk_ext;
> +	__mmask16 exp_msk = 0x5555;
> +	__m512i ip_vec, idxes, res, bytes;
> +	const __m512i zero = _mm512_set1_epi32(0);
> +	const __m512i lsb = _mm512_set1_epi32(1);
> +	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
> +	__m512i tmp1, tmp2, res_msk;
> +	__m256i tmp256;
> +	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
> +	if (size == sizeof(uint8_t))
> +		res_msk = _mm512_set1_epi32(UINT8_MAX);
> +	else if (size == sizeof(uint16_t))
> +		res_msk = _mm512_set1_epi32(UINT16_MAX);
> +
> +	ip_vec = _mm512_loadu_si512(ips);
> +	/* mask 24 most significant bits */
> +	idxes = _mm512_srli_epi32(ip_vec, 8);
> +
> +	/**
> +	 * lookup in tbl24
> +	 * Put it inside branch to make compiller happy with -O0
> +	 */

Typo in the comment above: "compiller" should be "compiler".
Why not simply

_mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, size/sizeof(uint8_t));

I presume the compiler didn't like it for some reason?
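
A plausible reason (an assumption here, not something the patch states) is
that the scale argument of the gather intrinsics has to be a compile-time
constant, since it is encoded as an instruction immediate. With the wrapper
inlined at -O1 and above the constant propagates from the caller, but at -O0
it does not, so a variable scale fails to build. A minimal sketch of the
constraint, using a hypothetical gather_tbl24() helper:

#include <stdint.h>
#include <immintrin.h>

static inline __m512i
gather_tbl24(const int *tbl24, __m512i idxes, int size)
{
	/* fine: the scale operand is a literal 1/2/4, a valid immediate */
	if (size == sizeof(uint8_t))
		return _mm512_i32gather_epi32(idxes, tbl24, 1);
	else if (size == sizeof(uint16_t))
		return _mm512_i32gather_epi32(idxes, tbl24, 2);
	/*
	 * _mm512_i32gather_epi32(idxes, tbl24, size) would not build at -O0:
	 * "size" is not a compile-time constant at this point, and the scale
	 * has to be one of 1, 2, 4 or 8 known at compile time.
	 */
	return _mm512_i32gather_epi32(idxes, tbl24, 4);
}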

> +	if (size == sizeof(uint8_t)) {
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
> +		res = _mm512_and_epi32(res, res_msk);
> +	} else if (size == sizeof(uint16_t)) {
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
> +		res = _mm512_and_epi32(res, res_msk);
> +	} else
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
> +
> +	/* get extended entries indexes */
> +	msk_ext = _mm512_test_epi32_mask(res, lsb);
> +
> +	if (msk_ext != 0) {
> +		idxes = _mm512_srli_epi32(res, 1);
> +		idxes = _mm512_slli_epi32(idxes, 8);
> +		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
> +		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
> +		if (size == sizeof(uint8_t)) {
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 1);
> +			idxes = _mm512_and_epi32(idxes, res_msk);
> +		} else if (size == sizeof(uint16_t)) {
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 2);
> +			idxes = _mm512_and_epi32(idxes, res_msk);
> +		} else
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 4);
> +
> +		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
> +	}
> +
> +	res = _mm512_srli_epi32(res, 1);
> +	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
> +	tmp256 = _mm512_extracti32x8_epi32(res, 1);
> +	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
> +		_mm512_castsi256_si512(tmp256));
> +	_mm512_storeu_si512(next_hops, tmp1);
> +	_mm512_storeu_si512(next_hops + 8, tmp2);
> +}
> +
> +static __rte_always_inline void
> +dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops)
> +{
> +	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
> +	const __m512i zero = _mm512_set1_epi32(0);
> +	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
> +	const __m512i lsb = _mm512_set1_epi64(1);
> +	__m512i res, idxes, bytes;
> +	__m256i idxes_256, ip_vec;
> +	__mmask8 msk_ext;
> +
> +	ip_vec = _mm256_loadu_si256((const void *)ips);
> +	/* mask 24 most significant bits */
> +	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
> +
> +	/* lookup in tbl24 */
> +	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
> +
> +	/* get extended entries indexes */
> +	msk_ext = _mm512_test_epi64_mask(res, lsb);
> +
> +	if (msk_ext != 0) {
> +		bytes = _mm512_cvtepi32_epi64(ip_vec);
> +		idxes = _mm512_srli_epi64(res, 1);
> +		idxes = _mm512_slli_epi64(idxes, 8);
> +		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
> +		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
> +		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
> +			(const void *)dp->tbl8, 8);
> +
> +		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
> +	}
> +
> +	res = _mm512_srli_epi64(res, 1);
> +	_mm512_storeu_si512(next_hops, res);
> +}
> +
> +#endif /* _DIR248_AVX512_H_ */
> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
> index 0e98775..89d0f12 100644
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> @@ -50,7 +50,8 @@ enum rte_fib_dir24_8_nh_sz {
>  enum rte_fib_dir24_8_lookup_type {
>  	RTE_FIB_DIR24_8_SCALAR_MACRO,
>  	RTE_FIB_DIR24_8_SCALAR_INLINE,
> -	RTE_FIB_DIR24_8_SCALAR_UNI
> +	RTE_FIB_DIR24_8_SCALAR_UNI,
> +	RTE_FIB_DIR24_8_VECTOR
>  };
>  
>  /** FIB configuration structure */
> 

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable
  2020-04-01  5:47   ` Ray Kinsella
@ 2020-04-01 18:48     ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-04-01 18:48 UTC (permalink / raw)
  To: Ray Kinsella, dev; +Cc: Ananyev, Konstantin, Richardson, Bruce

Hi Ray,


-----Original Message-----
From: Ray Kinsella <mdr@ashroe.eu> 
Sent: Wednesday, April 1, 2020 6:48 AM
To: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>; dev@dpdk.org
Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable

Hi Vladimir,

On 09/03/2020 12:43, Vladimir Medvedkin wrote:
> Add type argument to dir24_8_get_lookup_fn() Now it supports 3 
> different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
> 
> Add new rte_fib_set_lookup_fn() - user can change lookup function type 
> runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
>  lib/librte_fib/dir24_8.h           |  2 +-
>  lib/librte_fib/rte_fib.c           | 20 +++++++++++++++++++-
>  lib/librte_fib/rte_fib.h           | 22 ++++++++++++++++++++++
>  lib/librte_fib/rte_fib_version.map |  1 +
>  5 files changed, 63 insertions(+), 14 deletions(-)
> 
> diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c index 
> c9dce3c..825d061 100644
> --- a/lib/librte_fib/dir24_8.c
> +++ b/lib/librte_fib/dir24_8.c
> @@ -45,13 +45,6 @@ struct dir24_8_tbl {
>  
>  #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
>  
> -enum lookup_type {
> -	MACRO,
> -	INLINE,
> -	UNI
> -};
> -enum lookup_type test_lookup = MACRO;
> -
>  static inline void *
>  get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)  { @@ 
> -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t 
> *ips,  }
>  
>  rte_fib_lookup_fn_t
> -dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
> +dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>  {
> -	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
> +	enum rte_fib_dir24_8_nh_sz nh_sz;
> +	struct dir24_8_tbl *dp = p;
>  
> -	if (test_lookup == MACRO) {
> +	if (dp == NULL)
> +		return NULL;
> +
> +	nh_sz = dp->nh_sz;
> +
> +	switch (type) {
> +	case RTE_FIB_DIR24_8_SCALAR_MACRO:
>  		switch (nh_sz) {
>  		case RTE_FIB_DIR24_8_1B:
>  			return dir24_8_lookup_bulk_1b;
> @@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
>  			return dir24_8_lookup_bulk_4b;
>  		case RTE_FIB_DIR24_8_8B:
>  			return dir24_8_lookup_bulk_8b;
> +		default:
> +			return NULL;
>  		}
> -	} else if (test_lookup == INLINE) {
> +	case RTE_FIB_DIR24_8_SCALAR_INLINE:
>  		switch (nh_sz) {
>  		case RTE_FIB_DIR24_8_1B:
>  			return dir24_8_lookup_bulk_0;
> @@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
>  			return dir24_8_lookup_bulk_2;
>  		case RTE_FIB_DIR24_8_8B:
>  			return dir24_8_lookup_bulk_3;
> +		default:
> +			return NULL;
>  		}
> -	} else
> +	case RTE_FIB_DIR24_8_SCALAR_UNI:
>  		return dir24_8_lookup_bulk_uni;
> +	default:
> +		return NULL;
> +	}
> +
>  	return NULL;
>  }
>  
> diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h index 
> 1ec437c..53c5dd2 100644
> --- a/lib/librte_fib/dir24_8.h
> +++ b/lib/librte_fib/dir24_8.h
> @@ -22,7 +22,7 @@ void
>  dir24_8_free(void *p);
>  
>  rte_fib_lookup_fn_t
> -dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
> +dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type 
> +type);
>  
>  int
>  dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth, diff 
> --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c index 
> e090808..59120b5 100644
> --- a/lib/librte_fib/rte_fib.c
> +++ b/lib/librte_fib/rte_fib.c
> @@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
>  		fib->dp = dir24_8_create(dp_name, socket_id, conf);
>  		if (fib->dp == NULL)
>  			return -rte_errno;
> -		fib->lookup = dir24_8_get_lookup_fn(conf);
> +		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
> +			RTE_FIB_DIR24_8_SCALAR_MACRO);
>  		fib->modify = dir24_8_modify;
>  		return 0;
>  	default:
> @@ -317,3 +318,20 @@ rte_fib_get_rib(struct rte_fib *fib)  {
>  	return (fib == NULL) ? NULL : fib->rib;  }
> +
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib, int type) {
> +	rte_fib_lookup_fn_t fn;
> +
> +	switch (fib->type) {
> +	case RTE_FIB_DIR24_8:
> +		fn = dir24_8_get_lookup_fn(fib->dp, type);
> +		if (fn == NULL)
> +			return -EINVAL;
> +		fib->lookup = fn;
> +		return 0;
> +	default:
> +		return -EINVAL;
> +	}
> +}
> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h index 
> d06c5ef..0e98775 100644
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> @@ -47,6 +47,12 @@ enum rte_fib_dir24_8_nh_sz {
>  	RTE_FIB_DIR24_8_8B
>  };
Do we provide the user guidance anywhere on the merits/advantages of each option?

No, we don't at the moment. I covered this in my slides about FIB. Adding documentation for this library is on my to-do list, and I will cover this option there.
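
For reference, a minimal usage sketch of the new call (the fib handle and the
surrounding error handling are assumed; only rte_fib_set_lookup_fn() and the
lookup-type enum come from this patch):

#include <rte_fib.h>

/* "fib" is assumed to be a DIR24_8 FIB created earlier with rte_fib_create() */
static int
select_lookup(struct rte_fib *fib)
{
	/* switch the bulk lookup implementation at run time */
	int ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_INLINE);

	if (ret < 0)	/* -EINVAL: type not supported for this FIB */
		return ret;

	return 0;
}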

> +enum rte_fib_dir24_8_lookup_type {
> +	RTE_FIB_DIR24_8_SCALAR_MACRO,
> +	RTE_FIB_DIR24_8_SCALAR_INLINE,
> +	RTE_FIB_DIR24_8_SCALAR_UNI
> +};
> +
>  /** FIB configuration structure */
>  struct rte_fib_conf {
>  	enum rte_fib_type type; /**< Type of FIB struct */ @@ -185,4 +191,20 
> @@ __rte_experimental  struct rte_rib *  rte_fib_get_rib(struct 
> rte_fib *fib);
>  
> +/**
> + * Set lookup function based on type
> + *
> + * @param fib
> + *   FIB object handle
> + * @param type
> + *   type of lookup function
> + *
> + * @return
> + *    -EINVAL on failure
> + *    0 on success
> + */
> +__rte_experimental
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib, int type);
> +
>  #endif /* _RTE_FIB_H_ */
> diff --git a/lib/librte_fib/rte_fib_version.map 
> b/lib/librte_fib/rte_fib_version.map
> index 9527417..216af66 100644
> --- a/lib/librte_fib/rte_fib_version.map
> +++ b/lib/librte_fib/rte_fib_version.map
> @@ -9,6 +9,7 @@ EXPERIMENTAL {
>  	rte_fib_lookup_bulk;
>  	rte_fib_get_dp;
>  	rte_fib_get_rib;
> +	rte_fib_set_lookup_fn;
>  
>  	rte_fib6_add;
>  	rte_fib6_create;
> 

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (5 preceding siblings ...)
  2020-03-09 12:43 ` [dpdk-dev] [PATCH 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
@ 2020-04-16  9:55 ` Thomas Monjalon
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Thomas Monjalon @ 2020-04-16  9:55 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, konstantin.ananyev, bruce.richardson, john.mcnamara, david.marchand

09/03/2020 13:43, Vladimir Medvedkin:
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.

If I understand correctly, this series is postponed to 20.08.
Vladimir, I think it would be good to focus on having your rte_hash
patches completed, reviewed and merged before 20.05-rc1.




^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] fib: implement AVX512 vector lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (6 preceding siblings ...)
  2020-04-16  9:55 ` [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Thomas Monjalon
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                     ` (8 more replies)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
                   ` (5 subsequent siblings)
  13 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (6):
  eal: introduce zmm type for AVX 512-bit
  fib: make lookup function type configurable
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 ++++++++-
 lib/librte_eal/x86/include/rte_vect.h |  20 +++
 lib/librte_fib/Makefile               |  23 ++++
 lib/librte_fib/dir24_8.c              | 106 ++++++++++++++--
 lib/librte_fib/dir24_8.h              |   2 +-
 lib/librte_fib/dir24_8_avx512.h       | 116 +++++++++++++++++
 lib/librte_fib/meson.build            |  13 ++
 lib/librte_fib/rte_fib.c              |  20 ++-
 lib/librte_fib/rte_fib.h              |  23 ++++
 lib/librte_fib/rte_fib6.c             |  19 ++-
 lib/librte_fib/rte_fib6.h             |  21 ++++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 |  85 +++++++++++--
 lib/librte_fib/trie.h                 |   2 +-
 lib/librte_fib/trie_avx512.h          | 231 ++++++++++++++++++++++++++++++++++
 15 files changed, 711 insertions(+), 30 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] eal: introduce zmm type for AVX 512-bit
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (7 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 2/6] fib: make lookup function type configurable Vladimir Medvedkin
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

New data type to manipulate 512 bit AVX values.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a607..ffe4f7d 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -90,6 +90,26 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+typedef __m512i __x86_zmm_t;
+
+#define	ZMM_SIZE	(sizeof(__x86_zmm_t))
+#define	ZMM_MASK	(ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm  {
+	__x86_zmm_t	 z;
+	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[ZMM_SIZE / sizeof(double)];
+} __attribute__((__aligned__(ZMM_SIZE)))  __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] fib: make lookup function type configurable
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (8 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add type argument to dir24_8_get_lookup_fn()
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - user can change lookup
function type runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 22 ++++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 63 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..825d061 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..59120b5 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,20 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index af3bbf0..db35685 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -51,6 +51,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -189,6 +195,22 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (9 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 2/6] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-14 12:40   ` Bruce Richardson
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for DIR24_8 algorithm using
AVX512 instruction set

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile         |  14 +++++
 lib/librte_fib/dir24_8.c        |  74 +++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h | 116 ++++++++++++++++++++++++++++++++++++++++
 lib/librte_fib/meson.build      |  10 ++++
 lib/librte_fib/rte_fib.h        |   3 +-
 5 files changed, 216 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a49..0b6c825 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -20,3 +20,17 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
+
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		CFLAGS_dir24_8.o += -mavx512f
+		CFLAGS_dir24_8.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_AVX512_SUPPORT
+	endif
+endif
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061..443873c 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -245,6 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+#ifdef CC_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+static void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+static void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
+
+#endif /* CC_AVX512_SUPPORT */
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
@@ -285,6 +341,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..e3792e0
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+#include <rte_vect.h>
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..86b1d4a 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,13 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if dpdk_conf.has('RTE_ARCH_X86')
+	if cc.has_argument('-mavx512f')
+		cflags += '-DCC_AVX512_SUPPORT'
+		cflags += '-mavx512f'
+	endif
+	if cc.has_argument('-mavx512dq')
+		cflags += '-mavx512dq'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index db35685..2919d13 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -54,7 +54,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] fib6: make lookup function type configurable
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (10 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add type argument to trie_get_lookup_fn()
Now it only supports RTE_FIB6_TRIE_SCALAR

Add new rte_fib6_set_lookup_fn() - user can change lookup
function type runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 19 ++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 20 ++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..9eff712 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,20 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 66c71c8..b70369a 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -52,6 +52,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -194,6 +198,22 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..63c519a 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] fib6: introduce AVX512 lookup
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (11 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile      |   9 ++
 lib/librte_fib/meson.build   |   3 +
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  60 +++++++++++
 lib/librte_fib/trie_avx512.h | 231 +++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 305 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 0b6c825..1561493 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -27,10 +27,19 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		CFLAGS_dir24_8.o += -mavx512f
 		CFLAGS_dir24_8.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			CFLAGS_trie.o += -mavx512f
+			CFLAGS_trie.o += -mavx512dq
+			CFLAGS_trie.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_AVX512_SUPPORT
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 86b1d4a..4f20629 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -14,4 +14,7 @@ if dpdk_conf.has('RTE_ARCH_X86')
 	if cc.has_argument('-mavx512dq')
 		cflags += '-mavx512dq'
 	endif
+	if cc.has_argument('-mavx512bw')
+		cflags += '-mavx512bw'
+	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index b70369a..c55efdf 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,7 +53,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a..564d3a0 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -146,6 +146,51 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+#ifdef CC_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+static void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+static void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+static void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
+
+#endif /* CC_AVX512_SUPPORT */
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
@@ -169,6 +214,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif /* CC_AVX512_SUPPORT */
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..583bda8
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+#include <rte_vect.h>
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v2 6/6] app/testfib: add support for different lookup functions
  2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
                   ` (12 preceding siblings ...)
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-05-14 12:28 ` Vladimir Medvedkin
  13 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-14 12:28 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 6e80d65..b72e8c4 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -636,7 +638,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -679,7 +685,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -767,6 +773,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -844,6 +866,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1023,6 +1063,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-05-14 12:40   ` Bruce Richardson
  2020-05-14 12:43     ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Bruce Richardson @ 2020-05-14 12:40 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, konstantin.ananyev

On Thu, May 14, 2020 at 01:28:27PM +0100, Vladimir Medvedkin wrote:
> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
<snip>
> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
> index 771828f..86b1d4a 100644
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> @@ -5,3 +5,13 @@
>  sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>  headers = files('rte_fib.h', 'rte_fib6.h')
>  deps += ['rib']
> +
> +if dpdk_conf.has('RTE_ARCH_X86')
> +	if cc.has_argument('-mavx512f')
> +		cflags += '-DCC_AVX512_SUPPORT'
> +		cflags += '-mavx512f'
> +	endif
> +	if cc.has_argument('-mavx512dq')
> +		cflags += '-mavx512dq'
> +	endif
> +endif

This will likely break the FIB library for systems which don't have AVX-512
support, since you are enabling AVX-512 for the whole library, meaning that
the compiler can put AVX-512 instructions anywhere it wants in the library.
You need to separate out the AVX-512 code into a separate file, and compile
that file - and only that file - for AVX-512. An example of how this should
be done, can be seen in the AVX support in the i40e net driver.
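
A rough sketch of that layout (names here are illustrative, mirroring this
series rather than the i40e driver; the lookup-function typedef comes from
rte_fib.h): only the *_avx512.c object gets -mavx512f/-mavx512dq, while the
generic file sees nothing but a prototype plus a run-time dispatch:

#include <stdint.h>
#include <rte_cpuflags.h>
#include <rte_fib.h>

/* implemented in dir24_8_avx512.c, the only object built with -mavx512f */
void rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
	uint64_t *next_hops, unsigned int n);

static rte_fib_lookup_fn_t
pick_vector_lookup(void)
{
#ifdef CC_AVX512_SUPPORT
	/* present only when the toolchain could build the AVX-512 object */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) > 0)
		return rte_dir24_8_vec_lookup_bulk_4b;
#endif
	return NULL;	/* caller keeps the scalar lookup */
}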

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup
  2020-05-14 12:40   ` Bruce Richardson
@ 2020-05-14 12:43     ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-05-14 12:43 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, konstantin.ananyev

Hi Bruce,

On 14/05/2020 13:40, Bruce Richardson wrote:
> On Thu, May 14, 2020 at 01:28:27PM +0100, Vladimir Medvedkin wrote:
>> Add new lookup implementation for DIR24_8 algorithm using
>> AVX512 instruction set
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
> <snip>
>> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
>> index 771828f..86b1d4a 100644
>> --- a/lib/librte_fib/meson.build
>> +++ b/lib/librte_fib/meson.build
>> @@ -5,3 +5,13 @@
>>   sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>>   headers = files('rte_fib.h', 'rte_fib6.h')
>>   deps += ['rib']
>> +
>> +if dpdk_conf.has('RTE_ARCH_X86')
>> +	if cc.has_argument('-mavx512f')
>> +		cflags += '-DCC_AVX512_SUPPORT'
>> +		cflags += '-mavx512f'
>> +	endif
>> +	if cc.has_argument('-mavx512dq')
>> +		cflags += '-mavx512dq'
>> +	endif
>> +endif
> This will likely break the FIB library for systems which don't have AVX-512
> support, since you are enabling AVX-512 for the whole library, meaning that
> the compiler can put AVX-512 instructions anywhere it wants in the library.
> You need to separate out the AVX-512 code into a separate file, and compile
> that file - and only that file - for AVX-512. An example of how this should
> be done, can be seen in the AVX support in the i40e net driver.


Ah, yes, you're right. I will rework this part. Thanks!


>
> Regards,
> /Bruce

-- 
Regards,
Vladimir


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 0/8] fib: implement AVX512 vector lookup
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
@ 2020-05-19 12:12   ` Vladimir Medvedkin
  2020-05-19 12:23     ` David Marchand
                       ` (10 more replies)
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
                     ` (7 subsequent siblings)
  8 siblings, 11 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:12 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal: introduce zmm type for AVX 512-bit
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 ++++++-
 lib/librte_eal/x86/include/rte_vect.h |  20 +++
 lib/librte_fib/Makefile               |  24 +++
 lib/librte_fib/dir24_8.c              | 281 ++++++----------------------------
 lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c       | 165 ++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h       |  24 +++
 lib/librte_fib/meson.build            |  20 +++
 lib/librte_fib/rte_fib.c              |  20 ++-
 lib/librte_fib/rte_fib.h              |  23 +++
 lib/librte_fib/rte_fib6.c             |  19 ++-
 lib/librte_fib/rte_fib6.h             |  21 +++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 | 161 ++++---------------
 lib/librte_fib/trie.h                 | 119 +++++++++++++-
 lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h          |  20 +++
 17 files changed, 1100 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
@ 2020-05-19 12:12   ` Vladimir Medvedkin
  2020-06-24 13:14     ` Ananyev, Konstantin
  2020-07-06 17:28     ` Thomas Monjalon
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                     ` (6 subsequent siblings)
  8 siblings, 2 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:12 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

New data type to manipulate 512 bit AVX values.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a607..ffe4f7d 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -90,6 +90,26 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+typedef __m512i __x86_zmm_t;
+
+#define	ZMM_SIZE	(sizeof(__x86_zmm_t))
+#define	ZMM_MASK	(ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm  {
+	__x86_zmm_t	 z;
+	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[ZMM_SIZE / sizeof(double)];
+} __attribute__((__aligned__(ZMM_SIZE)))  __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 2/8] fib: make lookup function type configurable
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-05-19 12:12   ` Vladimir Medvedkin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:12 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() so the user can change the lookup
function type at runtime.
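
A minimal usage sketch (assuming a FIB object already created by the
application; the helper name app_set_scalar_lookup() is hypothetical):

#include <rte_fib.h>

/* Hypothetical helper: select the macro-based scalar lookup at runtime.
 * rte_fib_set_lookup_fn() returns 0 on success and -EINVAL if the
 * requested type is not available for this FIB.
 */
static int
app_set_scalar_lookup(struct rte_fib *fib)
{
	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}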

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 22 ++++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 63 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..825d061 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..59120b5 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,20 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index af3bbf0..db35685 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -51,6 +51,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -189,6 +195,22 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib, int type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (2 preceding siblings ...)
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-05-19 12:12   ` Vladimir Medvedkin
  2020-07-08 11:23     ` Ananyev, Konstantin
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                     ` (4 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:12 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move the dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061..9d74653 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (3 preceding siblings ...)
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-05-19 12:12   ` Vladimir Medvedkin
  2020-06-24 13:18     ` Ananyev, Konstantin
                       ` (2 more replies)
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                     ` (3 subsequent siblings)
  8 siblings, 3 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:12 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for DIR24_8 algorithm using
AVX512 instruction set
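
A minimal selection sketch (assumptions: the FIB handle comes from the
application and the helper name app_select_lookup() is hypothetical):
prefer the AVX512 lookup and keep a scalar one when it is unavailable.

#include <rte_fib.h>

/* Hypothetical helper: try the AVX512 lookup first. The call returns
 * -EINVAL when the vector code was not compiled in or the CPU lacks
 * AVX512F; in that case fall back to the scalar macro lookup.
 */
static void
app_select_lookup(struct rte_fib *fib)
{
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR) != 0)
		rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}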

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile         |  14 ++++
 lib/librte_fib/dir24_8.c        |  24 ++++++
 lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h |  24 ++++++
 lib/librte_fib/meson.build      |  11 +++
 lib/librte_fib/rte_fib.h        |   3 +-
 6 files changed, 240 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a49..3958da1 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
+		CFLAGS_dir24_8_avx512.o += -mavx512f
+		CFLAGS_dir24_8_avx512.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+	endif
+endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653..0a1c53f 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0963f3c 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,14 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
+	if cc.has_argument('-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+			'dir24_8_avx512.c',
+			dependencies: static_rte_eal,
+			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index db35685..2919d13 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -54,7 +54,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 5/8] fib6: make lookup function type configurable
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (4 preceding siblings ...)
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-05-19 12:13   ` Vladimir Medvedkin
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:13 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() so the user can change the lookup
function type at runtime.
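
A minimal usage sketch (assuming a FIB6 object already created by the
application; the helper name app_set_trie_scalar() is hypothetical):

#include <rte_fib6.h>

/* Hypothetical helper: explicitly select the scalar trie lookup.
 * At this point in the series any type other than RTE_FIB6_TRIE_SCALAR
 * makes rte_fib6_set_lookup_fn() return -EINVAL.
 */
static int
app_set_trie_scalar(struct rte_fib6 *fib)
{
	return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}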

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 19 ++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 20 ++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..9eff712 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,20 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 66c71c8..b70369a 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -52,6 +52,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -194,6 +198,22 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib, int type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..63c519a 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (5 preceding siblings ...)
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-05-19 12:13   ` Vladimir Medvedkin
  2020-07-08 11:27     ` Ananyev, Konstantin
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:13 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move the trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a..136e938 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (6 preceding siblings ...)
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-05-19 12:13   ` Vladimir Medvedkin
  2020-07-08 12:23     ` Ananyev, Konstantin
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:13 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set
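
A minimal selection sketch, parallel to the dir24_8 case (the FIB6
handle is assumed to exist and app_select_trie_lookup() is a
hypothetical helper name):

#include <rte_fib6.h>

/* Hypothetical helper: prefer the AVX512 trie lookup; keep the scalar
 * lookup when it is rejected (no AVX512F at runtime, or the vector
 * code was not compiled in).
 */
static void
app_select_trie_lookup(struct rte_fib6 *fib)
{
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR) != 0)
		rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}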

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile      |  10 ++
 lib/librte_fib/meson.build   |   9 ++
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  21 ++++
 lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h |  20 ++++
 6 files changed, 331 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 3958da1..761c7c8 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
 		CFLAGS_dir24_8_avx512.o += -mavx512f
 		CFLAGS_dir24_8_avx512.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
+			CFLAGS_trie_avx512.o += -mavx512f
+			CFLAGS_trie_avx512.o += -mavx512dq
+			CFLAGS_trie_avx512.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
+		endif
 	endif
 endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0963f3c..98adf11 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -14,5 +14,14 @@ if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
 			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f'] + \
+					['-mavx512dq'] + ['-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index b70369a..c55efdf 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,7 +53,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938..ee6864e 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiller happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v3 8/8] app/testfib: add support for different lookup functions
  2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
                     ` (7 preceding siblings ...)
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-05-19 12:13   ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-05-19 12:13 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.
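For example, "-v v" selects the vector (AVX512) lookup and "-v s1"/"-v s2"/
"-v s3" select the three scalar DIR24_8 implementations; for the IPv6 trie
only "-v s" (scalar) and "-v v" (vector) are meaningful.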

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 6e80d65..b72e8c4 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -636,7 +638,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -679,7 +685,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -767,6 +773,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -844,6 +866,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1023,6 +1063,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/8] fib: implement AVX512 vector lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
@ 2020-05-19 12:23     ` David Marchand
  2020-05-19 12:57       ` Medvedkin, Vladimir
  2020-06-19 10:34     ` Medvedkin, Vladimir
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-05-19 12:23 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

On Tue, May 19, 2020 at 2:15 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.

Is this series missing a 20.08 prefix in the titles?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/8] fib: implement AVX512 vector lookup
  2020-05-19 12:23     ` David Marchand
@ 2020-05-19 12:57       ` Medvedkin, Vladimir
  2020-05-19 13:00         ` David Marchand
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-05-19 12:57 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

Hi,

On 19/05/2020 13:23, David Marchand wrote:
> On Tue, May 19, 2020 at 2:15 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>> This patch series implements vectorized lookup using AVX512 for
>> ipv4 dir24_8 and ipv6 trie algorithms.
>> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
>> Added option to select lookup function type in testfib application.
> Is this series missing a 20.08 prefix in the titles?


Ah yes, forgot about it. Do you need me to resend this series with a prefix?


>
-- 
Regards,
Vladimir


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/8] fib: implement AVX512 vector lookup
  2020-05-19 12:57       ` Medvedkin, Vladimir
@ 2020-05-19 13:00         ` David Marchand
  0 siblings, 0 replies; 199+ messages in thread
From: David Marchand @ 2020-05-19 13:00 UTC (permalink / raw)
  To: Medvedkin, Vladimir; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

On Tue, May 19, 2020 at 2:57 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
>
> Hi,
>
> On 19/05/2020 13:23, David Marchand wrote:
> > On Tue, May 19, 2020 at 2:15 PM Vladimir Medvedkin
> > <vladimir.medvedkin@intel.com> wrote:
> >> This patch series implements vectorized lookup using AVX512 for
> >> ipv4 dir24_8 and ipv6 trie algorithms.
> >> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> >> Added option to select lookup function type in testfib application.
> > Is this series missing a 20.08 prefix in the titles?
>
>
> Ah yes, forgot about it. Do you need me to resend this series with a prefix?

I will mark it as deferred for 20.08, no need to resend.
Thanks.

-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/8] fib: implement AVX512 vector lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
  2020-05-19 12:23     ` David Marchand
@ 2020-06-19 10:34     ` Medvedkin, Vladimir
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-06-19 10:34 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson, Thomas Monjalon

Waiting for reviews please.

On 19/05/2020 13:12, Vladimir Medvedkin wrote:
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.
>
> v3:
>   - separate out the AVX-512 code into a separate file
>
> v2:
>   - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
>   - make runtime decision to use avx512 lookup
>
> Vladimir Medvedkin (8):
>    eal: introduce zmm type for AVX 512-bit
>    fib: make lookup function type configurable
>    fib: move lookup definition into the header file
>    fib: introduce AVX512 lookup
>    fib6: make lookup function type configurable
>    fib6: move lookup definition into the header file
>    fib6: introduce AVX512 lookup
>    app/testfib: add support for different lookup functions
>
>   app/test-fib/main.c                   |  58 ++++++-
>   lib/librte_eal/x86/include/rte_vect.h |  20 +++
>   lib/librte_fib/Makefile               |  24 +++
>   lib/librte_fib/dir24_8.c              | 281 ++++++----------------------------
>   lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++++++++-
>   lib/librte_fib/dir24_8_avx512.c       | 165 ++++++++++++++++++++
>   lib/librte_fib/dir24_8_avx512.h       |  24 +++
>   lib/librte_fib/meson.build            |  20 +++
>   lib/librte_fib/rte_fib.c              |  20 ++-
>   lib/librte_fib/rte_fib.h              |  23 +++
>   lib/librte_fib/rte_fib6.c             |  19 ++-
>   lib/librte_fib/rte_fib6.h             |  21 +++
>   lib/librte_fib/rte_fib_version.map    |   2 +
>   lib/librte_fib/trie.c                 | 161 ++++---------------
>   lib/librte_fib/trie.h                 | 119 +++++++++++++-
>   lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++++++++++
>   lib/librte_fib/trie_avx512.h          |  20 +++
>   17 files changed, 1100 insertions(+), 372 deletions(-)
>   create mode 100644 lib/librte_fib/dir24_8_avx512.c
>   create mode 100644 lib/librte_fib/dir24_8_avx512.h
>   create mode 100644 lib/librte_fib/trie_avx512.c
>   create mode 100644 lib/librte_fib/trie_avx512.h
>
-- 
Regards,
Vladimir


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-06-24 13:14     ` Ananyev, Konstantin
  2020-07-06 17:28     ` Thomas Monjalon
  1 sibling, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-06-24 13:14 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev; +Cc: Richardson, Bruce

> 
> New data type to manipulate 512 bit AVX values.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_eal/x86/include/rte_vect.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
> index df5a607..ffe4f7d 100644
> --- a/lib/librte_eal/x86/include/rte_vect.h
> +++ b/lib/librte_eal/x86/include/rte_vect.h
> @@ -90,6 +90,26 @@ __extension__ ({                 \
>  })
>  #endif /* (defined(__ICC) && __ICC < 1210) */
> 
> +#ifdef __AVX512F__
> +
> +typedef __m512i __x86_zmm_t;
> +
> +#define	ZMM_SIZE	(sizeof(__x86_zmm_t))
> +#define	ZMM_MASK	(ZMM_SIZE - 1)
> +
> +typedef union __rte_x86_zmm  {
> +	__x86_zmm_t	 z;
> +	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
> +	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
> +	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
> +	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
> +	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
> +	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
> +	double   pd[ZMM_SIZE / sizeof(double)];
> +} __attribute__((__aligned__(ZMM_SIZE)))  __rte_x86_zmm_t;
> +
> +#endif /* __AVX512F__ */
> +
>  #ifdef __cplusplus
>  }
>  #endif
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-06-24 13:18     ` Ananyev, Konstantin
  2020-07-08 19:57       ` Medvedkin, Vladimir
  2020-07-06 19:21     ` Thomas Monjalon
  2020-07-07  9:44     ` Bruce Richardson
  2 siblings, 1 reply; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-06-24 13:18 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev; +Cc: Richardson, Bruce


> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/Makefile         |  14 ++++
>  lib/librte_fib/dir24_8.c        |  24 ++++++
>  lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h |  24 ++++++
>  lib/librte_fib/meson.build      |  11 +++
>  lib/librte_fib/rte_fib.h        |   3 +-
>  6 files changed, 240 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
> 
> diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
> index 1dd2a49..3958da1 100644
> --- a/lib/librte_fib/Makefile
> +++ b/lib/librte_fib/Makefile
> @@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
>  # install this header file
>  SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
> 
> +CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
> +grep -q __AVX512F__ && echo 1)
> +
> +CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
> +grep -q __AVX512DQ__ && echo 1)
> +
> +ifeq ($(CC_AVX512F_SUPPORT), 1)
> +	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
> +		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
> +		CFLAGS_dir24_8_avx512.o += -mavx512f
> +		CFLAGS_dir24_8_avx512.o += -mavx512dq
> +		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
> +	endif
> +endif
>  include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
> index 9d74653..0a1c53f 100644
> --- a/lib/librte_fib/dir24_8.c
> +++ b/lib/librte_fib/dir24_8.c
> @@ -18,6 +18,12 @@
>  #include <rte_fib.h>
>  #include "dir24_8.h"
> 
> +#ifdef CC_DIR24_8_AVX512_SUPPORT
> +
> +#include "dir24_8_avx512.h"
> +
> +#endif /* CC_DIR24_8_AVX512_SUPPORT */
> +
>  #define DIR24_8_NAMESIZE	64
> 
>  #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
> @@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>  		}
>  	case RTE_FIB_DIR24_8_SCALAR_UNI:
>  		return dir24_8_lookup_bulk_uni;
> +#ifdef CC_DIR24_8_AVX512_SUPPORT
> +	case RTE_FIB_DIR24_8_VECTOR:
> +		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
> +			return NULL;
> +
> +		switch (nh_sz) {
> +		case RTE_FIB_DIR24_8_1B:
> +			return rte_dir24_8_vec_lookup_bulk_1b;
> +		case RTE_FIB_DIR24_8_2B:
> +			return rte_dir24_8_vec_lookup_bulk_2b;
> +		case RTE_FIB_DIR24_8_4B:
> +			return rte_dir24_8_vec_lookup_bulk_4b;
> +		case RTE_FIB_DIR24_8_8B:
> +			return rte_dir24_8_vec_lookup_bulk_8b;
> +		default:
> +			return NULL;
> +		}
> +#endif
>  	default:
>  		return NULL;
>  	}
> diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
> new file mode 100644
> index 0000000..43dba28
> --- /dev/null
> +++ b/lib/librte_fib/dir24_8_avx512.c
> @@ -0,0 +1,165 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#include <rte_vect.h>
> +#include <rte_fib.h>
> +
> +#include "dir24_8.h"
> +#include "dir24_8_avx512.h"
> +
> +static __rte_always_inline void
> +dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, int size)
> +{
> +	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
> +	__mmask16 msk_ext;
> +	__mmask16 exp_msk = 0x5555;
> +	__m512i ip_vec, idxes, res, bytes;
> +	const __m512i zero = _mm512_set1_epi32(0);
> +	const __m512i lsb = _mm512_set1_epi32(1);
> +	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
> +	__m512i tmp1, tmp2, res_msk;
> +	__m256i tmp256;
> +	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
> +	if (size == sizeof(uint8_t))
> +		res_msk = _mm512_set1_epi32(UINT8_MAX);
> +	else if (size == sizeof(uint16_t))
> +		res_msk = _mm512_set1_epi32(UINT16_MAX);
> +
> +	ip_vec = _mm512_loadu_si512(ips);
> +	/* mask 24 most significant bits */
> +	idxes = _mm512_srli_epi32(ip_vec, 8);
> +
> +	/**
> +	 * lookup in tbl24
> +	 * Put it inside branch to make compiler happy with -O0
> +	 */
> +	if (size == sizeof(uint8_t)) {
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
> +		res = _mm512_and_epi32(res, res_msk);
> +	} else if (size == sizeof(uint16_t)) {
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
> +		res = _mm512_and_epi32(res, res_msk);
> +	} else
> +		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
> +
> +	/* get extended entries indexes */
> +	msk_ext = _mm512_test_epi32_mask(res, lsb);
> +
> +	if (msk_ext != 0) {
> +		idxes = _mm512_srli_epi32(res, 1);
> +		idxes = _mm512_slli_epi32(idxes, 8);
> +		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
> +		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
> +		if (size == sizeof(uint8_t)) {
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 1);
> +			idxes = _mm512_and_epi32(idxes, res_msk);
> +		} else if (size == sizeof(uint16_t)) {
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 2);
> +			idxes = _mm512_and_epi32(idxes, res_msk);
> +		} else
> +			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
> +				idxes, (const int *)dp->tbl8, 4);
> +
> +		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
> +	}
> +
> +	res = _mm512_srli_epi32(res, 1);
> +	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
> +	tmp256 = _mm512_extracti32x8_epi32(res, 1);
> +	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
> +		_mm512_castsi256_si512(tmp256));
> +	_mm512_storeu_si512(next_hops, tmp1);
> +	_mm512_storeu_si512(next_hops + 8, tmp2);
> +}
> +
> +static __rte_always_inline void
> +dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops)
> +{
> +	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
> +	const __m512i zero = _mm512_set1_epi32(0);
> +	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
> +	const __m512i lsb = _mm512_set1_epi64(1);
> +	__m512i res, idxes, bytes;
> +	__m256i idxes_256, ip_vec;
> +	__mmask8 msk_ext;
> +
> +	ip_vec = _mm256_loadu_si256((const void *)ips);
> +	/* mask 24 most significant bits */
> +	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
> +
> +	/* lookup in tbl24 */
> +	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
> +
> +	/* get extended entries indexes */
> +	msk_ext = _mm512_test_epi64_mask(res, lsb);
> +
> +	if (msk_ext != 0) {
> +		bytes = _mm512_cvtepi32_epi64(ip_vec);
> +		idxes = _mm512_srli_epi64(res, 1);
> +		idxes = _mm512_slli_epi64(idxes, 8);
> +		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
> +		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
> +		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
> +			(const void *)dp->tbl8, 8);
> +
> +		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
> +	}
> +
> +	res = _mm512_srli_epi64(res, 1);
> +	_mm512_storeu_si512(next_hops, res);
> +}
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint8_t));
> +

Just curious: if, for the remainder, you introduced a masked version of
the AVX512 lookup instead of calling the scalar lookup - would it be slower?

> +	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint16_t));
> +
> +	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 16); i++)
> +		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
> +			sizeof(uint32_t));
> +
> +	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
> +		n - i * 16);
> +}
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n)
> +{
> +	uint32_t i;
> +	for (i = 0; i < (n / 8); i++)
> +		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
> +
> +	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
> +}
> diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
> new file mode 100644
> index 0000000..1d3c2b9
> --- /dev/null
> +++ b/lib/librte_fib/dir24_8_avx512.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#ifndef _DIR248_AVX512_H_
> +#define _DIR248_AVX512_H_
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n);
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n);
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n);
> +
> +void
> +rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
> +	uint64_t *next_hops, const unsigned int n);
> +
> +#endif /* _DIR248_AVX512_H_ */
> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
> index 771828f..0963f3c 100644
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> @@ -5,3 +5,14 @@
>  sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>  headers = files('rte_fib.h', 'rte_fib6.h')
>  deps += ['rib']
> +
> +if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
> +	if cc.has_argument('-mavx512dq')
> +		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
> +			'dir24_8_avx512.c',
> +			dependencies: static_rte_eal,
> +			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
> +		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
> +		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
> +	endif
> +endif
> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
> index db35685..2919d13 100644
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> @@ -54,7 +54,8 @@ enum rte_fib_dir24_8_nh_sz {
>  enum rte_fib_dir24_8_lookup_type {
>  	RTE_FIB_DIR24_8_SCALAR_MACRO,
>  	RTE_FIB_DIR24_8_SCALAR_INLINE,
> -	RTE_FIB_DIR24_8_SCALAR_UNI
> +	RTE_FIB_DIR24_8_SCALAR_UNI,
> +	RTE_FIB_DIR24_8_VECTOR
>  };
> 
>  /** FIB configuration structure */
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
  2020-06-24 13:14     ` Ananyev, Konstantin
@ 2020-07-06 17:28     ` Thomas Monjalon
  1 sibling, 0 replies; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-06 17:28 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, konstantin.ananyev, bruce.richardson

19/05/2020 14:12, Vladimir Medvedkin:
> New data type to manipulate 512 bit AVX values.
[...]
> +typedef union __rte_x86_zmm  {
> +	__x86_zmm_t	 z;
> +	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
> +	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
> +	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
> +	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
> +	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
> +	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
> +	double   pd[ZMM_SIZE / sizeof(double)];
> +} __attribute__((__aligned__(ZMM_SIZE)))  __rte_x86_zmm_t;

Should be __rte_aligned(ZMM_SIZE)



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
  2020-06-24 13:18     ` Ananyev, Konstantin
@ 2020-07-06 19:21     ` Thomas Monjalon
  2020-07-08 20:19       ` Medvedkin, Vladimir
  2020-07-07  9:44     ` Bruce Richardson
  2 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-06 19:21 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, konstantin.ananyev, bruce.richardson

19/05/2020 14:12, Vladimir Medvedkin:
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> +if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
> +	if cc.has_argument('-mavx512dq')
> +		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
> +			'dir24_8_avx512.c',
> +			dependencies: static_rte_eal,
> +			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
> +		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
> +		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
> +	endif
> +endif

I don't want to try understanding what this hack is.
But please add comments around it, so we will understand why
compilation fails:

In file included from ../../dpdk/lib/librte_fib/dir24_8_avx512.c:5:
../../dpdk/lib/librte_eal/x86/include/rte_vect.h:97:18: error: expected declaration specifiers or ‘...’ before ‘(’ token
   97 | #define ZMM_SIZE (sizeof(__x86_zmm_t))
      |                  ^




^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
  2020-06-24 13:18     ` Ananyev, Konstantin
  2020-07-06 19:21     ` Thomas Monjalon
@ 2020-07-07  9:44     ` Bruce Richardson
  2 siblings, 0 replies; 199+ messages in thread
From: Bruce Richardson @ 2020-07-07  9:44 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, konstantin.ananyev

On Tue, May 19, 2020 at 01:12:59PM +0100, Vladimir Medvedkin wrote:
> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/Makefile         |  14 ++++
>  lib/librte_fib/dir24_8.c        |  24 ++++++
>  lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h |  24 ++++++
>  lib/librte_fib/meson.build      |  11 +++
>  lib/librte_fib/rte_fib.h        |   3 +-
>  6 files changed, 240 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
>
<snip> 
> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
> index 771828f..0963f3c 100644
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> @@ -5,3 +5,14 @@
>  sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>  headers = files('rte_fib.h', 'rte_fib6.h')
>  deps += ['rib']
> +
> +if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
> +	if cc.has_argument('-mavx512dq')
> +		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
> +			'dir24_8_avx512.c',
> +			dependencies: static_rte_eal,
> +			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
> +		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
> +		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
> +	endif
> +endif

This block looks wrong to me, especially compared with the equivalent
block in drivers/net/i40e. Firstly, the two if conditions are unnecessary
and can be merged. Secondly, I think you should restructure it so that you
first check whether AVX-512 is already enabled in the build, and only if it
is not should you check for compiler support and use the static lib
workaround to get just the one file compiled with AVX-512. As Thomas
suggested, a comment explaining this would also help - again, copying what
is in the i40e/meson.build file would probably be a good start.

/Bruce


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-08 11:23     ` Ananyev, Konstantin
  0 siblings, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-08 11:23 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev; +Cc: Richardson, Bruce

 
> Move dir24_8 table layout and lookup definition into the
> private header file. This is necessary for implementing a
> vectorized lookup function in a separate .c file.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-08 11:27     ` Ananyev, Konstantin
  0 siblings, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-08 11:27 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev; +Cc: Richardson, Bruce


> Move trie table layout and lookup definition into the
> private header file. This is necessary for implementing a
> vectorized lookup function in a separate .c file.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup
  2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-08 12:23     ` Ananyev, Konstantin
  2020-07-08 19:56       ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-08 12:23 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev; +Cc: Richardson, Bruce


> 
> Add new lookup implementation for FIB6 trie algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/librte_fib/Makefile      |  10 ++
>  lib/librte_fib/meson.build   |   9 ++
>  lib/librte_fib/rte_fib6.h    |   3 +-
>  lib/librte_fib/trie.c        |  21 ++++
>  lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++++++++++
>  lib/librte_fib/trie_avx512.h |  20 ++++
>  6 files changed, 331 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_fib/trie_avx512.c
>  create mode 100644 lib/librte_fib/trie_avx512.h
> 
> diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
> index 3958da1..761c7c8 100644
> --- a/lib/librte_fib/Makefile
> +++ b/lib/librte_fib/Makefile
> @@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
>  CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
>  grep -q __AVX512DQ__ && echo 1)
> 
> +CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
> +grep -q __AVX512BW__ && echo 1)
> +
>  ifeq ($(CC_AVX512F_SUPPORT), 1)
>  	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
>  		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
>  		CFLAGS_dir24_8_avx512.o += -mavx512f
>  		CFLAGS_dir24_8_avx512.o += -mavx512dq
>  		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
> +		ifeq ($(CC_AVX512BW_SUPPORT), 1)
> +			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
> +			CFLAGS_trie_avx512.o += -mavx512f
> +			CFLAGS_trie_avx512.o += -mavx512dq
> +			CFLAGS_trie_avx512.o += -mavx512bw
> +			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
> +		endif
>  	endif
>  endif
>  include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
> index 0963f3c..98adf11 100644
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> @@ -14,5 +14,14 @@ if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
>  			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
>  		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
>  		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
> +		if cc.has_argument('-mavx512bw')
> +			trie_avx512_tmp = static_library('trie_avx512_tmp',
> +				'trie_avx512.c',
> +				dependencies: static_rte_eal,
> +				c_args: cflags + ['-mavx512f'] + \
> +					['-mavx512dq'] + ['-mavx512bw'])
> +			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
> +			cflags += '-DCC_TRIE_AVX512_SUPPORT'
> +		endif
>  	endif
>  endif
> diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
> index b70369a..c55efdf 100644
> --- a/lib/librte_fib/rte_fib6.h
> +++ b/lib/librte_fib/rte_fib6.h
> @@ -53,7 +53,8 @@ enum rte_fib_trie_nh_sz {
>  };
> 
>  enum rte_fib_trie_lookup_type {
> -	RTE_FIB6_TRIE_SCALAR
> +	RTE_FIB6_TRIE_SCALAR,
> +	RTE_FIB6_TRIE_VECTOR

As a nit - does this enum need to be public?
If it does, then it is probably worth naming it VECTOR_AVX512,
in case someone in the future wants to add another vector implementation.
The same thought probably applies to v4.
Apart from that - LGTM.
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> 


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup
  2020-07-08 12:23     ` Ananyev, Konstantin
@ 2020-07-08 19:56       ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-08 19:56 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Richardson, Bruce

Hi Konstantin,

Thanks for review,

On 08/07/2020 13:23, Ananyev, Konstantin wrote:
> 
>>
>> Add new lookup implementation for FIB6 trie algorithm using
>> AVX512 instruction set
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
>>   lib/librte_fib/Makefile      |  10 ++
>>   lib/librte_fib/meson.build   |   9 ++
>>   lib/librte_fib/rte_fib6.h    |   3 +-
>>   lib/librte_fib/trie.c        |  21 ++++
>>   lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++++++++++
>>   lib/librte_fib/trie_avx512.h |  20 ++++
>>   6 files changed, 331 insertions(+), 1 deletion(-)
>>   create mode 100644 lib/librte_fib/trie_avx512.c
>>   create mode 100644 lib/librte_fib/trie_avx512.h
>>
>> diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
>> index 3958da1..761c7c8 100644
>> --- a/lib/librte_fib/Makefile
>> +++ b/lib/librte_fib/Makefile
>> @@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
>>   CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
>>   grep -q __AVX512DQ__ && echo 1)
>>
>> +CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
>> +grep -q __AVX512BW__ && echo 1)
>> +
>>   ifeq ($(CC_AVX512F_SUPPORT), 1)
>>   ifeq ($(CC_AVX512DQ_SUPPORT), 1)
>>   SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
>>   CFLAGS_dir24_8_avx512.o += -mavx512f
>>   CFLAGS_dir24_8_avx512.o += -mavx512dq
>>   CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
>> +ifeq ($(CC_AVX512BW_SUPPORT), 1)
>> +SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
>> +CFLAGS_trie_avx512.o += -mavx512f
>> +CFLAGS_trie_avx512.o += -mavx512dq
>> +CFLAGS_trie_avx512.o += -mavx512bw
>> +CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
>> +endif
>>   endif
>>   endif
>>   include $(RTE_SDK)/mk/rte.lib.mk
>> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
>> index 0963f3c..98adf11 100644
>> --- a/lib/librte_fib/meson.build
>> +++ b/lib/librte_fib/meson.build
>> @@ -14,5 +14,14 @@ if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
>>   c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
>>   objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
>>   cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
>> +if cc.has_argument('-mavx512bw')
>> +trie_avx512_tmp = static_library('trie_avx512_tmp',
>> +'trie_avx512.c',
>> +dependencies: static_rte_eal,
>> +c_args: cflags + ['-mavx512f'] + \
>> +['-mavx512dq'] + ['-mavx512bw'])
>> +objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
>> +cflags += '-DCC_TRIE_AVX512_SUPPORT'
>> +endif
>>   endif
>>   endif
>> diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
>> index b70369a..c55efdf 100644
>> --- a/lib/librte_fib/rte_fib6.h
>> +++ b/lib/librte_fib/rte_fib6.h
>> @@ -53,7 +53,8 @@ enum rte_fib_trie_nh_sz {
>>   };
>>
>>   enum rte_fib_trie_lookup_type {
>> -RTE_FIB6_TRIE_SCALAR
>> +RTE_FIB6_TRIE_SCALAR,
>> +RTE_FIB6_TRIE_VECTOR
> 
> As a nit - does this enum need to be public?
> If it does, then it is probably worth naming it VECTOR_AVX512,
> in case someone in the future wants to add another vector implementation.
> The same thought probably applies to v4.

This enum is used with rte_fib_set_lookup_fn(), so it needs to be public.
I'll change the name to use the _AVX512 suffix. The same for v4.
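Roughly (illustrative sketch only, the final names may differ):

enum rte_fib_trie_lookup_type {
	RTE_FIB6_TRIE_SCALAR,
	RTE_FIB6_TRIE_VECTOR_AVX512
};

and likewise an _AVX512 suffix for the DIR24_8 lookup type enum in v4.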

> Apart from that - LGTM.
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-06-24 13:18     ` Ananyev, Konstantin
@ 2020-07-08 19:57       ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-08 19:57 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Richardson, Bruce



On 24/06/2020 14:18, Ananyev, Konstantin wrote:
> 
>> Add new lookup implementation for DIR24_8 algorithm using
>> AVX512 instruction set
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
>>   lib/librte_fib/Makefile         |  14 ++++
>>   lib/librte_fib/dir24_8.c        |  24 ++++++
>>   lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++++++++++
>>   lib/librte_fib/dir24_8_avx512.h |  24 ++++++
>>   lib/librte_fib/meson.build      |  11 +++
>>   lib/librte_fib/rte_fib.h        |   3 +-
>>   6 files changed, 240 insertions(+), 1 deletion(-)
>>   create mode 100644 lib/librte_fib/dir24_8_avx512.c
>>   create mode 100644 lib/librte_fib/dir24_8_avx512.h
>>
>> diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
>> index 1dd2a49..3958da1 100644
>> --- a/lib/librte_fib/Makefile
>> +++ b/lib/librte_fib/Makefile
>> @@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
>>   # install this header file
>>   SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
>>
>> +CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
>> +grep -q __AVX512F__ && echo 1)
>> +
>> +CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
>> +grep -q __AVX512DQ__ && echo 1)
>> +
>> +ifeq ($(CC_AVX512F_SUPPORT), 1)
>> +ifeq ($(CC_AVX512DQ_SUPPORT), 1)
>> +SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
>> +CFLAGS_dir24_8_avx512.o += -mavx512f
>> +CFLAGS_dir24_8_avx512.o += -mavx512dq
>> +CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
>> +endif
>> +endif
>>   include $(RTE_SDK)/mk/rte.lib.mk
>> diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
>> index 9d74653..0a1c53f 100644
>> --- a/lib/librte_fib/dir24_8.c
>> +++ b/lib/librte_fib/dir24_8.c
>> @@ -18,6 +18,12 @@
>>   #include <rte_fib.h>
>>   #include "dir24_8.h"
>>
>> +#ifdef CC_DIR24_8_AVX512_SUPPORT
>> +
>> +#include "dir24_8_avx512.h"
>> +
>> +#endif /* CC_DIR24_8_AVX512_SUPPORT */
>> +
>>   #define DIR24_8_NAMESIZE	64
>>
>>   #define ROUNDUP(x, y) RTE_ALIGN_CEIL(x, (1 << (32 - y)))
>> @@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
>>   }
>>   case RTE_FIB_DIR24_8_SCALAR_UNI:
>>   return dir24_8_lookup_bulk_uni;
>> +#ifdef CC_DIR24_8_AVX512_SUPPORT
>> +case RTE_FIB_DIR24_8_VECTOR:
>> +if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
>> +return NULL;
>> +
>> +switch (nh_sz) {
>> +case RTE_FIB_DIR24_8_1B:
>> +return rte_dir24_8_vec_lookup_bulk_1b;
>> +case RTE_FIB_DIR24_8_2B:
>> +return rte_dir24_8_vec_lookup_bulk_2b;
>> +case RTE_FIB_DIR24_8_4B:
>> +return rte_dir24_8_vec_lookup_bulk_4b;
>> +case RTE_FIB_DIR24_8_8B:
>> +return rte_dir24_8_vec_lookup_bulk_8b;
>> +default:
>> +return NULL;
>> +}
>> +#endif
>>   default:
>>   return NULL;
>>   }
>> diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
>> new file mode 100644
>> index 0000000..43dba28
>> --- /dev/null
>> +++ b/lib/librte_fib/dir24_8_avx512.c
>> @@ -0,0 +1,165 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2020 Intel Corporation
>> + */
>> +
>> +#include <rte_vect.h>
>> +#include <rte_fib.h>
>> +
>> +#include "dir24_8.h"
>> +#include "dir24_8_avx512.h"
>> +
>> +static __rte_always_inline void
>> +dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, int size)
>> +{
>> +struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
>> +__mmask16 msk_ext;
>> +__mmask16 exp_msk = 0x5555;
>> +__m512i ip_vec, idxes, res, bytes;
>> +const __m512i zero = _mm512_set1_epi32(0);
>> +const __m512i lsb = _mm512_set1_epi32(1);
>> +const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
>> +__m512i tmp1, tmp2, res_msk;
>> +__m256i tmp256;
>> +/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
>> +if (size == sizeof(uint8_t))
>> +res_msk = _mm512_set1_epi32(UINT8_MAX);
>> +else if (size == sizeof(uint16_t))
>> +res_msk = _mm512_set1_epi32(UINT16_MAX);
>> +
>> +ip_vec = _mm512_loadu_si512(ips);
>> +/* mask 24 most significant bits */
>> +idxes = _mm512_srli_epi32(ip_vec, 8);
>> +
>> +/**
>> + * lookup in tbl24
>> + * Put it inside branch to make compiler happy with -O0
>> + */
>> +if (size == sizeof(uint8_t)) {
>> +res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
>> +res = _mm512_and_epi32(res, res_msk);
>> +} else if (size == sizeof(uint16_t)) {
>> +res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
>> +res = _mm512_and_epi32(res, res_msk);
>> +} else
>> +res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
>> +
>> +/* get extended entries indexes */
>> +msk_ext = _mm512_test_epi32_mask(res, lsb);
>> +
>> +if (msk_ext != 0) {
>> +idxes = _mm512_srli_epi32(res, 1);
>> +idxes = _mm512_slli_epi32(idxes, 8);
>> +bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
>> +idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
>> +if (size == sizeof(uint8_t)) {
>> +idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
>> +idxes, (const int *)dp->tbl8, 1);
>> +idxes = _mm512_and_epi32(idxes, res_msk);
>> +} else if (size == sizeof(uint16_t)) {
>> +idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
>> +idxes, (const int *)dp->tbl8, 2);
>> +idxes = _mm512_and_epi32(idxes, res_msk);
>> +} else
>> +idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
>> +idxes, (const int *)dp->tbl8, 4);
>> +
>> +res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
>> +}
>> +
>> +res = _mm512_srli_epi32(res, 1);
>> +tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
>> +tmp256 = _mm512_extracti32x8_epi32(res, 1);
>> +tmp2 = _mm512_maskz_expand_epi32(exp_msk,
>> +_mm512_castsi256_si512(tmp256));
>> +_mm512_storeu_si512(next_hops, tmp1);
>> +_mm512_storeu_si512(next_hops + 8, tmp2);
>> +}
>> +
>> +static __rte_always_inline void
>> +dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops)
>> +{
>> +struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
>> +const __m512i zero = _mm512_set1_epi32(0);
>> +const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
>> +const __m512i lsb = _mm512_set1_epi64(1);
>> +__m512i res, idxes, bytes;
>> +__m256i idxes_256, ip_vec;
>> +__mmask8 msk_ext;
>> +
>> +ip_vec = _mm256_loadu_si256((const void *)ips);
>> +/* mask 24 most significant bits */
>> +idxes_256 = _mm256_srli_epi32(ip_vec, 8);
>> +
>> +/* lookup in tbl24 */
>> +res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
>> +
>> +/* get extended entries indexes */
>> +msk_ext = _mm512_test_epi64_mask(res, lsb);
>> +
>> +if (msk_ext != 0) {
>> +bytes = _mm512_cvtepi32_epi64(ip_vec);
>> +idxes = _mm512_srli_epi64(res, 1);
>> +idxes = _mm512_slli_epi64(idxes, 8);
>> +bytes = _mm512_and_epi64(bytes, lsbyte_msk);
>> +idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
>> +idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
>> +(const void *)dp->tbl8, 8);
>> +
>> +res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
>> +}
>> +
>> +res = _mm512_srli_epi64(res, 1);
>> +_mm512_storeu_si512(next_hops, res);
>> +}
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n)
>> +{
>> +uint32_t i;
>> +for (i = 0; i < (n / 16); i++)
>> +dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
>> +sizeof(uint8_t));
>> +
> 
> Just curious: if, for the remainder, you introduced a masked version of
> the AVX512 lookup instead of calling the scalar lookup - would it be slower?

As was discussed offline, I tried it, and it is slower than using the scalar
lookup for the remainder.
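
For reference, the masked-remainder variant I tried looked roughly like the
sketch below (4 byte next hop case only; the function name and the tail
store are illustrative, not the exact code that was benchmarked; assumes
rte_vect.h and dir24_8.h are included):

static void
dir24_8_vec_lookup_rem_4b(void *p, const uint32_t *ips,
	uint64_t *next_hops, unsigned int rem)
{
	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
	/* enable only the rem (< 16) remaining lanes */
	const __mmask16 rem_msk = (__mmask16)((1u << rem) - 1);
	const __m512i zero = _mm512_setzero_si512();
	const __m512i lsb = _mm512_set1_epi32(1);
	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
	__m512i ip_vec, idxes, bytes, res;
	__mmask16 msk_ext;
	uint32_t nh[16];
	unsigned int i;

	/* masked load of the remaining addresses, inactive lanes stay zero */
	ip_vec = _mm512_maskz_loadu_epi32(rem_msk, ips);
	idxes = _mm512_srli_epi32(ip_vec, 8);

	/* masked gather from tbl24 */
	res = _mm512_mask_i32gather_epi32(zero, rem_msk, idxes,
		(const int *)dp->tbl24, 4);

	/* resolve extended entries only for the active lanes */
	msk_ext = _mm512_mask_test_epi32_mask(rem_msk, res, lsb);
	if (msk_ext != 0) {
		idxes = _mm512_slli_epi32(_mm512_srli_epi32(res, 1), 8);
		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
		idxes = _mm512_mask_i32gather_epi32(zero, msk_ext, idxes,
			(const int *)dp->tbl8, 4);
		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
	}
	res = _mm512_srli_epi32(res, 1);

	/* widen the 32 bit results to 64 bit next hops */
	_mm512_storeu_si512(nh, res);
	for (i = 0; i < rem; i++)
		next_hops[i] = nh[i];
}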

> 
>> +dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
>> +n - i * 16);
>> +}
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n)
>> +{
>> +uint32_t i;
>> +for (i = 0; i < (n / 16); i++)
>> +dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
>> +sizeof(uint16_t));
>> +
>> +dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
>> +n - i * 16);
>> +}
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n)
>> +{
>> +uint32_t i;
>> +for (i = 0; i < (n / 16); i++)
>> +dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
>> +sizeof(uint32_t));
>> +
>> +dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
>> +n - i * 16);
>> +}
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n)
>> +{
>> +uint32_t i;
>> +for (i = 0; i < (n / 8); i++)
>> +dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
>> +
>> +dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
>> +}
>> diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
>> new file mode 100644
>> index 0000000..1d3c2b9
>> --- /dev/null
>> +++ b/lib/librte_fib/dir24_8_avx512.h
>> @@ -0,0 +1,24 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2020 Intel Corporation
>> + */
>> +
>> +#ifndef _DIR248_AVX512_H_
>> +#define _DIR248_AVX512_H_
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n);
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n);
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n);
>> +
>> +void
>> +rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
>> +uint64_t *next_hops, const unsigned int n);
>> +
>> +#endif /* _DIR248_AVX512_H_ */
>> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
>> index 771828f..0963f3c 100644
>> --- a/lib/librte_fib/meson.build
>> +++ b/lib/librte_fib/meson.build
>> @@ -5,3 +5,14 @@
>>   sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>>   headers = files('rte_fib.h', 'rte_fib6.h')
>>   deps += ['rib']
>> +
>> +if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
>> +if cc.has_argument('-mavx512dq')
>> +dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
>> +'dir24_8_avx512.c',
>> +dependencies: static_rte_eal,
>> +c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
>> +objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
>> +cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
>> +endif
>> +endif
>> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
>> index db35685..2919d13 100644
>> --- a/lib/librte_fib/rte_fib.h
>> +++ b/lib/librte_fib/rte_fib.h
>> @@ -54,7 +54,8 @@ enum rte_fib_dir24_8_nh_sz {
>>   enum rte_fib_dir24_8_lookup_type {
>>   RTE_FIB_DIR24_8_SCALAR_MACRO,
>>   RTE_FIB_DIR24_8_SCALAR_INLINE,
>> -RTE_FIB_DIR24_8_SCALAR_UNI
>> +RTE_FIB_DIR24_8_SCALAR_UNI,
>> +RTE_FIB_DIR24_8_VECTOR
>>   };
>>
>>   /** FIB configuration structure */
>> --
> 
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
>> 2.7.4
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 0/8] fib: implement AVX512 vector lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
  2020-05-19 12:23     ` David Marchand
  2020-06-19 10:34     ` Medvedkin, Vladimir
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                         ` (8 more replies)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
                       ` (7 subsequent siblings)
  10 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal: introduce zmm type for AVX 512-bit
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 +++++-
 lib/librte_eal/x86/include/rte_vect.h |  21 ++
 lib/librte_fib/Makefile               |  24 +++
 lib/librte_fib/dir24_8.c              | 281 +++++---------------------
 lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
 lib/librte_fib/dir24_8_avx512.h       |  24 +++
 lib/librte_fib/meson.build            |  31 +++
 lib/librte_fib/rte_fib.c              |  21 +-
 lib/librte_fib/rte_fib.h              |  24 +++
 lib/librte_fib/rte_fib6.c             |  20 +-
 lib/librte_fib/rte_fib6.h             |  22 ++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 | 161 +++------------
 lib/librte_fib/trie.h                 | 119 ++++++++++-
 lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h          |  20 ++
 17 files changed, 1116 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (2 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-09 13:48       ` David Marchand
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                       ` (6 subsequent siblings)
  10 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

New data type to manipulate 512-bit AVX values.
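
As a quick illustration (not part of the patch) of how the union exposes the
same 512-bit register through lanes of different widths; a minimal sketch
assuming an AVX512F-enabled build and the internal __rte_x86_zmm_t name, which
is only meant to be used from inside DPDK:

#include <stdio.h>
#include <rte_vect.h>

#ifdef __AVX512F__
static void
zmm_lanes_example(void)
{
	__rte_x86_zmm_t v;
	unsigned int i;

	v.z = _mm512_set1_epi32(42);	/* fill all sixteen 32-bit lanes */
	for (i = 0; i < RTE_DIM(v.u32); i++)
		printf("u32 lane %u = %u\n", i, v.u32[i]);
}
#endif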

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a60762..ae59126bc 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -90,6 +91,26 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+typedef __m512i __x86_zmm_t;
+
+#define	ZMM_SIZE	(sizeof(__x86_zmm_t))
+#define	ZMM_MASK	(ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm  {
+	__x86_zmm_t	 z;
+	ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[ZMM_SIZE / sizeof(double)];
+} __rte_aligned(ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 2/8] fib: make lookup function type configurable
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (3 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.
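
For illustration (not part of the patch), switching the lookup implementation
on an already created FIB; "fib" is an assumed handle created by the caller
with type RTE_FIB_DIR24_8:

#include <rte_fib.h>

/* request the "uni" scalar lookup, fall back to the macro one on error */
static int
fib_use_uni_lookup(struct rte_fib *fib)
{
	int ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_UNI);

	if (ret < 0)	/* -EINVAL: unknown type or not a DIR24_8 FIB */
		ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
	return ret;
}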

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 +++++++++++++++++++-----------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 23 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3cbc..825d061fd 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c0c..53c5dd29e 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e0908084f..b9f6efbb1 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774d2..892898c6f 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +202,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417d2..216af66b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 3/8] fib: move lookup definition into the header file
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (4 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move the dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +--------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061fd..9d74653cf 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd29e..56d038951 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 4/8] fib: introduce AVX512 lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (5 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a new lookup implementation for the DIR24_8 algorithm using
the AVX512 instruction set.
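
A short usage sketch (not part of the patch): the vector type can be requested
at runtime and the library verifies compile-time and CPU support itself, so a
fallback on error is enough; "fib" is an assumed DIR24_8 FIB handle:

#include <rte_fib.h>
#include <rte_cpuflags.h>

/* prefer AVX512, fall back to the scalar macro lookup otherwise */
static int
fib_use_fastest_lookup(struct rte_fib *fib)
{
	/* the explicit probe is optional: rte_fib_set_lookup_fn() performs
	 * the same RTE_CPUFLAG_AVX512F check and returns -EINVAL when the
	 * vector path is unavailable */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
			rte_fib_set_lookup_fn(fib,
				RTE_FIB_DIR24_8_VECTOR_AVX512) == 0)
		return 0;
	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}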

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile         |  14 +++
 lib/librte_fib/dir24_8.c        |  24 +++++
 lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h |  24 +++++
 lib/librte_fib/meson.build      |  18 ++++
 lib/librte_fib/rte_fib.h        |   3 +-
 6 files changed, 247 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a495b..3958da106 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
+		CFLAGS_dir24_8_avx512.o += -mavx512f
+		CFLAGS_dir24_8_avx512.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+	endif
+endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653cf..0d7bf2c9e 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 000000000..43dba28cf
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 000000000..1d3c2b931
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828fbe..d96ff0288 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,21 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
+
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 892898c6f..4a348670d 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -61,7 +61,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 5/8] fib6: make lookup function type configurable
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (6 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                       ` (2 subsequent siblings)
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a type argument to trie_get_lookup_fn().
Now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() - the user can change the lookup
function type at runtime.
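
An illustrative sketch (not part of the patch) of the IPv6 counterpart; "fib6"
is an assumed handle created by the caller with type RTE_FIB6_TRIE:

#include <rte_fib6.h>

/* re-select the scalar trie lookup on an existing IPv6 FIB */
static int
fib6_use_scalar_lookup(struct rte_fib6 *fib6)
{
	/* returns -EINVAL if the type is unknown or fib6 is not a TRIE FIB */
	return rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR);
}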

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 21 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db844..566cd5fb6 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23a8..e029c7624 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -59,6 +59,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +205,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66b3..9d1e181b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add4f..63c519a09 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5ae..0d5ef9a9f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 6/8] fib6: move lookup definition into the header file
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (7 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move the trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 ------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a09..136e938df 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a9f..663c7a90f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 7/8] fib6: introduce AVX512 lookup
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (8 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add a new lookup implementation for the FIB6 trie algorithm using
the AVX512 instruction set.
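
A usage sketch (not part of the patch): selecting the AVX512 trie lookup once
and then resolving a burst of IPv6 addresses; in practice the selection would
be done once at initialisation time, and "fib6" plus the addresses in "ips"
are assumed to come from the caller:

#include <rte_fib6.h>

static void
fib6_avx512_lookup_burst(struct rte_fib6 *fib6,
	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE], uint64_t *next_hops, int n)
{
	/* prefer the AVX512 trie lookup, keep the scalar one if unavailable */
	if (rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_VECTOR_AVX512) < 0)
		rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR);

	rte_fib6_lookup_bulk(fib6, ips, next_hops, n);
}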

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile      |  10 ++
 lib/librte_fib/meson.build   |  13 ++
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  21 +++
 lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h |  20 +++
 6 files changed, 335 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 3958da106..761c7c847 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
 		CFLAGS_dir24_8_avx512.o += -mavx512f
 		CFLAGS_dir24_8_avx512.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
+			CFLAGS_trie_avx512.o += -mavx512f
+			CFLAGS_trie_avx512.o += -mavx512dq
+			CFLAGS_trie_avx512.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
+		endif
 	endif
 endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index d96ff0288..98c8752be 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -13,6 +13,8 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+		sources += files('trie_avx512.c')
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -20,6 +22,17 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
 
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index e029c7624..303be55c1 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -60,7 +60,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938df..d0233ad01 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 000000000..b1c9e4ede
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 000000000..ef8c7f0e3
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v4 8/8] app/testfib: add support for different lookup functions
  2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
                       ` (9 preceding siblings ...)
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-08 20:16     ` Vladimir Medvedkin
  10 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-08 20:16 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.
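
An illustrative invocation (the binary name and the remaining options are
placeholders; only -v is defined by this patch) could look like:

    ./testfib <routes/lookup-ips options> -v v

which runs the measurement with the AVX512 vector lookup; -v s2 or -v s3
select the alternative scalar implementations for the DIR24_8 FIB, while the
TRIE based IPv6 FIB accepts only s and v.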

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b16e..9c2d41387 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +868,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1065,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup
  2020-07-06 19:21     ` Thomas Monjalon
@ 2020-07-08 20:19       ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-08 20:19 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, konstantin.ananyev, bruce.richardson

Hi Thomas,

On 06/07/2020 20:21, Thomas Monjalon wrote:
> 19/05/2020 14:12, Vladimir Medvedkin:
>> --- a/lib/librte_fib/meson.build
>> +++ b/lib/librte_fib/meson.build
>> +if dpdk_conf.has('RTE_ARCH_X86') and cc.has_argument('-mavx512f')
>> +	if cc.has_argument('-mavx512dq')
>> +		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
>> +			'dir24_8_avx512.c',
>> +			dependencies: static_rte_eal,
>> +			c_args: cflags + ['-mavx512f'] + ['-mavx512dq'])
>> +		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
>> +		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
>> +	endif
>> +endif
> 
> I don't want to try understanding what this hack is.
> But please add comments around it, so we will understand why
> compilation fails:
> 
> In file included from ../../dpdk/lib/librte_fib/dir24_8_avx512.c:5:
> ../../dpdk/lib/librte_eal/x86/include/rte_vect.h:97:18: error: expected declaration specifiers or ‘...’ before ‘(’ token
>     97 | #define ZMM_SIZE (sizeof(__x86_zmm_t))
>        |                  ^
> 
> 

I sent v4 with slightly reworked meson.build, please check compilation.

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
@ 2020-07-09 13:48       ` David Marchand
  2020-07-09 14:52         ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-07-09 13:48 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

On Wed, Jul 8, 2020 at 10:17 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> New data type to manipulate 512 bit AVX values.

The title mentions a "zmm" type that is not added by this patch.

Maybe instead, "eal/x86: introduce AVX 512-bit type"


>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/librte_eal/x86/include/rte_vect.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
> index df5a60762..ae59126bc 100644
> --- a/lib/librte_eal/x86/include/rte_vect.h
> +++ b/lib/librte_eal/x86/include/rte_vect.h
> @@ -13,6 +13,7 @@
>
>  #include <stdint.h>
>  #include <rte_config.h>
> +#include <rte_common.h>
>  #include "generic/rte_vect.h"
>
>  #if (defined(__ICC) || \
> @@ -90,6 +91,26 @@ __extension__ ({                 \
>  })
>  #endif /* (defined(__ICC) && __ICC < 1210) */
>
> +#ifdef __AVX512F__
> +
> +typedef __m512i __x86_zmm_t;

We don't need this interim type, using the native __m512 is enough afaics.

Looking at the whole applied series:
$ git grep -lw __x86_zmm_t
lib/librte_eal/x86/include/rte_vect.h


> +
> +#define        ZMM_SIZE        (sizeof(__x86_zmm_t))
> +#define        ZMM_MASK        (ZMM_SIZE - 1)

Macros in a public header need a RTE_ prefix + this is x86 specific,
then RTE_X86_.

Looking at the whole applied series:
$ git grep -lw ZMM_SIZE
lib/librte_eal/x86/include/rte_vect.h
$ git grep -lw ZMM_MASK
lib/librte_eal/x86/include/rte_vect.h

So I wonder if we need to export it or we can instead just #undef
after the struct definition.


> +
> +typedef union __rte_x86_zmm  {
> +       __x86_zmm_t      z;
> +       ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
> +       xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
> +       uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
> +       uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
> +       uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
> +       uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
> +       double   pd[ZMM_SIZE / sizeof(double)];
> +} __rte_aligned(ZMM_SIZE) __rte_x86_zmm_t;

I don't understand this forced alignment statement.
Would not natural alignment be enough, since all fields in this union
have the same size?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit
  2020-07-09 13:48       ` David Marchand
@ 2020-07-09 14:52         ` Medvedkin, Vladimir
  2020-07-09 15:20           ` David Marchand
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-09 14:52 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

Hi David,

Thanks for review

On 09/07/2020 14:48, David Marchand wrote:
> On Wed, Jul 8, 2020 at 10:17 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> New data type to manipulate 512 bit AVX values.
> 
> The title mentions a "zmm" type that is not added by this patch.
> 
> Maybe instead, "eal/x86: introduce AVX 512-bit type"
> 

Agree

> 
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>   lib/librte_eal/x86/include/rte_vect.h | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
>> index df5a60762..ae59126bc 100644
>> --- a/lib/librte_eal/x86/include/rte_vect.h
>> +++ b/lib/librte_eal/x86/include/rte_vect.h
>> @@ -13,6 +13,7 @@
>>
>>   #include <stdint.h>
>>   #include <rte_config.h>
>> +#include <rte_common.h>
>>   #include "generic/rte_vect.h"
>>
>>   #if (defined(__ICC) || \
>> @@ -90,6 +91,26 @@ __extension__ ({                 \
>>   })
>>   #endif /* (defined(__ICC) && __ICC < 1210) */
>>
>> +#ifdef __AVX512F__
>> +
>> +typedef __m512i __x86_zmm_t;
> 
> We don't need this interim type, using the native __m512 is enough afaics.
> 

Agree

> Looking at the whole applied series:
> $ git grep -lw __x86_zmm_t
> lib/librte_eal/x86/include/rte_vect.h
> 
> 
>> +
>> +#define        ZMM_SIZE        (sizeof(__x86_zmm_t))
>> +#define        ZMM_MASK        (ZMM_SIZE - 1)
> 
> Macros in a public header need a RTE_ prefix + this is x86 specific,
> then RTE_X86_.
> 
> Looking at the whole applied series:
> $ git grep -lw ZMM_SIZE
> lib/librte_eal/x86/include/rte_vect.h
> $ git grep -lw ZMM_MASK
> lib/librte_eal/x86/include/rte_vect.h
> 
> So I wonder if we need to export it or we can instead just #undef
> after the struct definition.

I think it's better to undef it

> 
> 
>> +
>> +typedef union __rte_x86_zmm  {
>> +       __x86_zmm_t      z;
>> +       ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
>> +       xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
>> +       uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
>> +       uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
>> +       uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
>> +       uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
>> +       double   pd[ZMM_SIZE / sizeof(double)];
>> +} __rte_aligned(ZMM_SIZE) __rte_x86_zmm_t;
> 
> I don't understand this forced alignment statement.
> Would not natural alignment be enough, since all fields in this union
> have the same size?
> 

Some compilers won't align this union
https://mails.dpdk.org/archives/dev/2020-March/159591.html
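
As a small illustration (hypothetical check, not part of the patch, assuming a
build with -mavx512f so that the type is defined): the explicit attribute is
what makes a compile-time assertion like the one below hold with every
supported compiler, while the natural alignment of the members alone was not
always honoured:

	#include <stdalign.h>
	#include <rte_vect.h>

	/* 512-bit loads/stores through the .z member need 64-byte alignment */
	_Static_assert(alignof(__rte_x86_zmm_t) == 64,
			"__rte_x86_zmm_t must be 64-byte aligned");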

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit
  2020-07-09 14:52         ` Medvedkin, Vladimir
@ 2020-07-09 15:20           ` David Marchand
  0 siblings, 0 replies; 199+ messages in thread
From: David Marchand @ 2020-07-09 15:20 UTC (permalink / raw)
  To: Medvedkin, Vladimir; +Cc: dev, Ananyev, Konstantin, Bruce Richardson

On Thu, Jul 9, 2020 at 4:52 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
> >> +
> >> +#define        ZMM_SIZE        (sizeof(__x86_zmm_t))
> >> +#define        ZMM_MASK        (ZMM_SIZE - 1)
> >
> > Macros in a public header need a RTE_ prefix + this is x86 specific,
> > then RTE_X86_.
> >
> > Looking at the whole applied series:
> > $ git grep -lw ZMM_SIZE
> > lib/librte_eal/x86/include/rte_vect.h
> > $ git grep -lw ZMM_MASK
> > lib/librte_eal/x86/include/rte_vect.h
> >
> > So I wonder if we need to export it or we can instead just #undef
> > after the struct definition.
>
> I think it's better to undef it

Even if you undef the macro, please still prefix it.
This is to avoid conflicts with macros defined before including this
rte_vect.h header.


>
> >
> >
> >> +
> >> +typedef union __rte_x86_zmm  {
> >> +       __x86_zmm_t      z;
> >> +       ymm_t    y[ZMM_SIZE / sizeof(ymm_t)];
> >> +       xmm_t    x[ZMM_SIZE / sizeof(xmm_t)];
> >> +       uint8_t  u8[ZMM_SIZE / sizeof(uint8_t)];
> >> +       uint16_t u16[ZMM_SIZE / sizeof(uint16_t)];
> >> +       uint32_t u32[ZMM_SIZE / sizeof(uint32_t)];
> >> +       uint64_t u64[ZMM_SIZE / sizeof(uint64_t)];
> >> +       double   pd[ZMM_SIZE / sizeof(double)];
> >> +} __rte_aligned(ZMM_SIZE) __rte_x86_zmm_t;
> >
> > I don't understand this forced alignment statement.
> > Would not natural alignment be enough, since all fields in this union
> > have the same size?
> >
>
> Some compilers won't align this union
> https://mails.dpdk.org/archives/dev/2020-March/159591.html

Ok, interesting, I will try to keep in mind.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 0/8] fib: implement AVX512 vector lookup
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                           ` (8 more replies)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                         ` (7 subsequent siblings)
  8 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for __x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 +++++-
 lib/librte_eal/x86/include/rte_vect.h |  19 ++
 lib/librte_fib/Makefile               |  24 +++
 lib/librte_fib/dir24_8.c              | 281 +++++---------------------
 lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
 lib/librte_fib/dir24_8_avx512.h       |  24 +++
 lib/librte_fib/meson.build            |  31 +++
 lib/librte_fib/rte_fib.c              |  21 +-
 lib/librte_fib/rte_fib.h              |  24 +++
 lib/librte_fib/rte_fib6.c             |  20 +-
 lib/librte_fib/rte_fib6.h             |  22 ++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 | 161 +++------------
 lib/librte_fib/trie.h                 | 119 ++++++++++-
 lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h          |  20 ++
 17 files changed, 1114 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 21:49         ` Thomas Monjalon
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                         ` (6 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

New data type to manipulate 512 bit AVX values.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a60762..1b2af7138 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -90,6 +91,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define	RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define	RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm  {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 2/8] fib: make lookup function type configurable
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                         ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add type argument to dir24_8_get_lookup_fn()
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.
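
A minimal sketch of using the new API (illustrative only; the fib object is
assumed to have been created beforehand with rte_fib_create()):

	#include <rte_fib.h>

	static int
	select_inline_lookup(struct rte_fib *fib)
	{
		/*
		 * Request the inlined scalar implementation; on failure the
		 * lookup function previously selected for this FIB (the
		 * macro-based scalar one by default) stays in effect.
		 */
		return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_INLINE);
	}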

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 +++++++++++++++++++-----------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 23 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3cbc..825d061fd 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c0c..53c5dd29e 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e0908084f..b9f6efbb1 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774d2..892898c6f 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +202,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417d2..216af66b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 3/8] fib: move lookup definition into the header file
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (2 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                         ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +--------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061fd..9d74653cf 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd29e..56d038951 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 4/8] fib: introduce AVX512 lookup
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (3 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                         ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for DIR24_8 algorithm using
AVX512 instruction set
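
A minimal usage sketch (illustrative only): an application can request the
vector implementation at run time and keep the scalar lookup when it is not
available:

	#include <stdio.h>
	#include <rte_fib.h>

	static void
	try_avx512_lookup(struct rte_fib *fib)
	{
		/*
		 * rte_fib_set_lookup_fn() returns -EINVAL when the library is
		 * built without AVX512 support or the CPU lacks AVX512F; the
		 * previously selected (scalar) lookup function then remains
		 * in use.
		 */
		if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) != 0)
			printf("AVX512 FIB lookup not available, keeping scalar\n");
	}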

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile         |  14 +++
 lib/librte_fib/dir24_8.c        |  24 +++++
 lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h |  24 +++++
 lib/librte_fib/meson.build      |  18 ++++
 lib/librte_fib/rte_fib.h        |   3 +-
 6 files changed, 247 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a495b..3958da106 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
+		CFLAGS_dir24_8_avx512.o += -mavx512f
+		CFLAGS_dir24_8_avx512.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+	endif
+endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653cf..0d7bf2c9e 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 000000000..43dba28cf
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 000000000..1d3c2b931
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828fbe..d96ff0288 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,21 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
+
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 892898c6f..4a348670d 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -61,7 +61,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 5/8] fib6: make lookup function type configurable
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (4 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                         ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add type argument to trie_get_lookup_fn()
Now it only supports RTE_FIB6_TRIE_SCALAR

Add new rte_fib6_set_lookup_fn() - the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 21 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db844..566cd5fb6 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23a8..e029c7624 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -59,6 +59,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +205,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66b3..9d1e181b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add4f..63c519a09 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5ae..0d5ef9a9f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 6/8] fib6: move lookup definition into the header file
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (5 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/trie.c | 121 ------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a09..136e938df 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a9f..663c7a90f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 7/8] fib6: introduce AVX512 lookup
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (6 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set
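
Illustration only (not part of the patch; assumes "fib" is an
RTE_FIB6_TRIE FIB and the ips/next_hops buffers are filled by the
caller): select the AVX512 lookup at runtime and fall back to the
scalar one if the CPU or the build does not support it.

	uint8_t ips[32][RTE_FIB6_IPV6_ADDR_SIZE];
	uint64_t next_hops[32];

	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) < 0)
		rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
	rte_fib6_lookup_bulk(fib, ips, next_hops, 32);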

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/Makefile      |  10 ++
 lib/librte_fib/meson.build   |  13 ++
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  21 +++
 lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h |  20 +++
 6 files changed, 335 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 3958da106..761c7c847 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
 		CFLAGS_dir24_8_avx512.o += -mavx512f
 		CFLAGS_dir24_8_avx512.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
+			CFLAGS_trie_avx512.o += -mavx512f
+			CFLAGS_trie_avx512.o += -mavx512dq
+			CFLAGS_trie_avx512.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
+		endif
 	endif
 endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index d96ff0288..98c8752be 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -13,6 +13,8 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+		sources += files('trie_avx512.c')
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -20,6 +22,17 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
 
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index e029c7624..303be55c1 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -60,7 +60,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938df..d0233ad01 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 000000000..b1c9e4ede
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside a branch to make the compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 000000000..ef8c7f0e3
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v5 8/8] app/testfib: add support for different lookup functions
  2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
                         ` (7 preceding siblings ...)
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-10 14:46       ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-10 14:46 UTC (permalink / raw)
  To: dev; +Cc: konstantin.ananyev, bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.
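
For dir24_8 (ipv4) the new option maps "s1"/"s" to the default scalar
macro lookup, "s2" to RTE_FIB_DIR24_8_SCALAR_INLINE, "s3" to
RTE_FIB_DIR24_8_SCALAR_UNI and "v" to RTE_FIB_DIR24_8_VECTOR_AVX512;
for the ipv6 trie FIB only the default scalar ("s"/"s1") and "v" take
effect (see the parse_opts()/run_v4()/run_v6() hunks below). A
hypothetical invocation (binary name and EAL arguments depend on the
build and test setup) selecting the vector ipv6 lookup could look like:

	./testfib <EAL args> -- -6 -v v <other test-fib options>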

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b16e..9c2d41387 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +868,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1065,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-07-10 21:49         ` Thomas Monjalon
  2020-07-13 10:23           ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-10 21:49 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, konstantin.ananyev, bruce.richardson, david.marchand, jerinj, mdr

Please Cc those who participated in the review previously.
Adding Ray, Jerin, David.

10/07/2020 16:46, Vladimir Medvedkin:
> New data type to manipulate 512 bit AVX values.
[...]
> +#ifdef __AVX512F__
> +
> +#define	RTE_X86_ZMM_SIZE	(sizeof(__m512i))
> +#define	RTE_X86_ZMM_MASK	(ZMM_SIZE - 1)

Why do you use tabs?

> +
> +typedef union __rte_x86_zmm  {

Double space

> +	__m512i	 z;
> +	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
> +	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
> +	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
> +	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
> +	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
> +	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
> +	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
> +
> +#endif /* __AVX512F__ */

You were supposed to undef the macros above.

Vladimir, after your recent contributions,
it seems you are not interested in details.
Please understand we have to maintain a project with consistency
and good doc. Please pay attention, thanks.



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-10 21:49         ` Thomas Monjalon
@ 2020-07-13 10:23           ` Medvedkin, Vladimir
  2020-07-13 10:25             ` Thomas Monjalon
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-13 10:23 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, konstantin.ananyev, bruce.richardson, david.marchand, jerinj, mdr

Hi Thomas,

On 10/07/2020 22:49, Thomas Monjalon wrote:
> Please Cc those who participated in the review previously.
> Adding Ray, Jerin, David.
> 
> 10/07/2020 16:46, Vladimir Medvedkin:
>> New data type to manipulate 512 bit AVX values.
> [...]
>> +#ifdef __AVX512F__
>> +
>> +#define	RTE_X86_ZMM_SIZE	(sizeof(__m512i))
>> +#define	RTE_X86_ZMM_MASK	(ZMM_SIZE - 1)
> 
> Why do you use tabs?

Will resend v6

> 
>> +
>> +typedef union __rte_x86_zmm  {
> 
> Double space

Will fix in v6

> 
>> +	__m512i	 z;
>> +	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
>> +	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
>> +	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
>> +	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
>> +	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
>> +	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
>> +	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
>> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
>> +
>> +#endif /* __AVX512F__ */
> 
> You were supposed to undef the macros above.

It was intentional. It could be used later by other libs, like XMM_SIZE:
git grep -lw XMM_SIZE
lib/librte_acl/acl_gen.c
lib/librte_acl/acl_run.h
lib/librte_acl/rte_acl.h
lib/librte_eal/arm/include/rte_vect.h
lib/librte_eal/ppc/include/rte_vect.h
lib/librte_eal/x86/include/rte_vect.h
lib/librte_hash/rte_thash.h

> 
> Vladimir, after your recent contributions,
> it seems you are not interested in details.
> Please understand we have to maintain a project with consistency
> and good doc. Please pay attention, thanks.
> 
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 10:23           ` Medvedkin, Vladimir
@ 2020-07-13 10:25             ` Thomas Monjalon
  2020-07-13 10:39               ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-13 10:25 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: dev, konstantin.ananyev, bruce.richardson, david.marchand, jerinj, mdr

13/07/2020 12:23, Medvedkin, Vladimir:
> Hi Thomas,
> 
> On 10/07/2020 22:49, Thomas Monjalon wrote:
> > Please Cc those who participated in the review previously.
> > Adding Ray, Jerin, David.
> > 
> > 10/07/2020 16:46, Vladimir Medvedkin:
> >> +	__m512i	 z;
> >> +	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
> >> +	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
> >> +	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
> >> +	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
> >> +	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
> >> +	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
> >> +	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
> >> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
> >> +
> >> +#endif /* __AVX512F__ */
> > 
> > You were supposed to undef the macros above.
> 
> It was intentional. It could be used later by other libs, like XMM_SIZE:
> git grep -lw XMM_SIZE
> lib/librte_acl/acl_gen.c
> lib/librte_acl/acl_run.h
> lib/librte_acl/rte_acl.h
> lib/librte_eal/arm/include/rte_vect.h
> lib/librte_eal/ppc/include/rte_vect.h
> lib/librte_eal/x86/include/rte_vect.h
> lib/librte_hash/rte_thash.h

OK. Was it agreed with David to NOT undef?
I may have missed this part.



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 10:25             ` Thomas Monjalon
@ 2020-07-13 10:39               ` Medvedkin, Vladimir
  2020-07-13 10:45                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-13 10:39 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, konstantin.ananyev, bruce.richardson, david.marchand, jerinj, mdr



On 13/07/2020 11:25, Thomas Monjalon wrote:
> 13/07/2020 12:23, Medvedkin, Vladimir:
>> Hi Thomas,
>>
>> On 10/07/2020 22:49, Thomas Monjalon wrote:
>>> Please Cc those who participated in the review previously.
>>> Adding Ray, Jerin, David.
>>>
>>> 10/07/2020 16:46, Vladimir Medvedkin:
>>>> +	__m512i	 z;
>>>> +	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
>>>> +	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
>>>> +	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
>>>> +	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
>>>> +	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
>>>> +	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
>>>> +	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
>>>> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
>>>> +
>>>> +#endif /* __AVX512F__ */
>>>
>>> You were supposed to undef the macros above.
>>
>> It was intentional. It could be used later by other libs, like XMM_SIZE:
>> git grep -lw XMM_SIZE
>> lib/librte_acl/acl_gen.c
>> lib/librte_acl/acl_run.h
>> lib/librte_acl/rte_acl.h
>> lib/librte_eal/arm/include/rte_vect.h
>> lib/librte_eal/ppc/include/rte_vect.h
>> lib/librte_eal/x86/include/rte_vect.h
>> lib/librte_hash/rte_thash.h
> 
> OK. Was it agreed with David to NOT undef?
> I may have missed this part.
> 

As I can understand David had no objections to export it. I think it 
could be useful for some libs to have those macros. Please correct me if 
I'm wrong.

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 10:39               ` Medvedkin, Vladimir
@ 2020-07-13 10:45                 ` Ananyev, Konstantin
  0 siblings, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-13 10:45 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Thomas Monjalon
  Cc: dev, Richardson, Bruce, david.marchand, jerinj, mdr

> 
> On 13/07/2020 11:25, Thomas Monjalon wrote:
> > 13/07/2020 12:23, Medvedkin, Vladimir:
> >> Hi Thomas,
> >>
> >> On 10/07/2020 22:49, Thomas Monjalon wrote:
> >>> Please Cc those who participated in the review previously.
> >>> Adding Ray, Jerin, David.
> >>>
> >>> 10/07/2020 16:46, Vladimir Medvedkin:
> >>>> +	__m512i	 z;
> >>>> +	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
> >>>> +	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
> >>>> +	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
> >>>> +	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
> >>>> +	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
> >>>> +	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
> >>>> +	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
> >>>> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
> >>>> +
> >>>> +#endif /* __AVX512F__ */
> >>>
> >>> You were supposed to undef the macros above.
> >>
> >> It was intentional. It could be used later by other libs, like XMM_SIZE:
> >> git grep -lw XMM_SIZE
> >> lib/librte_acl/acl_gen.c
> >> lib/librte_acl/acl_run.h
> >> lib/librte_acl/rte_acl.h
> >> lib/librte_eal/arm/include/rte_vect.h
> >> lib/librte_eal/ppc/include/rte_vect.h
> >> lib/librte_eal/x86/include/rte_vect.h
> >> lib/librte_hash/rte_thash.h
> >
> > OK. Was it agreed with David to NOT undef?
> > I may have missed this part.
> >
> 
> As I can understand David had no objections to export it. I think it
> could be useful for some libs to have those macros. 

+1

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                             ` (9 more replies)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                           ` (7 subsequent siblings)
  8 siblings, 10 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 +++++-
 lib/librte_eal/x86/include/rte_vect.h |  19 ++
 lib/librte_fib/Makefile               |  24 +++
 lib/librte_fib/dir24_8.c              | 281 +++++---------------------
 lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
 lib/librte_fib/dir24_8_avx512.h       |  24 +++
 lib/librte_fib/meson.build            |  31 +++
 lib/librte_fib/rte_fib.c              |  21 +-
 lib/librte_fib/rte_fib.h              |  24 +++
 lib/librte_fib/rte_fib6.c             |  20 +-
 lib/librte_fib/rte_fib6.h             |  22 ++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 | 161 +++------------
 lib/librte_fib/trie.h                 | 119 ++++++++++-
 lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h          |  20 ++
 17 files changed, 1114 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:33           ` David Marchand
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                           ` (6 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

New data type to manipulate 512 bit AVX values.
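
As an illustration (not taken from the patch, but modeled on the
trie_avx512.c code later in the series; requires building with AVX512F
support): constants can be written through the scalar views of the
union and consumed as a single 512-bit register through the .z member.

	#include <rte_vect.h>

	static const __rte_x86_zmm_t perm_idxes = {
		.u32 = { 0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15 },
	};

	static __m512i
	permute_dwords(__m512i v)
	{
		return _mm512_permutexvar_epi32(perm_idxes.z, v);
	}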

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a60762..30dcfd5e7 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -90,6 +91,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 2/8] fib: make lookup function type configurable
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                           ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() so that the user can change the
lookup function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 +++++++++++++++++++-----------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 23 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3cbc..825d061fd 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c0c..53c5dd29e 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e0908084f..b9f6efbb1 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774d2..892898c6f 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +202,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417d2..216af66b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 3/8] fib: move lookup definition into the header file
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (2 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                           ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +--------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061fd..9d74653cf 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd29e..56d038951 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 4/8] fib: introduce AVX512 lookup
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (3 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                           ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.
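
As a rough usage sketch (illustrative, not part of this patch), an
application can opt in to this path at run time with the
rte_fib_set_lookup_fn() API added earlier in the series, falling back
to the scalar lookup when AVX512 is unavailable:

/* Sketch only: prefer the AVX512 DIR24_8 lookup, fall back to scalar. */
#include <rte_fib.h>

static void
select_dir24_8_lookup(struct rte_fib *fib)
{
	/* -EINVAL when built without AVX512 or the CPU lacks AVX512F */
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) != 0)
		rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}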

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile         |  14 +++
 lib/librte_fib/dir24_8.c        |  24 +++++
 lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h |  24 +++++
 lib/librte_fib/meson.build      |  18 ++++
 lib/librte_fib/rte_fib.h        |   3 +-
 6 files changed, 247 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a495b..3958da106 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
+		CFLAGS_dir24_8_avx512.o += -mavx512f
+		CFLAGS_dir24_8_avx512.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+	endif
+endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653cf..0d7bf2c9e 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 000000000..43dba28cf
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 000000000..1d3c2b931
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828fbe..d96ff0288 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,21 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
+
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 892898c6f..4a348670d 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -61,7 +61,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 5/8] fib6: make lookup function type configurable
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (4 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                           ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() so that the user can change the
lookup function type at runtime.
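
A minimal usage sketch (illustrative, not part of this patch); at this
point in the series only the scalar type exists, the AVX512 type is
added in a later patch:

/* Sketch only: explicitly (re)select the scalar trie lookup. */
#include <rte_fib6.h>

static int
force_scalar_trie_lookup(struct rte_fib6 *fib)
{
	/* returns -EINVAL for a non-TRIE FIB or an unknown lookup type */
	return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}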

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 21 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db844..566cd5fb6 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23a8..e029c7624 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -59,6 +59,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +205,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66b3..9d1e181b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add4f..63c519a09 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5ae..0d5ef9a9f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 6/8] fib6: move lookup definition into the header file
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (5 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 ------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a09..136e938df 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a9f..663c7a90f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 7/8] fib6: introduce AVX512 lookup
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (6 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the FIB6 trie algorithm using the
AVX512 instruction set.
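
As with the IPv4 patch, a rough usage sketch (illustrative, not part of
this patch) of selecting this path at run time via
rte_fib6_set_lookup_fn(), with a fallback to the scalar lookup:

/* Sketch only: prefer the AVX512 trie lookup, fall back to scalar. */
#include <rte_fib6.h>

static void
select_trie_lookup(struct rte_fib6 *fib)
{
	/* -EINVAL when built without AVX512 or the CPU lacks AVX512F */
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) != 0)
		rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}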

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile      |  10 ++
 lib/librte_fib/meson.build   |  13 ++
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  21 +++
 lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h |  20 +++
 6 files changed, 335 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 3958da106..761c7c847 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
 		CFLAGS_dir24_8_avx512.o += -mavx512f
 		CFLAGS_dir24_8_avx512.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
+			CFLAGS_trie_avx512.o += -mavx512f
+			CFLAGS_trie_avx512.o += -mavx512dq
+			CFLAGS_trie_avx512.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
+		endif
 	endif
 endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index d96ff0288..98c8752be 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -13,6 +13,8 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+		sources += files('trie_avx512.c')
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -20,6 +22,17 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
 
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index e029c7624..303be55c1 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -60,7 +60,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938df..d0233ad01 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 000000000..b1c9e4ede
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 000000000..ef8c7f0e3
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v6 8/8] app/testfib: add support for different lookup functions
  2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
                           ` (7 preceding siblings ...)
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-13 11:11         ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b16e..9c2d41387 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +868,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1065,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-07-13 11:33           ` David Marchand
  2020-07-13 11:44             ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-07-13 11:33 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson

On Mon, Jul 13, 2020 at 1:11 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> New data type to manipulate 512 bit AVX values.
>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
> index df5a60762..30dcfd5e7 100644
> --- a/lib/librte_eal/x86/include/rte_vect.h
> +++ b/lib/librte_eal/x86/include/rte_vect.h
> @@ -13,6 +13,7 @@
>
>  #include <stdint.h>
>  #include <rte_config.h>
> +#include <rte_common.h>
>  #include "generic/rte_vect.h"
>
>  #if (defined(__ICC) || \
> @@ -90,6 +91,24 @@ __extension__ ({                 \
>  })
>  #endif /* (defined(__ICC) && __ICC < 1210) */
>
> +#ifdef __AVX512F__
> +
> +#define RTE_X86_ZMM_SIZE       (sizeof(__m512i))
> +#define RTE_X86_ZMM_MASK       (ZMM_SIZE - 1)

Please fix:
#define RTE_X86_ZMM_MASK       (RTE_X86_ZMM_SIZE - 1)


> +
> +typedef union __rte_x86_zmm {
> +       __m512i  z;
> +       ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
> +       xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
> +       uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
> +       uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
> +       uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
> +       uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
> +       double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
> +
> +#endif /* __AVX512F__ */
> +
>  #ifdef __cplusplus
>  }
>  #endif
> --
> 2.17.1
>


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 11:33           ` David Marchand
@ 2020-07-13 11:44             ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-13 11:44 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson



On 13/07/2020 12:33, David Marchand wrote:
> On Mon, Jul 13, 2020 at 1:11 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> New data type to manipulate 512 bit AVX values.
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>   lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
>>   1 file changed, 19 insertions(+)
>>
>> diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
>> index df5a60762..30dcfd5e7 100644
>> --- a/lib/librte_eal/x86/include/rte_vect.h
>> +++ b/lib/librte_eal/x86/include/rte_vect.h
>> @@ -13,6 +13,7 @@
>>
>>   #include <stdint.h>
>>   #include <rte_config.h>
>> +#include <rte_common.h>
>>   #include "generic/rte_vect.h"
>>
>>   #if (defined(__ICC) || \
>> @@ -90,6 +91,24 @@ __extension__ ({                 \
>>   })
>>   #endif /* (defined(__ICC) && __ICC < 1210) */
>>
>> +#ifdef __AVX512F__
>> +
>> +#define RTE_X86_ZMM_SIZE       (sizeof(__m512i))
>> +#define RTE_X86_ZMM_MASK       (ZMM_SIZE - 1)
> 
> Please fix:
> #define RTE_X86_ZMM_MASK       (RTE_X86_ZMM_SIZE - 1)
> 

Oh, thanks!

> 
>> +
>> +typedef union __rte_x86_zmm {
>> +       __m512i  z;
>> +       ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
>> +       xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
>> +       uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
>> +       uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
>> +       uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
>> +       uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
>> +       double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
>> +} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
>> +
>> +#endif /* __AVX512F__ */
>> +
>>   #ifdef __cplusplus
>>   }
>>   #endif
>> --
>> 2.17.1
>>
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 0/8] fib: implement AVX512 vector lookup
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                               ` (8 more replies)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                             ` (8 subsequent siblings)
  9 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                   |  58 +++++-
 lib/librte_eal/x86/include/rte_vect.h |  19 ++
 lib/librte_fib/Makefile               |  24 +++
 lib/librte_fib/dir24_8.c              | 281 +++++---------------------
 lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
 lib/librte_fib/dir24_8_avx512.h       |  24 +++
 lib/librte_fib/meson.build            |  31 +++
 lib/librte_fib/rte_fib.c              |  21 +-
 lib/librte_fib/rte_fib.h              |  24 +++
 lib/librte_fib/rte_fib6.c             |  20 +-
 lib/librte_fib/rte_fib6.h             |  22 ++
 lib/librte_fib/rte_fib_version.map    |   2 +
 lib/librte_fib/trie.c                 | 161 +++------------
 lib/librte_fib/trie.h                 | 119 ++++++++++-
 lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h          |  20 ++
 17 files changed, 1114 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                             ` (7 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

New data type to manipulate 512 bit AVX values.
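
For illustration only (not part of the patch), the union lets AVX512 code
read back individual lanes of a 512-bit register, e.g.:

#include <rte_vect.h>

/* hypothetical helper, requires an AVX512F-enabled build */
static inline uint32_t
zmm_first_u32_lane(__m512i v)
{
	__rte_x86_zmm_t t;

	t.z = v;
	return t.u32[0];	/* lowest 32-bit lane */
}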

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a60762..64383c360 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -90,6 +91,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-16 11:51             ` Ananyev, Konstantin
  2020-07-16 14:32             ` Thomas Monjalon
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                             ` (6 subsequent siblings)
  9 siblings, 2 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - user can change the lookup
function type at runtime.
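
Illustrative snippet (not part of the patch): on -EINVAL the previously
selected lookup function is left in place.

int ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_UNI);
if (ret == -EINVAL)
	printf("lookup type not supported, keeping the default\n");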

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 +++++++++++++++++++-----------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 23 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3cbc..825d061fd 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c0c..53c5dd29e 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e0908084f..b9f6efbb1 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774d2..892898c6f 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,12 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +202,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417d2..216af66b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 3/8] fib: move lookup definition into the header file
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (2 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                             ` (5 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +--------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061fd..9d74653cf 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd29e..56d038951 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 4/8] fib: introduce AVX512 lookup
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (3 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                             ` (4 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.
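
Each of the 16 lanes handled by the vector code below performs the same
steps as the scalar path already present in dir24_8.h. A rough
single-address equivalent (hypothetical helper, 4-byte next hop entries):

static inline uint64_t
dir24_8_lookup_one_4b(const struct dir24_8_tbl *dp, uint32_t ip)
{
	/* tbl24 is indexed by the 24 most significant bits of ip */
	uint64_t ent = ((const uint32_t *)dp->tbl24)[ip >> 8];

	/* LSB set: the entry points into a tbl8 group */
	if (is_entry_extended(ent))
		ent = ((const uint32_t *)dp->tbl8)[(uint8_t)ip +
			(ent >> 1) * DIR24_8_TBL8_GRP_NUM_ENT];

	return ent >> 1;	/* strip the extended-entry flag */
}

The AVX512 version performs the tbl24 gather and the tbl8 fixup for all
lanes at once, gathering from tbl8 only where the extended-entry mask is set.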

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile         |  14 +++
 lib/librte_fib/dir24_8.c        |  24 +++++
 lib/librte_fib/dir24_8_avx512.c | 165 ++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h |  24 +++++
 lib/librte_fib/meson.build      |  18 ++++
 lib/librte_fib/rte_fib.h        |   3 +-
 6 files changed, 247 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 1dd2a495b..3958da106 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -19,4 +19,18 @@ SRCS-$(CONFIG_RTE_LIBRTE_FIB) := rte_fib.c rte_fib6.c dir24_8.c trie.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_FIB)-include := rte_fib.h rte_fib6.h
 
+CC_AVX512F_SUPPORT=$(shell $(CC) -mavx512f -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512F__ && echo 1)
+
+CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512DQ__ && echo 1)
+
+ifeq ($(CC_AVX512F_SUPPORT), 1)
+	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
+		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
+		CFLAGS_dir24_8_avx512.o += -mavx512f
+		CFLAGS_dir24_8_avx512.o += -mavx512dq
+		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+	endif
+endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653cf..0d7bf2c9e 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 000000000..43dba28cf
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 000000000..1d3c2b931
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828fbe..d96ff0288 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,21 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
+
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 892898c6f..4a348670d 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -61,7 +61,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (4 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-16 11:53             ` Ananyev, Konstantin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                             ` (3 subsequent siblings)
  9 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() - user can change the lookup
function type at runtime.
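
Illustrative use (not part of the patch); only the scalar type exists at
this point in the series:

if (rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR) < 0)
	printf("lookup type not supported, keeping the default\n");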

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 21 +++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db844..566cd5fb6 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23a8..e029c7624 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -59,6 +59,10 @@ enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_8B
 };
 
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +205,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66b3..9d1e181b3 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add4f..63c519a09 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5ae..0d5ef9a9f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 6/8] fib6: move lookup definition into the header file
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (5 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
                             ` (2 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 ------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a09..136e938df 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a9f..663c7a90f 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 7/8] fib6: introduce AVX512 lookup
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (6 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  2020-07-13 22:19           ` [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup Stephen Hemminger
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the FIB6 trie algorithm using the
AVX512 instruction set.
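
Per lane, the vectorized code below mirrors the scalar LOOKUP_FUNC() path
from trie.h. A rough single-address equivalent (hypothetical helper,
4-byte next hop entries):

static inline uint64_t
trie_lookup_one_4b(const struct rte_trie_tbl *dp,
	const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE])
{
	uint64_t ent = ((const uint32_t *)dp->tbl24)[get_tbl24_idx(ip)];
	int j = 3;	/* bytes 0..2 are consumed by the tbl24 index */

	while (is_entry_extended(ent))
		ent = ((const uint32_t *)dp->tbl8)[ip[j++] +
			(ent >> 1) * TRIE_TBL8_GRP_NUM_ENT];

	return ent >> 1;
}

The AVX512 version first transposes 16 IPv6 addresses into 4-byte column
chunks so that each trie level can be resolved for all lanes with masked
gathers.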

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/Makefile      |  10 ++
 lib/librte_fib/meson.build   |  13 ++
 lib/librte_fib/rte_fib6.h    |   3 +-
 lib/librte_fib/trie.c        |  21 +++
 lib/librte_fib/trie_avx512.c | 269 +++++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h |  20 +++
 6 files changed, 335 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/lib/librte_fib/Makefile b/lib/librte_fib/Makefile
index 3958da106..761c7c847 100644
--- a/lib/librte_fib/Makefile
+++ b/lib/librte_fib/Makefile
@@ -25,12 +25,22 @@ grep -q __AVX512F__ && echo 1)
 CC_AVX512DQ_SUPPORT=$(shell $(CC) -mavx512dq -dM -E - </dev/null 2>&1 | \
 grep -q __AVX512DQ__ && echo 1)
 
+CC_AVX512BW_SUPPORT=$(shell $(CC) -mavx512bw -dM -E - </dev/null 2>&1 | \
+grep -q __AVX512BW__ && echo 1)
+
 ifeq ($(CC_AVX512F_SUPPORT), 1)
 	ifeq ($(CC_AVX512DQ_SUPPORT), 1)
 		SRCS-$(CONFIG_RTE_LIBRTE_FIB) += dir24_8_avx512.c
 		CFLAGS_dir24_8_avx512.o += -mavx512f
 		CFLAGS_dir24_8_avx512.o += -mavx512dq
 		CFLAGS_dir24_8.o += -DCC_DIR24_8_AVX512_SUPPORT
+		ifeq ($(CC_AVX512BW_SUPPORT), 1)
+			SRCS-$(CONFIG_RTE_LIBRTE_FIB) += trie_avx512.c
+			CFLAGS_trie_avx512.o += -mavx512f
+			CFLAGS_trie_avx512.o += -mavx512dq
+			CFLAGS_trie_avx512.o += -mavx512bw
+			CFLAGS_trie.o += -DCC_TRIE_AVX512_SUPPORT
+		endif
 	endif
 endif
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index d96ff0288..98c8752be 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -13,6 +13,8 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+		sources += files('trie_avx512.c')
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -20,6 +22,17 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
 
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index e029c7624..303be55c1 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -60,7 +60,8 @@ enum rte_fib_trie_nh_sz {
 };
 
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938df..d0233ad01 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 000000000..b1c9e4ede
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 000000000..ef8c7f0e3
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v7 8/8] app/testfib: add support for different lookup functions
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (7 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-07-13 11:56           ` Vladimir Medvedkin
  2020-07-13 22:19           ` [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup Stephen Hemminger
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-07-13 11:56 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Added -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b16e..9c2d41387 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +868,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1065,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
                             ` (8 preceding siblings ...)
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
@ 2020-07-13 22:19           ` Stephen Hemminger
  2020-07-14  7:31             ` Kinsella, Ray
  9 siblings, 1 reply; 199+ messages in thread
From: Stephen Hemminger @ 2020-07-13 22:19 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

On Mon, 13 Jul 2020 12:11:19 +0100
Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:

> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.
> 
> v6:
>  - style fixes
> 
> v5:
>  - prefix zmm macro in rte_vect.h with RTE_X86
>  - remove unnecessary typedef for _x86_zmm_t
>  - reword commit title
>  - fix typos
> 
> v4:
>  - use __rte_aligned() instead of using compiler attribute directly
>  - rework and add comments to meson.build
> 
> v3:
>  - separate out the AVX-512 code into a separate file
> 
> v2:
>  - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
>  - make runtime decision to use avx512 lookup
> 
> Vladimir Medvedkin (8):
>   eal/x86: introduce AVX 512-bit type
>   fib: make lookup function type configurable
>   fib: move lookup definition into the header file
>   fib: introduce AVX512 lookup
>   fib6: make lookup function type configurable
>   fib6: move lookup definition into the header file
>   fib6: introduce AVX512 lookup
>   app/testfib: add support for different lookup functions
> 
>  app/test-fib/main.c                   |  58 +++++-
>  lib/librte_eal/x86/include/rte_vect.h |  19 ++
>  lib/librte_fib/Makefile               |  24 +++
>  lib/librte_fib/dir24_8.c              | 281 +++++---------------------
>  lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
>  lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h       |  24 +++
>  lib/librte_fib/meson.build            |  31 +++
>  lib/librte_fib/rte_fib.c              |  21 +-
>  lib/librte_fib/rte_fib.h              |  24 +++
>  lib/librte_fib/rte_fib6.c             |  20 +-
>  lib/librte_fib/rte_fib6.h             |  22 ++
>  lib/librte_fib/rte_fib_version.map    |   2 +
>  lib/librte_fib/trie.c                 | 161 +++------------
>  lib/librte_fib/trie.h                 | 119 ++++++++++-
>  lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
>  lib/librte_fib/trie_avx512.h          |  20 ++
>  17 files changed, 1114 insertions(+), 372 deletions(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
>  create mode 100644 lib/librte_fib/trie_avx512.c
>  create mode 100644 lib/librte_fib/trie_avx512.h
> 

Did anyone else see the recent AVX512 discussion from Linus:
  "I hope AVX512 dies a painful death, and that Intel starts fixing real problems 
   instead of trying to create magic instructions to then create benchmarks that they can look good on. 

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-13 22:19           ` [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup Stephen Hemminger
@ 2020-07-14  7:31             ` Kinsella, Ray
  2020-07-14 14:38               ` Stephen Hemminger
  0 siblings, 1 reply; 199+ messages in thread
From: Kinsella, Ray @ 2020-07-14  7:31 UTC (permalink / raw)
  To: Stephen Hemminger, Vladimir Medvedkin
  Cc: dev, david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson



On 13/07/2020 23:19, Stephen Hemminger wrote:
> On Mon, 13 Jul 2020 12:11:19 +0100
> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> 
>> This patch series implements vectorized lookup using AVX512 for
>> ipv4 dir24_8 and ipv6 trie algorithms.
>> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
>> Added option to select lookup function type in testfib application.
>>
>> v6:
>>  - style fixes
>>
>> v5:
>>  - prefix zmm macro in rte_vect.h with RTE_X86
>>  - remove unnecessary typedef for _x86_zmm_t
>>  - reword commit title
>>  - fix typos
>>
>> v4:
>>  - use __rte_aligned() instead of using compiler attribute directly
>>  - rework and add comments to meson.build
>>
>> v3:
>>  - separate out the AVX-512 code into a separate file
>>
>> v2:
>>  - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
>>  - make runtime decision to use avx512 lookup
>>
>> Vladimir Medvedkin (8):
>>   eal/x86: introduce AVX 512-bit type
>>   fib: make lookup function type configurable
>>   fib: move lookup definition into the header file
>>   fib: introduce AVX512 lookup
>>   fib6: make lookup function type configurable
>>   fib6: move lookup definition into the header file
>>   fib6: introduce AVX512 lookup
>>   app/testfib: add support for different lookup functions
>>
>>  app/test-fib/main.c                   |  58 +++++-
>>  lib/librte_eal/x86/include/rte_vect.h |  19 ++
>>  lib/librte_fib/Makefile               |  24 +++
>>  lib/librte_fib/dir24_8.c              | 281 +++++---------------------
>>  lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
>>  lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
>>  lib/librte_fib/dir24_8_avx512.h       |  24 +++
>>  lib/librte_fib/meson.build            |  31 +++
>>  lib/librte_fib/rte_fib.c              |  21 +-
>>  lib/librte_fib/rte_fib.h              |  24 +++
>>  lib/librte_fib/rte_fib6.c             |  20 +-
>>  lib/librte_fib/rte_fib6.h             |  22 ++
>>  lib/librte_fib/rte_fib_version.map    |   2 +
>>  lib/librte_fib/trie.c                 | 161 +++------------
>>  lib/librte_fib/trie.h                 | 119 ++++++++++-
>>  lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
>>  lib/librte_fib/trie_avx512.h          |  20 ++
>>  17 files changed, 1114 insertions(+), 372 deletions(-)
>>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
>>  create mode 100644 lib/librte_fib/trie_avx512.c
>>  create mode 100644 lib/librte_fib/trie_avx512.h
>>
> 
> Did anyone else see the recent AVX512 discussion from Linus:
>   "I hope AVX512 dies a painful death, and that Intel starts fixing real problems 
>    instead of trying to create magic instructions to then create benchmarks that they can look good on. 

Yup - I saw this one.
Sweeping statements like these are good for provoking debate; the truth is generally more nuanced.
If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.

That is an interesting discussion; however, the reality is that the technology does exist and may be beneficial for Packet Processing. 

I would suggest we continue to apply the same logic governing the adoption of any technology by DPDK. 
When the technology is present and a clear benefit is shown, we use it with caution.

In the case of Vladimir's patch,
the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
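
Just to make that explicit switch concrete, a minimal opt-in sketch from
the application side could look like the code below. This is only an
illustration, not part of the patch: app_select_fib_lookup() is a made-up
helper, and the CPU check uses the standard rte_cpu_get_flag_enabled().

#include <stdio.h>
#include <rte_cpuflags.h>
#include <rte_fib.h>

/* Opt in to the AVX512 dir24_8 lookup only when the CPU supports it;
 * otherwise keep the scalar default. rte_fib_set_lookup_fn() returns 0
 * on success and -EINVAL when the requested type cannot be used.
 */
static void
app_select_fib_lookup(struct rte_fib *fib)
{
	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F))
		return;
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) != 0)
		printf("AVX512 FIB lookup not available, keeping scalar\n");
}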

Thanks, 

Ray K

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-14  7:31             ` Kinsella, Ray
@ 2020-07-14 14:38               ` Stephen Hemminger
  2020-07-15  9:47                 ` Thomas Monjalon
  0 siblings, 1 reply; 199+ messages in thread
From: Stephen Hemminger @ 2020-07-14 14:38 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: Vladimir Medvedkin, dev, david.marchand, jerinj, thomas,
	konstantin.ananyev, bruce.richardson

On Tue, 14 Jul 2020 08:31:32 +0100
"Kinsella, Ray" <mdr@ashroe.eu> wrote:

> On 13/07/2020 23:19, Stephen Hemminger wrote:
> > On Mon, 13 Jul 2020 12:11:19 +0100
> > Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> >   
> >> This patch series implements vectorized lookup using AVX512 for
> >> ipv4 dir24_8 and ipv6 trie algorithms.
> >> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> >> Added option to select lookup function type in testfib application.
> >>
> >> v6:
> >>  - style fixes
> >>
> >> v5:
> >>  - prefix zmm macro in rte_vect.h with RTE_X86
> >>  - remove unnecessary typedef for _x86_zmm_t
> >>  - reword commit title
> >>  - fix typos
> >>
> >> v4:
> >>  - use __rte_aligned() instead of using compiler attribute directly
> >>  - rework and add comments to meson.build
> >>
> >> v3:
> >>  - separate out the AVX-512 code into a separate file
> >>
> >> v2:
> >>  - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
> >>  - make runtime decision to use avx512 lookup
> >>
> >> Vladimir Medvedkin (8):
> >>   eal/x86: introduce AVX 512-bit type
> >>   fib: make lookup function type configurable
> >>   fib: move lookup definition into the header file
> >>   fib: introduce AVX512 lookup
> >>   fib6: make lookup function type configurable
> >>   fib6: move lookup definition into the header file
> >>   fib6: introduce AVX512 lookup
> >>   app/testfib: add support for different lookup functions
> >>
> >>  app/test-fib/main.c                   |  58 +++++-
> >>  lib/librte_eal/x86/include/rte_vect.h |  19 ++
> >>  lib/librte_fib/Makefile               |  24 +++
> >>  lib/librte_fib/dir24_8.c              | 281 +++++---------------------
> >>  lib/librte_fib/dir24_8.h              | 226 ++++++++++++++++++++-
> >>  lib/librte_fib/dir24_8_avx512.c       | 165 +++++++++++++++
> >>  lib/librte_fib/dir24_8_avx512.h       |  24 +++
> >>  lib/librte_fib/meson.build            |  31 +++
> >>  lib/librte_fib/rte_fib.c              |  21 +-
> >>  lib/librte_fib/rte_fib.h              |  24 +++
> >>  lib/librte_fib/rte_fib6.c             |  20 +-
> >>  lib/librte_fib/rte_fib6.h             |  22 ++
> >>  lib/librte_fib/rte_fib_version.map    |   2 +
> >>  lib/librte_fib/trie.c                 | 161 +++------------
> >>  lib/librte_fib/trie.h                 | 119 ++++++++++-
> >>  lib/librte_fib/trie_avx512.c          | 269 ++++++++++++++++++++++++
> >>  lib/librte_fib/trie_avx512.h          |  20 ++
> >>  17 files changed, 1114 insertions(+), 372 deletions(-)
> >>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
> >>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
> >>  create mode 100644 lib/librte_fib/trie_avx512.c
> >>  create mode 100644 lib/librte_fib/trie_avx512.h
> >>  
> > 
> > Did anyone else see the recent AVX512 discussion from Linus:
> >   "I hope AVX512 dies a painful death, and that Intel starts fixing real problems 
> >    instead of trying to create magic instructions to then create benchmarks that they can look good on.   
> 
> Yup - I saw this one.
> Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
> If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
> 
> That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing. 
> 
> I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK. 
> When the technology is present and a clear benefit is shown, we use it with caution.
> 
> In the case of Vladimir's patch,
> the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> 

Using what is available makes sense in DPDK. 

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-14 14:38               ` Stephen Hemminger
@ 2020-07-15  9:47                 ` Thomas Monjalon
  2020-07-15 10:35                   ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-15  9:47 UTC (permalink / raw)
  To: Kinsella, Ray, Stephen Hemminger
  Cc: Vladimir Medvedkin, dev, david.marchand, jerinj,
	konstantin.ananyev, bruce.richardson

14/07/2020 16:38, Stephen Hemminger:
> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
> > On 13/07/2020 23:19, Stephen Hemminger wrote:
> > > Did anyone else see the recent AVX512 discussion from Linus:
> > >   "I hope AVX512 dies a painful death, and that Intel starts fixing real problems 
> > >    instead of trying to create magic instructions to then create benchmarks that they can look good on.   
> > 
> > Yup - I saw this one.
> > Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
> > If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
> > 
> > That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing. 
> > 
> > I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK. 
> > When the technology is present and a clear benefit is shown, we use it with caution.
> > 
> > In the case of Vladimir's patch,
> > the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> 
> Using what is available makes sense in DPDK. 

Why does it require explicit enabling in the application?
AVX512 is not reliable enough to be automatically used when available?




^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-15  9:47                 ` Thomas Monjalon
@ 2020-07-15 10:35                   ` Medvedkin, Vladimir
  2020-07-15 11:59                     ` Thomas Monjalon
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-15 10:35 UTC (permalink / raw)
  To: Thomas Monjalon, Kinsella, Ray, Stephen Hemminger
  Cc: dev, david.marchand, jerinj, konstantin.ananyev, bruce.richardson



On 15/07/2020 10:47, Thomas Monjalon wrote:
> 14/07/2020 16:38, Stephen Hemminger:
>> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
>>> On 13/07/2020 23:19, Stephen Hemminger wrote:
>>>> Did anyone else see the recent AVX512 discussion from Linus:
>>>>    "I hope AVX512 dies a painful death, and that Intel starts fixing real problems
>>>>     instead of trying to create magic instructions to then create benchmarks that they can look good on.
>>>
>>> Yup - I saw this one.
>>> Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
>>> If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
>>>
>>> That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing.
>>>
>>> I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK.
>>> When the technology is present and a clear benefit is shown, we use it with caution.
>>>
>>> In the case of Vladimir's patch,
>>> the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
>>
>> Using what is available makes sense in DPDK.
> 
> Why does it require explicit  enabling in application?
> AVX512 is not reliable enough to be automatically used when available?
> 

It is reliable enough. The user has to explicitly switch to the avx512
lookup because using avx512 instructions can reduce the frequency of
the cores. The user knows their environment best, so the scalar version
is used by default so as not to affect the frequency.


> 
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-15 10:35                   ` Medvedkin, Vladimir
@ 2020-07-15 11:59                     ` Thomas Monjalon
  2020-07-15 12:29                       ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-15 11:59 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: Kinsella, Ray, Stephen Hemminger, dev, david.marchand, jerinj,
	konstantin.ananyev, bruce.richardson

15/07/2020 12:35, Medvedkin, Vladimir:
> On 15/07/2020 10:47, Thomas Monjalon wrote:
> > 14/07/2020 16:38, Stephen Hemminger:
> >> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
> >>> On 13/07/2020 23:19, Stephen Hemminger wrote:
> >>>> Did anyone else see the recent AVX512 discussion from Linus:
> >>>>    "I hope AVX512 dies a painful death, and that Intel starts fixing real problems
> >>>>     instead of trying to create magic instructions to then create benchmarks that they can look good on.
> >>>
> >>> Yup - I saw this one.
> >>> Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
> >>> If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
> >>>
> >>> That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing.
> >>>
> >>> I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK.
> >>> When the technology is present and a clear benefit is shown, we use it with caution.
> >>>
> >>> In the case of Vladimir's patch,
> >>> the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> >>
> >> Using what is available makes sense in DPDK.
> > 
> > Why does it require explicit  enabling in application?
> > AVX512 is not reliable enough to be automatically used when available?
> 
> It is reliable enough. User have to explicitly trigger to avx512 lookup 
> because using avx512 instructions can reduce the frequency of your 
> cores. The user knows their environment better. So the scalar version is 
> used so as not to affect the frequency.

So the user must know which micro-optimization is better for code
they don't know. Reminder: a user is not a developer.
I understand we have no better solution, though.
Can we improve the user experience with some recommendations, numbers, etc.?



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-15 11:59                     ` Thomas Monjalon
@ 2020-07-15 12:29                       ` Medvedkin, Vladimir
  2020-07-15 12:45                         ` Thomas Monjalon
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-07-15 12:29 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Kinsella, Ray, Stephen Hemminger, dev, david.marchand, jerinj,
	konstantin.ananyev, bruce.richardson



On 15/07/2020 12:59, Thomas Monjalon wrote:
> 15/07/2020 12:35, Medvedkin, Vladimir:
>> On 15/07/2020 10:47, Thomas Monjalon wrote:
>>> 14/07/2020 16:38, Stephen Hemminger:
>>>> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
>>>>> On 13/07/2020 23:19, Stephen Hemminger wrote:
>>>>>> Did anyone else see the recent AVX512 discussion from Linus:
>>>>>>     "I hope AVX512 dies a painful death, and that Intel starts fixing real problems
>>>>>>      instead of trying to create magic instructions to then create benchmarks that they can look good on.
>>>>>
>>>>> Yup - I saw this one.
>>>>> Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
>>>>> If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
>>>>>
>>>>> That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing.
>>>>>
>>>>> I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK.
>>>>> When the technology is present and a clear benefit is shown, we use it with caution.
>>>>>
>>>>> In the case of Vladimir's patch,
>>>>> the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
>>>>
>>>> Using what is available makes sense in DPDK.
>>>
>>> Why does it require explicit  enabling in application?
>>> AVX512 is not reliable enough to be automatically used when available?
>>
>> It is reliable enough. User have to explicitly trigger to avx512 lookup
>> because using avx512 instructions can reduce the frequency of your
>> cores. The user knows their environment better. So the scalar version is
>> used so as not to affect the frequency.
> 
> So the user must know which micro-optimization is better for a code
> they don't know. Reminder: an user is not a developper.
> I understand we have no better solution though.
> Can we improve the user experience with some recommendations, numbers, etc?
> 

In the case where a user is a developer (dpdk users are mostly devs,
aren't they?) who uses the fib library in their app, they may decide to
switch to the avx512 lookup using rte_fib_set_lookup_fn() when they know
that their code is already using avx512 (ifdef, startup check, etc.).
In the other case, an app developer could, for example, give the user a
command line option or some interactive command to switch the lookup
function.
I'd recommend running the testfib app with the various "-v" options to
evaluate lookup performance on a target system before making a decision.
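
As a rough sketch of the second case (the helper name and the option
string are made up here; the lookup types are the ones added by this
series), an app could translate a command line choice directly into the
set_lookup_fn() calls:

#include <string.h>
#include <rte_fib.h>
#include <rte_fib6.h>

/* Hypothetical helper: map a user-supplied lookup name ("scalar" or
 * "avx512") onto the fib/fib6 lookup types. Returns 0 on success or the
 * negative error code from the set_lookup_fn() calls.
 */
static int
app_set_fib_lookup(struct rte_fib *fib4, struct rte_fib6 *fib6,
	const char *name)
{
	int ret;

	if (strcmp(name, "avx512") == 0) {
		ret = rte_fib_set_lookup_fn(fib4,
			RTE_FIB_DIR24_8_VECTOR_AVX512);
		if (ret == 0)
			ret = rte_fib6_set_lookup_fn(fib6,
				RTE_FIB6_TRIE_VECTOR_AVX512);
		return ret;
	}
	/* default: the scalar implementations */
	ret = rte_fib_set_lookup_fn(fib4, RTE_FIB_DIR24_8_SCALAR_MACRO);
	if (ret == 0)
		ret = rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR);
	return ret;
}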

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-15 12:29                       ` Medvedkin, Vladimir
@ 2020-07-15 12:45                         ` Thomas Monjalon
  2020-07-17 16:43                           ` Richardson, Bruce
  0 siblings, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-15 12:45 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: Kinsella, Ray, Stephen Hemminger, dev, david.marchand, jerinj,
	konstantin.ananyev, bruce.richardson, john.mcnamara,
	tim.odriscoll

15/07/2020 14:29, Medvedkin, Vladimir:
> On 15/07/2020 12:59, Thomas Monjalon wrote:
> > 15/07/2020 12:35, Medvedkin, Vladimir:
> >> On 15/07/2020 10:47, Thomas Monjalon wrote:
> >>> 14/07/2020 16:38, Stephen Hemminger:
> >>>> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
> >>>>> On 13/07/2020 23:19, Stephen Hemminger wrote:
> >>>>>> Did anyone else see the recent AVX512 discussion from Linus:
> >>>>>>     "I hope AVX512 dies a painful death, and that Intel starts fixing real problems
> >>>>>>      instead of trying to create magic instructions to then create benchmarks that they can look good on.
> >>>>>
> >>>>> Yup - I saw this one.
> >>>>> Sweeping statements like these are good to provoke debate, the truth is generally more nuanced.
> >>>>> If you continue to read the post, Linus appears to be mostly questioning microprocessor design decisions.
> >>>>>
> >>>>> That is an interesting discussion, however the reality is that the technology does exists and may be beneficial for Packet Processing.
> >>>>>
> >>>>> I would suggest, we continue to apply the same logic governing adoption of any technology by DPDK.
> >>>>> When the technology is present and a clear benefit is shown, we use it with caution.
> >>>>>
> >>>>> In the case of Vladimir's patch,
> >>>>> the user has to explicitly switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> >>>>
> >>>> Using what is available makes sense in DPDK.
> >>>
> >>> Why does it require explicit  enabling in application?
> >>> AVX512 is not reliable enough to be automatically used when available?
> >>
> >> It is reliable enough. User have to explicitly trigger to avx512 lookup
> >> because using avx512 instructions can reduce the frequency of your
> >> cores. The user knows their environment better. So the scalar version is
> >> used so as not to affect the frequency.
> > 
> > So the user must know which micro-optimization is better for a code
> > they don't know. Reminder: an user is not a developper.
> > I understand we have no better solution though.
> > Can we improve the user experience with some recommendations, numbers, etc?
> > 
> 
> In case where a user is a developer (dpdk users are mostly devs, aren't 
> they?) who uses the fib library in their app may decide to switch to 
> avx512 lookup using rte_fib_set_lookup_fn() when they know that their 
> code is already using avx512 (ifdef, startup check, etc).
> In other case an app developer, for example, could provide to user 
> command line option or some interactive command to switch lookup function.
> I'd recommend to run testfib app with various "-v" options to evaluate 
> lookup performance on a target system to make a decision.

I think this is the difference between a library for hackers
and a product for end-users.
We are not building a product, but we can take a step in that direction
by documenting some knowledge.
I don't know exactly what it means in this case, so I'll let others
suggest some doc improvements (if anyone cares).



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-16 11:51             ` Ananyev, Konstantin
  2020-07-16 14:32             ` Thomas Monjalon
  1 sibling, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-16 11:51 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev
  Cc: david.marchand, jerinj, mdr, thomas, Richardson, Bruce

> 
> Add type argument to dir24_8_get_lookup_fn()
> Now it supports 3 different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
> 
> Add new rte_fib_set_lookup_fn() - user can change lookup
> function type runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-07-16 11:53             ` Ananyev, Konstantin
  0 siblings, 0 replies; 199+ messages in thread
From: Ananyev, Konstantin @ 2020-07-16 11:53 UTC (permalink / raw)
  To: Medvedkin, Vladimir, dev
  Cc: david.marchand, jerinj, mdr, thomas, Richardson, Bruce

> Add type argument to trie_get_lookup_fn()
> Now it only supports RTE_FIB6_TRIE_SCALAR
> 
> Add new rte_fib6_set_lookup_fn() - user can change lookup
> function type runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable Vladimir Medvedkin
  2020-07-16 11:51             ` Ananyev, Konstantin
@ 2020-07-16 14:32             ` Thomas Monjalon
  2020-09-30 11:06               ` Vladimir Medvedkin
  1 sibling, 1 reply; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-16 14:32 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, david.marchand, jerinj, mdr, konstantin.ananyev, bruce.richardson

13/07/2020 13:56, Vladimir Medvedkin:
> Add type argument to dir24_8_get_lookup_fn()
> Now it supports 3 different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
> 
> Add new rte_fib_set_lookup_fn() - user can change lookup
> function type runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
[...]
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> +enum rte_fib_dir24_8_lookup_type {
> +	RTE_FIB_DIR24_8_SCALAR_MACRO,
> +	RTE_FIB_DIR24_8_SCALAR_INLINE,
> +	RTE_FIB_DIR24_8_SCALAR_UNI
> +};

Doxygen missing.

[...]
> +/**
> + * Set lookup function based on type
> + *
> + * @param fib
> + *   FIB object handle
> + * @param type
> + *   type of lookup function
> + *
> + * @return
> + *    -EINVAL on failure
> + *    0 on success
> + */
> +__rte_experimental
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib,
> +	enum rte_fib_dir24_8_lookup_type type);

I think the types deserve to be documented,
explaining why using one or the other.



^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-15 12:45                         ` Thomas Monjalon
@ 2020-07-17 16:43                           ` Richardson, Bruce
  2020-07-19 10:04                             ` Thomas Monjalon
  0 siblings, 1 reply; 199+ messages in thread
From: Richardson, Bruce @ 2020-07-17 16:43 UTC (permalink / raw)
  To: Thomas Monjalon, Medvedkin, Vladimir
  Cc: Kinsella, Ray, Stephen Hemminger, dev, david.marchand, jerinj,
	Ananyev, Konstantin, Mcnamara, John, O'Driscoll, Tim



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, July 15, 2020 1:45 PM
> To: Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Cc: Kinsella, Ray <mdr@ashroe.eu>; Stephen Hemminger
> <stephen@networkplumber.org>; dev@dpdk.org; david.marchand@redhat.com;
> jerinj@marvell.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; O'Driscoll, Tim <tim.odriscoll@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
> 
> 15/07/2020 14:29, Medvedkin, Vladimir:
> > On 15/07/2020 12:59, Thomas Monjalon wrote:
> > > 15/07/2020 12:35, Medvedkin, Vladimir:
> > >> On 15/07/2020 10:47, Thomas Monjalon wrote:
> > >>> 14/07/2020 16:38, Stephen Hemminger:
> > >>>> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
> > >>>>> On 13/07/2020 23:19, Stephen Hemminger wrote:
> > >>>>>> Did anyone else see the recent AVX512 discussion from Linus:
> > >>>>>>     "I hope AVX512 dies a painful death, and that Intel starts
> fixing real problems
> > >>>>>>      instead of trying to create magic instructions to then
> create benchmarks that they can look good on.
> > >>>>>
> > >>>>> Yup - I saw this one.
> > >>>>> Sweeping statements like these are good to provoke debate, the
> truth is generally more nuanced.
> > >>>>> If you continue to read the post, Linus appears to be mostly
> questioning microprocessor design decisions.
> > >>>>>
> > >>>>> That is an interesting discussion, however the reality is that the
> technology does exists and may be beneficial for Packet Processing.
> > >>>>>
> > >>>>> I would suggest, we continue to apply the same logic governing
> adoption of any technology by DPDK.
> > >>>>> When the technology is present and a clear benefit is shown, we
> use it with caution.
> > >>>>>
> > >>>>> In the case of Vladimir's patch, the user has to explicitly
> > >>>>> switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> > >>>>
> > >>>> Using what is available makes sense in DPDK.
> > >>>
> > >>> Why does it require explicit  enabling in application?
> > >>> AVX512 is not reliable enough to be automatically used when
> available?
> > >>
> > >> It is reliable enough. User have to explicitly trigger to avx512
> > >> lookup because using avx512 instructions can reduce the frequency
> > >> of your cores. The user knows their environment better. So the
> > >> scalar version is used so as not to affect the frequency.
> > >
> > > So the user must know which micro-optimization is better for a code
> > > they don't know. Reminder: an user is not a developper.
> > > I understand we have no better solution though.
> > > Can we improve the user experience with some recommendations, numbers,
> etc?
> > >
> >
> > In case where a user is a developer (dpdk users are mostly devs,
> > aren't
> > they?) who uses the fib library in their app may decide to switch to
> > avx512 lookup using rte_fib_set_lookup_fn() when they know that their
> > code is already using avx512 (ifdef, startup check, etc).
> > In other case an app developer, for example, could provide to user
> > command line option or some interactive command to switch lookup
> function.
> > I'd recommend to run testfib app with various "-v" options to evaluate
> > lookup performance on a target system to make a decision.
> 
> I think this is the difference between a library for hackers, and a
> product for end-users.
> We are not building a product, but we can make a step in that direction by
> documenting some knowledge.
> I don't know exactly what it means in this case, so I'll let others
> suggest some doc improvements (if anyone cares).
> 

We have got a patchset in the works to try and make AVX-512 use simpler for 20.11,
by providing both developer APIs and end-user cmdline flags to control this
centrally for DPDK, rather than having each library provide its own magic hooks
to optionally enable this support. As part of that set, we'll see about what
doc updates need to be made also - again covering both developer and end-app user.

Hopefully we can get that set out soon to get early feedback and reach a good
conclusion.

/Bruce

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup
  2020-07-17 16:43                           ` Richardson, Bruce
@ 2020-07-19 10:04                             ` Thomas Monjalon
  0 siblings, 0 replies; 199+ messages in thread
From: Thomas Monjalon @ 2020-07-19 10:04 UTC (permalink / raw)
  To: Medvedkin, Vladimir, Richardson, Bruce
  Cc: Kinsella, Ray, Stephen Hemminger, dev, david.marchand, jerinj,
	Ananyev, Konstantin, Mcnamara, John, O'Driscoll, Tim

17/07/2020 18:43, Richardson, Bruce:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 15/07/2020 14:29, Medvedkin, Vladimir:
> > > On 15/07/2020 12:59, Thomas Monjalon wrote:
> > > > 15/07/2020 12:35, Medvedkin, Vladimir:
> > > >> On 15/07/2020 10:47, Thomas Monjalon wrote:
> > > >>> 14/07/2020 16:38, Stephen Hemminger:
> > > >>>> "Kinsella, Ray" <mdr@ashroe.eu> wrote:
> > > >>>>> On 13/07/2020 23:19, Stephen Hemminger wrote:
> > > >>>>>> Did anyone else see the recent AVX512 discussion from Linus:
> > > >>>>>>     "I hope AVX512 dies a painful death, and that Intel starts
> > fixing real problems
> > > >>>>>>      instead of trying to create magic instructions to then
> > create benchmarks that they can look good on.
> > > >>>>>
> > > >>>>> Yup - I saw this one.
> > > >>>>> Sweeping statements like these are good to provoke debate, the
> > truth is generally more nuanced.
> > > >>>>> If you continue to read the post, Linus appears to be mostly
> > questioning microprocessor design decisions.
> > > >>>>>
> > > >>>>> That is an interesting discussion, however the reality is that the
> > technology does exists and may be beneficial for Packet Processing.
> > > >>>>>
> > > >>>>> I would suggest, we continue to apply the same logic governing
> > adoption of any technology by DPDK.
> > > >>>>> When the technology is present and a clear benefit is shown, we
> > use it with caution.
> > > >>>>>
> > > >>>>> In the case of Vladimir's patch, the user has to explicitly
> > > >>>>> switch on the AVX512 lookup with RTE_FIB_DIR24_8_VECTOR_AVX512.
> > > >>>>
> > > >>>> Using what is available makes sense in DPDK.
> > > >>>
> > > >>> Why does it require explicit  enabling in application?
> > > >>> AVX512 is not reliable enough to be automatically used when
> > available?
> > > >>
> > > >> It is reliable enough. User have to explicitly trigger to avx512
> > > >> lookup because using avx512 instructions can reduce the frequency
> > > >> of your cores. The user knows their environment better. So the
> > > >> scalar version is used so as not to affect the frequency.
> > > >
> > > > So the user must know which micro-optimization is better for a code
> > > > they don't know. Reminder: an user is not a developper.
> > > > I understand we have no better solution though.
> > > > Can we improve the user experience with some recommendations, numbers,
> > etc?
> > > >
> > >
> > > In case where a user is a developer (dpdk users are mostly devs,
> > > aren't
> > > they?) who uses the fib library in their app may decide to switch to
> > > avx512 lookup using rte_fib_set_lookup_fn() when they know that their
> > > code is already using avx512 (ifdef, startup check, etc).
> > > In other case an app developer, for example, could provide to user
> > > command line option or some interactive command to switch lookup
> > function.
> > > I'd recommend to run testfib app with various "-v" options to evaluate
> > > lookup performance on a target system to make a decision.
> > 
> > I think this is the difference between a library for hackers, and a
> > product for end-users.
> > We are not building a product, but we can make a step in that direction by
> > documenting some knowledge.
> > I don't know exactly what it means in this case, so I'll let others
> > suggest some doc improvements (if anyone cares).
> > 
> 
> We have got a patchset in the works to try and make AVX-512 use simpler for 20.11,
> by providing both developer APIs and end-user cmdline flags to control this
> centrally for DPDK, rather than having each library provide its own magic hooks
> to optionally enable this support. As part of that set, we'll see about what
> doc updates need to be made also - again covering both developer and end-app user.
> 
> Hopefully we can get that set out soon to get early feedback and reach a good
> conclusion.

We cannot merge anymore in 20.08 because we passed -rc1.
I am in favor of merging this feature the day after 20.08 release.





^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 0/8] fib: implement AVX512 vector lookup
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-10-06 14:31               ` David Marchand
                                 ` (9 more replies)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                               ` (7 subsequent siblings)
  8 siblings, 10 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  58 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_eal/x86/include/rte_vect.h  |  19 +++
 lib/librte_fib/dir24_8.c               | 281 ++++++---------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  31 ++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  25 +++
 lib/librte_fib/rte_fib6.c              |  20 ++-
 lib/librte_fib/rte_fib6.h              |  24 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 161 ++++---------------
 lib/librte_fib/trie.h                  | 119 +++++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 17 files changed, 1096 insertions(+), 372 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 1/8] eal/x86: introduce AVX 512-bit type
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                               ` (6 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

New data type to manipulate 512 bit AVX values.
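
As an illustration of how it is meant to be used (a minimal sketch along
the lines of the perm_idxes constants in the AVX512 lookup code; it only
compiles when __AVX512F__ is defined), the union lets a 512-bit constant
be written out lane by lane and then used as a plain __m512i:

#include <rte_vect.h>

/* Build a 16 x 32-bit permutation index vector lane by lane and hand it
 * to the intrinsics as __m512i via the .z member.
 */
static inline __m512i
make_perm_idxes(void)
{
	const __rte_x86_zmm_t perm_idxes = {
		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
			1, 5, 9, 13, 3, 7, 11, 15 },
	};

	return perm_idxes.z;
}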

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index df5a607..64383c3 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -90,6 +91,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 2/8] fib: make lookup function type configurable
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                               ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.
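
A minimal usage sketch (error handling only, FIB creation omitted;
use_uni_lookup() is just an illustrative name):

#include <stdio.h>
#include <rte_fib.h>

/* Switch an existing dir24_8 FIB to the generic scalar lookup at
 * runtime. rte_fib_set_lookup_fn() returns 0 on success and -EINVAL if
 * the type is unknown or the FIB is not a dir24_8 one.
 */
static void
use_uni_lookup(struct rte_fib *fib)
{
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_UNI) != 0)
		printf("keeping the default scalar lookup\n");
}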

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/dir24_8.c           | 32 ++++++++++++++++++++------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 ++++++++++++++++++++-
 lib/librte_fib/rte_fib.h           | 24 ++++++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 66 insertions(+), 14 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..825d061 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -253,11 +246,18 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 }
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_1b;
@@ -267,8 +267,10 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_4b;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
-	} else if (test_lookup == INLINE) {
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
 		switch (nh_sz) {
 		case RTE_FIB_DIR24_8_1B:
 			return dir24_8_lookup_bulk_0;
@@ -278,9 +280,15 @@ dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
 			return dir24_8_lookup_bulk_2;
 		case RTE_FIB_DIR24_8_8B:
 			return dir24_8_lookup_bulk_3;
+		default:
+			return NULL;
 		}
-	} else
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..a9bd0da 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,13 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +203,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 3/8] fib: move lookup definition into the header file
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (2 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                               ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 825d061..9d74653 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 4/8] fib: introduce AVX512 lookup
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (3 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                               ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.

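An illustrative selection sketch (not part of the patch; the fib handle
is assumed): prefer the AVX512 lookup and keep the default scalar
lookup when it is not available, since dir24_8_get_lookup_fn() returns
NULL (and rte_fib_set_lookup_fn() returns -EINVAL) when the library was
built without AVX512 support or the CPU lacks AVX512F:

    #include <stdio.h>
    #include <rte_fib.h>

    static void
    prefer_avx512_lookup(struct rte_fib *fib)
    {
            if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) != 0)
                    /* fall back: previously installed lookup stays in place */
                    printf("AVX512 FIB lookup not available, using scalar\n");
    }
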
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  24 +++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  18 ++++
 lib/librte_fib/rte_fib.h               |   3 +-
 6 files changed, 236 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4eb3224..26a7d8e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,9 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added a AVX512 lookup functions implementation into FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 9d74653..0d7bf2c 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -62,6 +68,24 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		}
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+
+		switch (nh_sz) {
+		case RTE_FIB_DIR24_8_1B:
+			return rte_dir24_8_vec_lookup_bulk_1b;
+		case RTE_FIB_DIR24_8_2B:
+			return rte_dir24_8_vec_lookup_bulk_2b;
+		case RTE_FIB_DIR24_8_4B:
+			return rte_dir24_8_vec_lookup_bulk_4b;
+		case RTE_FIB_DIR24_8_8B:
+			return rte_dir24_8_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..d96ff02 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,21 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
+
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index a9bd0da..3e83807 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -62,7 +62,8 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 5/8] fib6: make lookup function type configurable
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (4 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                               ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a type argument to trie_get_lookup_fn().
Now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() so that the user can change the
lookup function type at runtime.

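A minimal usage sketch (illustrative only, not part of the patch; the
fib6 handle is assumed). At this point in the series only the scalar
type can be selected for the TRIE data plane:

    #include <rte_fib6.h>

    /* 'fib' is assumed to have been created with type RTE_FIB6_TRIE */
    static int
    select_trie_scalar(struct rte_fib6 *fib)
    {
            /* -EINVAL if the FIB is not TRIE based or the type is unknown */
            return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
    }
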
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 25 ++++++++++++++-----------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 58 insertions(+), 13 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cc817ad 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..63c519a 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -154,11 +147,18 @@ LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
 		switch (nh_sz) {
 		case RTE_FIB6_TRIE_2B:
 			return rte_trie_lookup_bulk_2b;
@@ -166,9 +166,12 @@ rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
 			return rte_trie_lookup_bulk_4b;
 		case RTE_FIB6_TRIE_8B:
 			return rte_trie_lookup_bulk_8b;
+		default:
+			return NULL;
 		}
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 6/8] fib6: move lookup definition into the header file
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (5 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 63c519a..136e938 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 7/8] fib6: introduce AVX512 lookup
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (6 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Add a new lookup implementation for the FIB6 trie algorithm using the
AVX512 instruction set.

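An illustrative selection sketch (not part of the patch; the fib6
handle is assumed): try the AVX512 lookup first and explicitly fall
back to the scalar one when it is rejected (library built without
AVX512 support or CPU without AVX512F):

    #include <rte_fib6.h>

    static void
    prefer_trie_avx512(struct rte_fib6 *fib)
    {
            if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) != 0)
                    rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
    }
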
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  13 ++
 lib/librte_fib/rte_fib6.h              |   3 +-
 lib/librte_fib/trie.c                  |  21 +++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 6 files changed, 326 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 26a7d8e..cafd499 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -80,7 +80,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added a AVX512 lookup functions implementation into FIB library.
+  Added a AVX512 lookup functions implementation into FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index d96ff02..98c8752 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -13,6 +13,8 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 	if dpdk_conf.has('RTE_MACHINE_CPUFLAG_AVX512F')
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+		sources += files('trie_avx512.c')
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -20,6 +22,17 @@ if arch_subdir == 'x86' and not machine_args.contains('-mno-avx512f')
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
 
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cc817ad..f53b076 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,8 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 136e938..d0233ad 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -48,6 +54,21 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 		default:
 			return NULL;
 		}
+#ifdef CC_TRIE_AVX512_SUPPORT
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
+			return NULL;
+		switch (nh_sz) {
+		case RTE_FIB6_TRIE_2B:
+			return rte_trie_vec_lookup_bulk_2b;
+		case RTE_FIB6_TRIE_4B:
+			return rte_trie_vec_lookup_bulk_4b;
+		case RTE_FIB6_TRIE_8B:
+			return rte_trie_vec_lookup_bulk_8b;
+		default:
+			return NULL;
+		}
+#endif
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside a branch to make the compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v8 8/8] app/testfib: add support for different lookup functions
  2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
                               ` (7 preceding siblings ...)
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-09-30 10:35             ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 10:35 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson

Added the -v option to switch between different lookup implementations
in order to measure their performance and correctness.

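The new option maps to lookup types as follows (values taken from
parse_opts() below; all other command line options are unchanged):

    -v s1 (or -v s)   default scalar lookup (macro based)
    -v s2             DIR24_8 inline scalar lookup
    -v s3             DIR24_8 unified scalar lookup
    -v v              AVX512 vector lookup (DIR24_8 and TRIE)
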
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..9c2d413 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,22 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0))
+				break;
+			else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 3;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +868,24 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1065,18 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable
  2020-07-16 14:32             ` Thomas Monjalon
@ 2020-09-30 11:06               ` Vladimir Medvedkin
  0 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-09-30 11:06 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Vladimir Medvedkin, dev, david.marchand, jerinj, mdr, Ananyev,
	Konstantin, Bruce Richardson

Hi Thomas,

Thu, 16 Jul 2020 at 15:32, Thomas Monjalon <thomas@monjalon.net>:

> 13/07/2020 13:56, Vladimir Medvedkin:
> > Add type argument to dir24_8_get_lookup_fn()
> > Now it supports 3 different lookup implementations:
> >  RTE_FIB_DIR24_8_SCALAR_MACRO
> >  RTE_FIB_DIR24_8_SCALAR_INLINE
> >  RTE_FIB_DIR24_8_SCALAR_UNI
> >
> > Add new rte_fib_set_lookup_fn() - user can change lookup
> > function type runtime.
> >
> > Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> [...]
> > --- a/lib/librte_fib/rte_fib.h
> > +++ b/lib/librte_fib/rte_fib.h
> > +enum rte_fib_dir24_8_lookup_type {
> > +     RTE_FIB_DIR24_8_SCALAR_MACRO,
> > +     RTE_FIB_DIR24_8_SCALAR_INLINE,
> > +     RTE_FIB_DIR24_8_SCALAR_UNI
> > +};
>
> Doxygen missing.
>
> [...]
> > +/**
> > + * Set lookup function based on type
> > + *
> > + * @param fib
> > + *   FIB object handle
> > + * @param type
> > + *   type of lookup function
> > + *
> > + * @return
> > + *    -EINVAL on failure
> > + *    0 on success
> > + */
> > +__rte_experimental
> > +int
> > +rte_fib_set_lookup_fn(struct rte_fib *fib,
> > +     enum rte_fib_dir24_8_lookup_type type);
>
> I think the types deserve to be documented,
> explaining why using one or the other.
>

I'm going to get rid of the extra lookup types in the next releases, so that
only the clearly understandable SCALAR and VECTOR types will remain.

>
>


-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v8 0/8] fib: implement AVX512 vector lookup
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
@ 2020-10-06 14:31               ` David Marchand
  2020-10-06 15:13                 ` Medvedkin, Vladimir
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                 ` (8 subsequent siblings)
  9 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-10-06 14:31 UTC (permalink / raw)
  To: Vladimir Medvedkin, Bruce Richardson
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin

Hello,

On Wed, Sep 30, 2020 at 12:35 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.
>
> v8:
>  - remove Makefile related changes
>  - fix missing doxygen for lookup_type
>  - add release notes

Now that https://git.dpdk.org/dpdk/commit/?id=84fb33fec179ea96f814aed9f658d5a2df20745d
is merged, some bits in this series need rework (patch 4 and 7).

I see we are adding an API to control which vector implementation of
the lookup is used.
Is this required?
My previous understanding was that the SIMD bitwidth work would supersede this.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v8 0/8] fib: implement AVX512 vector lookup
  2020-10-06 14:31               ` David Marchand
@ 2020-10-06 15:13                 ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-06 15:13 UTC (permalink / raw)
  To: David Marchand, Bruce Richardson
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin

Hi David,

On 06/10/2020 15:31, David Marchand wrote:
> Hello,
> 
> On Wed, Sep 30, 2020 at 12:35 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> This patch series implements vectorized lookup using AVX512 for
>> ipv4 dir24_8 and ipv6 trie algorithms.
>> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
>> Added option to select lookup function type in testfib application.
>>
>> v8:
>>   - remove Makefile related changes
>>   - fix missing doxygen for lookup_type
>>   - add release notes
> 
> Now that https://git.dpdk.org/dpdk/commit/?id=84fb33fec179ea96f814aed9f658d5a2df20745d
> is merged, some bits in this series need rework (patch 4 and 7).
> 
> I see we are adding an API to control which vector implementation of
> the lookup is used.
> Is this required?
> My previous understanding was that the SIMD bitwidth work would supersede this.
> 

I will resend v9 reflecting the latest SIMD bitwidth patches

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 0/8] fib: implement AVX512 vector lookup
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
  2020-10-06 14:31               ` David Marchand
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                   ` (8 more replies)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                                 ` (7 subsequent siblings)
  9 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

Depends-on: patch-79338 ("eal: add max SIMD bitwidth")

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  65 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_eal/x86/include/rte_vect.h  |  19 ++
 lib/librte_fib/dir24_8.c               | 329 ++++++++-------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  26 +++
 lib/librte_fib/rte_fib6.c              |  20 +-
 lib/librte_fib/rte_fib6.h              |  25 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 191 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 17 files changed, 1185 insertions(+), 390 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 1/8] eal/x86: introduce AVX 512-bit type
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
  2020-10-06 14:31               ` David Marchand
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                                 ` (6 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

New data type to manipulate 512 bit AVX values.
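
For illustration only (not part of the patch), the union lets a 512-bit
register be written once and then read back lane by lane. The sketch below
assumes the file is built with AVX512F enabled so that the type is defined:

	#include <stdint.h>
	#include <rte_vect.h>

	/* minimal sketch: set a 512-bit value, read back one 32-bit lane */
	static uint32_t
	zmm_first_lane(void)
	{
		__rte_x86_zmm_t v;

		v.z = _mm512_set1_epi32(42); /* all sixteen u32 lanes hold 42 */
		return v.u32[0];
	}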

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index b1df75a..1af52e5 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -92,6 +93,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 2/8] fib: make lookup function type configurable
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (2 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                                 ` (5 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add type argument to dir24_8_get_lookup_fn()
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.
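
A minimal usage sketch (not part of the patch): switch an already created
dir24_8 FIB to the inlined scalar implementation and resolve a burst of
addresses through the generic bulk API. The bulk-lookup signature is taken
from the existing rte_fib.h; treat the helper below as illustrative only:

	#include <errno.h>
	#include <rte_fib.h>

	/* sketch: pin the lookup type, then do a bulk lookup */
	static int
	lookup_with_inline_fn(struct rte_fib *fib, uint32_t *ips,
		uint64_t *next_hops, int n)
	{
		if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_INLINE) < 0)
			return -EINVAL; /* type not supported for this FIB */

		return rte_fib_lookup_bulk(fib, ips, next_hops, n);
	}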

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c           | 84 +++++++++++++++++++++++---------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++-
 lib/librte_fib/rte_fib.h           | 24 +++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 98 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..a9bd0da 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,13 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	RTE_FIB_DIR24_8_SCALAR_UNI
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +203,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 3/8] fib: move lookup definition into the header file
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (3 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                                 ` (4 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (4 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-13 10:27                 ` Bruce Richardson
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                                 ` (3 subsequent siblings)
  9 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add new lookup implementation for DIR24_8 algorithm using
AVX512 instruction set
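
With this patch, newly created dir24_8 FIBs default to RTE_FIB_DIR24_8_ANY,
which already prefers the AVX512 path when the CPU flags and the max SIMD
bitwidth allow it. An application that requests the vector type explicitly
can still add its own fallback; a hedged sketch, not part of the patch:

	#include <rte_fib.h>

	/* sketch: prefer AVX512 lookup, fall back to the scalar macro one */
	static void
	pick_dir24_8_lookup(struct rte_fib *fib)
	{
		if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) < 0)
			rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
	}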

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  36 +++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   4 +-
 7 files changed, 266 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4eb3224..26a7d8e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,9 @@ New Features
     ``--portmask=N``
     where N represents the hexadecimal bitmask of ports used.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added an AVX512 lookup implementation to the FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..d3611c9 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +62,36 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_MAX_512_SIMD))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +105,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index a9bd0da..16514a9 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -62,7 +62,9 @@ enum rte_fib_dir24_8_nh_sz {
 enum rte_fib_dir24_8_lookup_type {
 	RTE_FIB_DIR24_8_SCALAR_MACRO,
 	RTE_FIB_DIR24_8_SCALAR_INLINE,
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 5/8] fib6: make lookup function type configurable
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (5 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                                 ` (2 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add type argument to trie_get_lookup_fn()
Now it only supports RTE_FIB6_TRIE_SCALAR

Add new rte_fib6_set_lookup_fn() - the user can change the lookup
function type at runtime.
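
A short usage sketch (not part of the patch): pin an IPv6 FIB to the scalar
trie lookup and resolve a burst of addresses. The bulk-lookup signature is
assumed from the existing rte_fib6.h:

	#include <errno.h>
	#include <rte_fib6.h>

	/* sketch: select the scalar trie lookup, then do a bulk lookup */
	static int
	scalar_trie_lookup(struct rte_fib6 *fib,
		uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
		uint64_t *next_hops, int n)
	{
		if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR) < 0)
			return -EINVAL;

		return rte_fib6_lookup_bulk(fib, ips, next_hops, n);
	}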

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cc817ad 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 6/8] fib6: move lookup definition into the header file
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (6 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 7/8] fib6: introduce AVX512 lookup
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (7 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set
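
As a minimal usage sketch (illustrative only, not part of this patch):
by default the library already prefers the vector function when it is
available, but an application can also request it explicitly through the
setter added earlier in the series and fall back to the scalar type if
that fails. The helper name is hypothetical.

#include <rte_fib6.h>

static void
prefer_avx512_trie_lookup(struct rte_fib6 *fib)
{
	/* fails when AVX512 is unsupported or the max SIMD width is below 512 */
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) != 0)
		rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}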

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   4 +-
 lib/librte_fib/trie.c                  |  33 ++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 344 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 26a7d8e..cafd499 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -80,7 +80,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added a AVX512 lookup functions implementation into FIB library.
+  Added a AVX512 lookup functions implementation into FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cc817ad..0b4422c 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,9 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR
+	RTE_FIB6_TRIE_SCALAR,
+	RTE_FIB6_TRIE_VECTOR_AVX512,
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..4aa5923 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +46,33 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_MAX_512_SIMD))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +82,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v9 8/8] app/testfib: add support for different lookup functions
  2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
                                 ` (8 preceding siblings ...)
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-07 16:10               ` Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-07 16:10 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added -v option to switch between different lookup implementations
to measure their performance and correctness.
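
As an illustrative invocation (all other options omitted): "-v v"
requests the AVX512 vector lookup and "-v s1" the macro-based scalar
one; "-v s2" and "-v s3" select the inlined and unified scalar variants
of the IPv4 dir24_8 FIB, while for the IPv6 trie FIB only "s" and "v"
are accepted.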

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of loookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-13 10:27                 ` Bruce Richardson
  0 siblings, 0 replies; 199+ messages in thread
From: Bruce Richardson @ 2020-10-13 10:27 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	ciara.power

On Wed, Oct 07, 2020 at 05:10:38PM +0100, Vladimir Medvedkin wrote:
> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/rel_notes/release_20_11.rst |   3 +
>  lib/librte_fib/dir24_8.c               |  36 +++++++
>  lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h        |  24 +++++
>  lib/librte_fib/meson.build             |  34 +++++++
>  lib/librte_fib/rte_fib.c               |   2 +-
>  lib/librte_fib/rte_fib.h               |   4 +-
>  7 files changed, 266 insertions(+), 2 deletions(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
> 
<snip>
> diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
> index 771828f..0a8adef 100644
> --- a/lib/librte_fib/meson.build
> +++ b/lib/librte_fib/meson.build
> @@ -5,3 +5,37 @@
>  sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
>  headers = files('rte_fib.h', 'rte_fib6.h')
>  deps += ['rib']
> +
> +# compile AVX512 version if:
> +# we are building 64-bit binary AND binutils can generate proper code
> +if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
> +	# compile AVX512 version if either:
> +	# a. we have AVX512F supported in minimum instruction set baseline
> +	# b. it's not minimum instruction set, but supported by compiler
> +	#
> +	# in former case, just add avx512 C file to files list
> +	# in latter case, compile c file to static lib, using correct
> +	# compiler flags, and then have the .o file from static lib
> +	# linked into main lib.
> +
> +	# check if all required flags already enabled (variant a).
> +	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
> +	acl_avx512_on = true
> +	foreach f:acl_avx512_flags
> +		if cc.get_define(f, args: machine_args) == ''
> +			acl_avx512_on = false
> +		endif
> +	endforeach
> +
> +	if acl_avx512_on == true
> +		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
> +		sources += files('dir24_8_avx512.c')
> +	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
> +		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
> +				'dir24_8_avx512.c',
> +				dependencies: static_rte_eal,
> +				c_args: cflags + ['-mavx512f', '-mavx512dq'])
> +		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
> +		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
> +	endif
> +endif

This meson change looks ok to me. For the build-system part:

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 0/8] fib: implement AVX512 vector lookup
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-16 15:15                   ` David Marchand
                                     ` (9 more replies)
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                                   ` (7 subsequent siblings)
  8 siblings, 10 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

Depends-on: patch-80471 ("eal: add max SIMD bitwidth")

v10:
 - reflects the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebuild on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  65 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_eal/x86/include/rte_vect.h  |  19 ++
 lib/librte_fib/dir24_8.c               | 329 ++++++++-------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  36 ++++
 lib/librte_fib/rte_fib6.c              |  20 +-
 lib/librte_fib/rte_fib6.h              |  26 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 191 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 17 files changed, 1196 insertions(+), 390 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-14 12:17                   ` David Marchand
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

New data type to manipulate 512 bit AVX values.
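
A short illustrative sketch (not part of this patch; it assumes the file
is built with AVX512 compiler flags so the type is defined): the union
lets per-lane constants be written as plain arrays and then passed to
intrinsics through the .z member, which is how the FIB patches later in
this series build their shuffle and permute masks.

#include <rte_vect.h>

static inline __m512i
example_perm_idxes(void)
{
	const __rte_x86_zmm_t idx = {
		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
			 1, 5, 9, 13, 3, 7, 11, 15 },
	};

	/* the whole 512-bit value, directly usable by AVX512 intrinsics */
	return idx.z;
}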

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index a00d3d5..f0aad96 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -92,6 +93,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 2/8] fib: make lookup function type configurable
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add type argument to dir24_8_get_lookup_fn()
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - user can change lookup
function type at runtime.
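
A minimal usage sketch of the new setter (illustrative only; FIB
creation and error handling are elided, and the helper name is
hypothetical):

#include <rte_fib.h>

static int
select_scalar_lookup(struct rte_fib *fib)
{
	/* rte_fib_set_lookup_fn() returns -EINVAL for an unavailable type */
	int ret = rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_UNI);

	if (ret != 0)
		ret = rte_fib_set_lookup_fn(fib,
			RTE_FIB_DIR24_8_SCALAR_MACRO);
	return ret;
}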

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c           | 84 +++++++++++++++++++++++---------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++-
 lib/librte_fib/rte_fib.h           | 32 +++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..2097ee5 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 3/8] fib: move lookup definition into the header file
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (2 preceding siblings ...)
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 4/8] fib: introduce AVX512 lookup
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (3 preceding siblings ...)
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add new lookup implementation for DIR24_8 algorithm using
AVX512 instruction set
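
For illustration (not part of this patch): the vector function is only
handed out when the CPU reports AVX512F and the configured max SIMD
bitwidth allows 512-bit operations, so an application can pre-check the
same conditions before requesting it. The helper is hypothetical and the
headers for the max-SIMD-bitwidth API are assumed from the series this
patch depends on.

#include <errno.h>

#include <rte_cpuflags.h>
#include <rte_fib.h>

static int
request_avx512_fib_lookup(struct rte_fib *fib)
{
	/* same conditions checked internally by get_vector_fn() */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0 ||
			rte_get_max_simd_bitwidth() < RTE_SIMD_512)
		return -ENOTSUP;

	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512);
}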

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  36 +++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 268 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 57e3edc..8c2a89f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -148,6 +148,9 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added an AVX512 lookup function implementation to the FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..d97a776 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +62,36 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +105,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 2097ee5..d4e5d91 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -67,10 +67,14 @@ enum rte_fib_dir24_8_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	/**< Vector implementation using AVX512 */
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
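
From an application's point of view, the AVX512 path added here is opted
into at runtime through rte_fib_set_lookup_fn(). A hedged sketch, assuming
the fib handle was obtained from rte_fib_create() elsewhere:

#include <rte_fib.h>

/* Prefer the AVX512 bulk lookup; fall back to the scalar macro variant
 * when the vector path is unavailable (no AVX512F on the CPU, or the max
 * SIMD bitwidth is below 512), in which case -EINVAL is returned. */
static int
select_fib_lookup(struct rte_fib *fib)
{
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) == 0)
		return 0;
	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}

The same fallback is available in a single call with RTE_FIB_DIR24_8_ANY,
which is also what init_dataplane() now uses by default.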

* [dpdk-dev] [PATCH v10 5/8] fib6: make lookup function type configurable
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (4 preceding siblings ...)
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-13 13:13                 ` Vladimir Medvedkin
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:13 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add a new rte_fib6_set_lookup_fn() so the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cd0c75e 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v10 6/8] fib6: move lookup definition into the header file
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (5 preceding siblings ...)
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-13 13:14                 ` Vladimir Medvedkin
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:14 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
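
For readers skimming the LOOKUP_FUNC macro that now lives in trie.h, this
is roughly what LOOKUP_FUNC(2b, uint16_t, 1) expands to, with comments
added for illustration. It relies on struct rte_trie_tbl and the helpers
defined in the header above and is not an extra copy in the library:

static inline void rte_trie_lookup_bulk_2b(void *p,
	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
	uint64_t *next_hops, const unsigned int n)
{
	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
	uint64_t tmp;
	uint32_t i, j;

	for (i = 0; i < n; i++) {
		/* index tbl24 with the first three bytes of the address */
		tmp = ((uint16_t *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];
		j = 3;
		/* while the entry's extension bit is set, descend into the
		 * tbl8 group it points to, consuming one address byte per
		 * level */
		while (is_entry_extended(tmp)) {
			tmp = ((uint16_t *)dp->tbl8)[ips[i][j++] +
				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];
		}
		/* strip the extension bit to recover the next hop */
		next_hops[i] = tmp >> 1;
	}
}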

* [dpdk-dev] [PATCH v10 7/8] fib6: introduce AVX512 lookup
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (6 preceding siblings ...)
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-13 13:14                 ` Vladimir Medvedkin
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:14 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the FIB6 trie algorithm using the
AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  33 ++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 345 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 8c2a89f..fc9c13b 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -150,7 +150,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added an AVX512 lookup function implementation to the FIB library.
+  Added an AVX512 lookup function implementation to the FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cd0c75e..2b2a1c8 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_VECTOR_AVX512, /**< Vector implementation using AVX512 */
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..069c3aa 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +46,33 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +82,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
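
On the API side, selection for the IPv6 trie mirrors the IPv4 case. A
hedged sketch of forcing a specific implementation, for example to compare
the scalar and AVX512 paths the way app/test-fib does below; the fib6
handle is assumed to come from rte_fib6_create() elsewhere:

#include <stdbool.h>
#include <rte_fib6.h>

static int
force_trie_lookup(struct rte_fib6 *fib, bool use_avx512)
{
	enum rte_fib_trie_lookup_type type = use_avx512 ?
		RTE_FIB6_TRIE_VECTOR_AVX512 : RTE_FIB6_TRIE_SCALAR;

	/* Returns -EINVAL when the requested implementation is unavailable,
	 * e.g. no AVX512 at runtime or a max SIMD bitwidth below 512. */
	return rte_fib6_set_lookup_fn(fib, type);
}

If no preference is given, creation now defaults to RTE_FIB6_TRIE_ANY,
which picks the best implementation available on the running system.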

* [dpdk-dev] [PATCH v10 8/8] app/testfib: add support for different lookup functions
  2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
                                   ` (7 preceding siblings ...)
  2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-13 13:14                 ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-13 13:14 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added a -v option to switch between the different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of loookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-10-14 12:17                   ` David Marchand
  0 siblings, 0 replies; 199+ messages in thread
From: David Marchand @ 2020-10-14 12:17 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

On Tue, Oct 13, 2020 at 3:14 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> New data type to manipulate 512 bit AVX values.
>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Applied only this patch for now, to prepare for the acl + AVX512
series which is ready for merge.
Thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v10 0/8] fib: implement AVX512 vector lookup
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
@ 2020-10-16 15:15                   ` David Marchand
  2020-10-16 15:32                     ` Medvedkin, Vladimir
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                     ` (8 subsequent siblings)
  9 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-10-16 15:15 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

On Tue, Oct 13, 2020 at 3:14 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.
>
> Depends-on: patch-80471 ("eal: add max SIMD bitwidth")

This series won't build for arm (caught with devtools/test-meson-builds.sh).

It breaks for unused variables at:
fib: introduce AVX512 lookup (66 minutes ago) <Vladimir Medvedkin>
fib6: introduce AVX512 lookup (3 minutes ago) <Vladimir Medvedkin>


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v10 0/8] fib: implement AVX512 vector lookup
  2020-10-16 15:15                   ` David Marchand
@ 2020-10-16 15:32                     ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-16 15:32 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

Hello,

On 16/10/2020 16:15, David Marchand wrote:
> On Tue, Oct 13, 2020 at 3:14 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> This patch series implements vectorized lookup using AVX512 for
>> ipv4 dir24_8 and ipv6 trie algorithms.
>> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
>> Added option to select lookup function type in testfib application.
>>
>> Depends-on: patch-80471 ("eal: add max SIMD bitwidth")
> 
> This series won't build for arm (caught with devtools/test-meson-builds.sh).
> 
> It breaks for unused variables at:
> fib: introduce AVX512 lookup (66 minutes ago) <Vladimir Medvedkin>
> fib6: introduce AVX512 lookup (3 minutes ago) <Vladimir Medvedkin>
> 

Thanks David, I will send v11 now with a fix for the unused nh_sz variable.

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 0/8] fib: implement AVX512 vector lookup
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
  2020-10-16 15:15                   ` David Marchand
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                       ` (7 more replies)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
                                     ` (7 subsequent siblings)
  9 siblings, 8 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

Depends-on: patch-81125 ("eal: add max SIMD bitwidth")

v11:
 - fix compilation issue with unused nh_sz variable

v10:
 - reflects the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebuild on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  eal/x86: introduce AVX 512-bit type
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  65 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_eal/x86/include/rte_vect.h  |  19 ++
 lib/librte_fib/dir24_8.c               | 331 +++++++++------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 ++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  36 ++++
 lib/librte_fib/rte_fib6.c              |  20 +-
 lib/librte_fib/rte_fib6.h              |  26 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 193 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 17 files changed, 1200 insertions(+), 390 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
  2020-10-16 15:15                   ` David Marchand
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-19  6:35                     ` Kinsella, Ray
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 2/8] fib: make lookup function type configurable Vladimir Medvedkin
                                     ` (6 subsequent siblings)
  9 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

New data type to manipulate 512 bit AVX values.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/x86/include/rte_vect.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/librte_eal/x86/include/rte_vect.h b/lib/librte_eal/x86/include/rte_vect.h
index a00d3d5..f0aad96 100644
--- a/lib/librte_eal/x86/include/rte_vect.h
+++ b/lib/librte_eal/x86/include/rte_vect.h
@@ -13,6 +13,7 @@
 
 #include <stdint.h>
 #include <rte_config.h>
+#include <rte_common.h>
 #include "generic/rte_vect.h"
 
 #if (defined(__ICC) || \
@@ -92,6 +93,24 @@ __extension__ ({                 \
 })
 #endif /* (defined(__ICC) && __ICC < 1210) */
 
+#ifdef __AVX512F__
+
+#define RTE_X86_ZMM_SIZE	(sizeof(__m512i))
+#define RTE_X86_ZMM_MASK	(RTE_X86_ZMM_SIZE - 1)
+
+typedef union __rte_x86_zmm {
+	__m512i	 z;
+	ymm_t    y[RTE_X86_ZMM_SIZE / sizeof(ymm_t)];
+	xmm_t    x[RTE_X86_ZMM_SIZE / sizeof(xmm_t)];
+	uint8_t  u8[RTE_X86_ZMM_SIZE / sizeof(uint8_t)];
+	uint16_t u16[RTE_X86_ZMM_SIZE / sizeof(uint16_t)];
+	uint32_t u32[RTE_X86_ZMM_SIZE / sizeof(uint32_t)];
+	uint64_t u64[RTE_X86_ZMM_SIZE / sizeof(uint64_t)];
+	double   pd[RTE_X86_ZMM_SIZE / sizeof(double)];
+} __rte_aligned(RTE_X86_ZMM_SIZE) __rte_x86_zmm_t;
+
+#endif /* __AVX512F__ */
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
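
As used elsewhere in this series (the perm_idxes and bswap constants in
trie_avx512.c), the point of the union is that a 512-bit constant can be
written lane by lane through the named members and then handed to
intrinsics through the .z view. A small hedged sketch; the lane-reversal
helper is only an example and not part of the patch:

#include <immintrin.h>
#include <rte_vect.h>

#ifdef __AVX512F__
/* Reverse the sixteen 32-bit lanes of a 512-bit vector. */
static inline __m512i
reverse_u32_lanes(__m512i v)
{
	const __rte_x86_zmm_t idx = {
		.u32 = { 15, 14, 13, 12, 11, 10, 9, 8,
			  7,  6,  5,  4,  3,  2, 1, 0 },
	};

	return _mm512_permutexvar_epi32(idx.z, v);
}
#endif /* __AVX512F__ */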

* [dpdk-dev] [PATCH v11 2/8] fib: make lookup function type configurable
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (2 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
                                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add a new rte_fib_set_lookup_fn() so the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c           | 84 +++++++++++++++++++++++---------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++-
 lib/librte_fib/rte_fib.h           | 32 +++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..2097ee5 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 3/8] fib: move lookup definition into the header file
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (3 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 2/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.
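
For illustration only (not part of this patch), the pattern this enables: a
separately compiled, in-library unit can include the private header and reuse
both the table layout and the inline scalar lookups; my_vec_lookup_bulk_1b()
is a hypothetical name:

#include <rte_vect.h>
#include <rte_fib.h>
#include "dir24_8.h"

void
my_vec_lookup_bulk_1b(void *p, const uint32_t *ips, uint64_t *next_hops,
	const unsigned int n)
{
	struct dir24_8_tbl *dp = p;	/* table layout is visible here now */

	(void)dp;	/* a real vectorized body would walk dp->tbl24/tbl8 */

	/* the inline scalar lookup from the header can handle any tail */
	dir24_8_lookup_bulk_1b(p, ips, next_hops, n);
}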

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 4/8] fib: introduce AVX512 lookup
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (4 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
                                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the DIR24_8 algorithm using
the AVX512 instruction set.
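
For illustration only (not part of this patch), a sketch of explicitly
requesting the AVX512 lookup and falling back to scalar when it is not
available; enable_avx512_lookup() is a hypothetical helper, and
RTE_FIB_DIR24_8_ANY performs this selection internally:

#include <rte_fib.h>

static int
enable_avx512_lookup(struct rte_fib *fib)
{
	/* succeeds only if AVX512F is present and allowed by the
	 * configured max SIMD bitwidth
	 */
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) == 0)
		return 0;

	/* otherwise keep a scalar implementation */
	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}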

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  38 ++++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 270 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 57e3edc..8c2a89f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -148,6 +148,9 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added a AVX512 lookup functions implementation into FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..b96d810 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +62,38 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +107,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 2097ee5..d4e5d91 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -67,10 +67,14 @@ enum rte_fib_dir24_8_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	/**< Vector implementation using AVX512 */
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 5/8] fib6: make lookup function type configurable
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (5 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add a new rte_fib6_set_lookup_fn() so the user can change the
lookup function type at runtime.
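
For illustration only (not part of this patch), a sketch showing that the
change is transparent to callers, assuming the existing
rte_fib6_lookup_bulk() prototype; switch_and_lookup() is a hypothetical
helper:

#include <rte_fib6.h>

static void
switch_and_lookup(struct rte_fib6 *fib,
	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
	uint64_t *next_hops, int n)
{
	/* lookups keep going through fib->lookup after the switch */
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR) == 0)
		rte_fib6_lookup_bulk(fib, ips, next_hops, n);
}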

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cd0c75e 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 6/8] fib6: move lookup definition into the header file
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (6 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 7/8] fib6: introduce AVX512 lookup
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (7 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the FIB6 trie algorithm using
the AVX512 instruction set.
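
For illustration only (not part of this patch), a sketch of letting the
library choose between the new AVX512 trie lookup and the scalar one;
pick_best_trie_lookup() is a hypothetical helper:

#include <rte_fib6.h>

static int
pick_best_trie_lookup(struct rte_fib6 *fib)
{
	/* RTE_FIB6_TRIE_ANY selects AVX512 when the CPU flags and the
	 * max SIMD bitwidth allow it, and falls back to scalar otherwise
	 */
	return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_ANY);
}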

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  35 +++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 347 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 8c2a89f..fc9c13b 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -150,7 +150,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added a AVX512 lookup functions implementation into FIB library.
+  Added a AVX512 lookup functions implementation into FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cd0c75e..2b2a1c8 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_VECTOR_AVX512, /**< Vector implementation using AVX512 */
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..3e5f4b9 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +46,35 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +84,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiller happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v11 8/8] app/testfib: add support for different lookup functions
  2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
                                     ` (8 preceding siblings ...)
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-16 15:42                   ` Vladimir Medvedkin
  9 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-16 15:42 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added the -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of loookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
@ 2020-10-19  6:35                     ` Kinsella, Ray
  2020-10-19 10:12                       ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-19  6:35 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 16/10/2020 16:42, Vladimir Medvedkin wrote:
> New data type to manipulate 512 bit AVX values.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

This patch has already been applied - need to drop it from the v12.

Ray K

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type
  2020-10-19  6:35                     ` Kinsella, Ray
@ 2020-10-19 10:12                       ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-19 10:12 UTC (permalink / raw)
  To: Kinsella, Ray, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Hi Ray,

On 19/10/2020 07:35, Kinsella, Ray wrote:
> 
> 
> On 16/10/2020 16:42, Vladimir Medvedkin wrote:
>> New data type to manipulate 512 bit AVX values.
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> This patch has already been applied - need to drop it from the v12.
> 

You're right, will send v12 asap.

> Ray K
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 0/7] fib: implement AVX512 vector lookup
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                         ` (7 more replies)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 1/7] fib: make lookup function type configurable Vladimir Medvedkin
                                       ` (6 subsequent siblings)
  7 siblings, 8 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

Depends-on: patch-81125 ("eal: add max SIMD bitwidth")

v12:
 - rebase on the latest main
 - drop "eal/x86: introduce AVX 512-bit type" patch

v11:
 - fix compilation issue with unused nh_sz variable

v10:
 - reflects the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebase on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (7):
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  65 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               | 331 +++++++++------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 ++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  36 ++++
 lib/librte_fib/rte_fib6.c              |  20 +-
 lib/librte_fib/rte_fib6.h              |  26 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 193 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 16 files changed, 1181 insertions(+), 390 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 1/7] fib: make lookup function type configurable
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
                                       ` (5 subsequent siblings)
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add a new function, rte_fib_set_lookup_fn(), so that the user can
change the lookup function type at runtime.

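For illustration only, a minimal sketch of selecting one of the scalar
lookup flavours at run time. The helper name is hypothetical and "fib"
is assumed to be a handle created elsewhere with rte_fib_create(); only
rte_fib_set_lookup_fn() and the enum values come from this patch.

  #include <stdio.h>

  #include <rte_fib.h>

  /* Sketch only: prefer the unified scalar lookup for an existing
   * RTE_FIB_DIR24_8 object, falling back to the macro-based one.
   */
  static int
  select_scalar_lookup(struct rte_fib *fib)
  {
          if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_UNI) == 0)
                  return 0;
          if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO) == 0)
                  return 0;
          printf("no scalar lookup function available\n");
          return -1;
  }

Each call returns 0 on success and -EINVAL if the requested type cannot
be served, so attempts can be chained in order of preference.
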
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c           | 84 +++++++++++++++++++++++---------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++-
 lib/librte_fib/rte_fib.h           | 32 +++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..2097ee5 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 2/7] fib: move lookup definition into the header file
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 1/7] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
                                       ` (4 subsequent siblings)
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 3/7] fib: introduce AVX512 lookup
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                       ` (2 preceding siblings ...)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
                                       ` (3 subsequent siblings)
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.

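For illustration only, a hedged sketch (helper name hypothetical, "fib"
assumed to be an existing RTE_FIB_DIR24_8 object) of requesting the new
AVX512 lookup explicitly and falling back to the scalar macro
implementation when the CPU or the max SIMD bitwidth does not allow it:

  #include <rte_fib.h>

  /* Sketch only: prefer the AVX512 bulk lookup, fall back to scalar. */
  static void
  select_dir24_8_lookup(struct rte_fib *fib)
  {
          if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) != 0)
                  (void)rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
  }

Passing RTE_FIB_DIR24_8_ANY performs an equivalent selection internally;
this patch uses it in init_dataplane() so that newly created FIBs pick
the best available implementation.
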
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  38 ++++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 270 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index a9bcf5e..0e5f2de 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -344,6 +344,9 @@ New Features
   * Replaced ``--scalar`` command-line option with ``--alg=<value>``, to allow
     the user to select the desired classify method.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added an AVX512 lookup function implementation to the FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..b96d810 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -18,6 +18,12 @@
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +62,38 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +107,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 2097ee5..d4e5d91 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -67,10 +67,14 @@ enum rte_fib_dir24_8_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	/**< Vector implementation using AVX512 */
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 4/7] fib6: make lookup function type configurable
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                       ` (3 preceding siblings ...)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
                                       ` (2 subsequent siblings)
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_TRIE_SCALAR.

Add a new function, rte_fib6_set_lookup_fn(), so that the user can
change the lookup function type at runtime.

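For illustration only, a hedged sketch (helper name hypothetical,
"fib6" assumed to be an existing RTE_FIB6_TRIE object) of selecting the
scalar TRIE lookup explicitly; the AVX512 variant is added by a later
patch in this series.

  #include <stdio.h>

  #include <rte_fib6.h>

  /* Sketch only: explicitly select the scalar TRIE lookup. */
  static void
  select_trie_lookup(struct rte_fib6 *fib6)
  {
          if (rte_fib6_set_lookup_fn(fib6, RTE_FIB6_TRIE_SCALAR) != 0)
                  printf("cannot select the scalar TRIE lookup\n");
  }
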
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cd0c75e 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 5/7] fib6: move lookup definition into the header file
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                       ` (4 preceding siblings ...)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.
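
For context, a short sketch of what this enables (file names as introduced
later in this series): the separately compiled vector code only needs to
include the private header to reuse the table layout and the scalar bulk
lookups.

/* trie_avx512.c (added by a later patch in this series) */
#include "trie.h"        /* struct rte_trie_tbl, rte_trie_lookup_bulk_*() */
#include "trie_avx512.h" /* prototypes of the AVX512 bulk lookup functions */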

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 6/7] fib6: introduce AVX512 lookup
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                       ` (5 preceding siblings ...)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the FIB6 trie algorithm using
the AVX512 instruction set.
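
As a rough illustration (not part of this patch; the helper name and the
fib6 handle are assumptions), an application could request the new
implementation through rte_fib6_set_lookup_fn() and fall back to the
scalar lookup when AVX512 is not usable on the running system:

#include <rte_fib6.h>

/* Illustrative helper: prefer the AVX512 trie lookup. The setter returns
 * -EINVAL when the CPU flags or the configured max SIMD bitwidth rule the
 * vector implementation out, in which case fall back to scalar.
 */
static int
select_trie_lookup(struct rte_fib6 *fib)
{
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) == 0)
		return 0;
	return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}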

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  35 +++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 347 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0e5f2de..f14f444 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -346,7 +346,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added a AVX512 lookup functions implementation into FIB library.
+  Added an AVX512 lookup functions implementation into FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cd0c75e..2b2a1c8 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_VECTOR_AVX512, /**< Vector implementation using AVX512 */
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..3e5f4b9 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -18,6 +18,12 @@
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +46,35 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_get_max_simd_bitwidth() < RTE_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +84,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v12 7/7] app/testfib: add support for different lookup functions
  2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
                                       ` (6 preceding siblings ...)
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-19 10:17                     ` Vladimir Medvedkin
  7 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 10:17 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added the -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 0/7] fib: implement AVX512 vector lookup
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                           ` (8 more replies)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable Vladimir Medvedkin
                                         ` (6 subsequent siblings)
  7 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v13:
 - reflect the latest changes in "eal: add max SIMD bitwidth" patch

v12:
 - rebase on the latest main
 - drop "eal/x86: introduce AVX 512-bit type" patch

v11:
 - fix compilation issue with unused nh_sz variable

v10:
 - reflects the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebuild on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (7):
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions

 app/test-fib/main.c                    |  65 ++++++-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               | 332 +++++++++------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 ++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  21 ++-
 lib/librte_fib/rte_fib.h               |  36 ++++
 lib/librte_fib/rte_fib6.c              |  20 +-
 lib/librte_fib/rte_fib6.h              |  26 +++
 lib/librte_fib/rte_fib_version.map     |   2 +
 lib/librte_fib/trie.c                  | 194 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 ++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 16 files changed, 1183 insertions(+), 390 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:55                         ` Kinsella, Ray
  2020-10-22 11:52                         ` David Marchand
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
                                         ` (5 subsequent siblings)
  7 siblings, 2 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to dir24_8_get_lookup_fn().
It now supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add a new rte_fib_set_lookup_fn() so that the user can change the
lookup function type at runtime.
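
For illustration only (the helper name and error policy are assumptions;
the fib handle is assumed to come from rte_fib_create()), selecting one of
the scalar implementations could look like this:

#include <stdbool.h>
#include <rte_fib.h>

/* Illustrative helper: pick the inlined or macro based scalar lookup.
 * rte_fib_set_lookup_fn() returns -EINVAL if the requested type cannot
 * be used for this FIB.
 */
static int
select_scalar_lookup(struct rte_fib *fib, bool inlined)
{
	return rte_fib_set_lookup_fn(fib, inlined ?
		RTE_FIB_DIR24_8_SCALAR_INLINE :
		RTE_FIB_DIR24_8_SCALAR_MACRO);
}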

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c           | 84 +++++++++++++++++++++++---------------
 lib/librte_fib/dir24_8.h           |  2 +-
 lib/librte_fib/rte_fib.c           | 21 +++++++++-
 lib/librte_fib/rte_fib.h           | 32 +++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..2097ee5 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:56                         ` Kinsella, Ray
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
                                         ` (4 subsequent siblings)
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.
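
For context, a short sketch of what this enables (file names as used by a
later patch in this series): the separately compiled AVX512 implementation
only needs the private header to reuse the table layout and scalar helpers.

/* dir24_8_avx512.c (added by a later patch) */
#include "dir24_8.h"        /* struct dir24_8_tbl, dir24_8_lookup_bulk_*() */
#include "dir24_8_avx512.h" /* prototypes of the AVX512 bulk lookup functions */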

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                         ` (2 preceding siblings ...)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:56                         ` Kinsella, Ray
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
                                         ` (3 subsequent siblings)
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the DIR24_8 algorithm using
the AVX512 instruction set.
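
A minimal usage sketch (not part of the patch; the helper name is an
assumption): the vector implementation can be requested explicitly, and a
caller may fall back to the scalar macro lookup when it is rejected.

#include <rte_fib.h>

/* RTE_FIB_DIR24_8_VECTOR_AVX512 is refused (-EINVAL) when the CPU lacks
 * AVX512F or the configured max SIMD bitwidth is below 512 bits.
 */
static int
select_dir24_8_lookup(struct rte_fib *fib)
{
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) == 0)
		return 0;
	return rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
}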

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  39 ++++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 271 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c2bd6ee..7eacab5 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -344,6 +344,9 @@ New Features
   * Replaced ``--scalar`` command-line option with ``--alg=<value>``, to allow
     the user to select the desired classify method.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added an AVX512 lookup functions implementation into FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..891fd78 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib.h>
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +63,38 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +108,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 2097ee5..d4e5d91 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -67,10 +67,14 @@ enum rte_fib_dir24_8_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	/**< Vector implementation using AVX512 */
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                         ` (3 preceding siblings ...)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:56                         ` Kinsella, Ray
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
                                         ` (2 subsequent siblings)
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add type argument to trie_get_lookup_fn()
Now it only supports RTE_FIB6_TRIE_SCALAR

Add new rte_fib6_set_lookup_fn() - user can change lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
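Not part of the patch: a short sketch of the intended use of the new
setter (the helper is illustrative; it assumes a TRIE-based FIB6 created
with rte_fib6_create() beforehand):

	#include <rte_fib6.h>

	/* Returns 0 on success, -EINVAL if the FIB is not TRIE based or
	 * the requested lookup type is not available.
	 */
	static int
	use_scalar_lookup(struct rte_fib6 *fib)
	{
		return rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
	}

Later patches in the series add further values to
enum rte_fib_trie_lookup_type, so callers should check the return value
rather than assume a given type is available.
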
 lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
 lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
 lib/librte_fib/rte_fib_version.map |  1 +
 lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
 lib/librte_fib/trie.h              |  2 +-
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cd0c75e 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/rte_fib_version.map b/lib/librte_fib/rte_fib_version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/rte_fib_version.map
+++ b/lib/librte_fib/rte_fib_version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                         ` (4 preceding siblings ...)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:56                         ` Kinsella, Ray
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                         ` (5 preceding siblings ...)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:57                         ` Kinsella, Ray
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
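Not part of the patch: with this change an IPv6 FIB can opt in to the
vector lookup the same way as the IPv4 one. A minimal sketch (helper
name is illustrative; it assumes a TRIE-based FIB6 already created):

	#include <rte_fib6.h>

	static void
	select_trie_lookup(struct rte_fib6 *fib)
	{
		/* Ask for AVX512 explicitly, otherwise let the library
		 * pick the best implementation available.
		 */
		if (rte_fib6_set_lookup_fn(fib,
				RTE_FIB6_TRIE_VECTOR_AVX512) != 0)
			rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_ANY);
	}

Bulk lookups through rte_fib6_lookup_bulk() then process 16 addresses at
a time for 2/4-byte next hops and 8 at a time for 8-byte next hops, with
the scalar code handling any remainder.
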
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  36 +++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 348 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7eacab5..fa50e81 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -346,7 +346,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added an AVX512 lookup implementation to the FIB library.
+  Added an AVX512 lookup implementation to the FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cd0c75e..2b2a1c8 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_VECTOR_AVX512, /**< Vector implementation using AVX512 */
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..d1b7672 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +47,35 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +85,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions
  2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
                                         ` (6 preceding siblings ...)
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-19 15:05                       ` Vladimir Medvedkin
  2020-10-22  7:57                         ` Kinsella, Ray
  7 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-19 15:05 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
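Not part of the patch: for reviewers, the -v values map to the lookup
types set in run_v4()/run_v6() below as follows:

  s1 (or s)  ->  RTE_FIB_DIR24_8_SCALAR_MACRO / RTE_FIB6_TRIE_SCALAR
  s2         ->  RTE_FIB_DIR24_8_SCALAR_INLINE (IPv4 only)
  s3         ->  RTE_FIB_DIR24_8_SCALAR_UNI (IPv4 only)
  v          ->  RTE_FIB_DIR24_8_VECTOR_AVX512 / RTE_FIB6_TRIE_VECTOR_AVX512

so the same route and lookup input can be replayed with "-v s1" and
"-v v" to compare the scalar and vector implementations on identical
data.
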
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of loookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-22  7:55                         ` Kinsella, Ray
  2020-10-22 11:52                         ` David Marchand
  1 sibling, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:55 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Add type argument to dir24_8_get_lookup_fn()
> Now it supports 3 different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
> 
> Add new rte_fib_set_lookup_fn() - user can change lookup
> function type runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-22  7:56                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:56 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Move dir24_8 table layout and lookup definition into the
> private header file. This is necessary for implementing a
> vectorized lookup function in a separate .c file.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
>  lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 225 insertions(+), 224 deletions(-)
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-22  7:56                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:56 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Add new lookup implementation for DIR24_8 algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/rel_notes/release_20_11.rst |   3 +
>  lib/librte_fib/dir24_8.c               |  39 ++++++++
>  lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
>  lib/librte_fib/dir24_8_avx512.h        |  24 +++++
>  lib/librte_fib/meson.build             |  34 +++++++
>  lib/librte_fib/rte_fib.c               |   2 +-
>  lib/librte_fib/rte_fib.h               |   6 +-
>  7 files changed, 271 insertions(+), 2 deletions(-)
>  create mode 100644 lib/librte_fib/dir24_8_avx512.c
>  create mode 100644 lib/librte_fib/dir24_8_avx512.h
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-22  7:56                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:56 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Add type argument to trie_get_lookup_fn()
> Now it only supports RTE_FIB6_TRIE_SCALAR
> 
> Add new rte_fib6_set_lookup_fn() - user can change lookup
> function type at runtime.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/librte_fib/rte_fib6.c          | 20 +++++++++++++++-
>  lib/librte_fib/rte_fib6.h          | 23 +++++++++++++++++++
>  lib/librte_fib/rte_fib_version.map |  1 +
>  lib/librte_fib/trie.c              | 47 +++++++++++++++++++++++---------------
>  lib/librte_fib/trie.h              |  2 +-
>  5 files changed, 72 insertions(+), 21 deletions(-)
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-22  7:56                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:56 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Move trie table layout and lookup definition into the
> private header file. This is necessary for implementing a
> vectorized lookup function in a separate .c file.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-22  7:57                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:57 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Add new lookup implementation for FIB6 trie algorithm using
> AVX512 instruction set
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
@ 2020-10-22  7:57                         ` Kinsella, Ray
  0 siblings, 0 replies; 199+ messages in thread
From: Kinsella, Ray @ 2020-10-22  7:57 UTC (permalink / raw)
  To: Vladimir Medvedkin, dev
  Cc: david.marchand, jerinj, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power



On 19/10/2020 16:05, Vladimir Medvedkin wrote:
> Added -v option to switch between different lookup implementations
> to measure their performance and correctness.
> 
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable Vladimir Medvedkin
  2020-10-22  7:55                         ` Kinsella, Ray
@ 2020-10-22 11:52                         ` David Marchand
  2020-10-22 15:11                           ` Medvedkin, Vladimir
  1 sibling, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-10-22 11:52 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

On Mon, Oct 19, 2020 at 5:05 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> Add type argument to dir24_8_get_lookup_fn()
> Now it supports 3 different lookup implementations:
>  RTE_FIB_DIR24_8_SCALAR_MACRO
>  RTE_FIB_DIR24_8_SCALAR_INLINE
>  RTE_FIB_DIR24_8_SCALAR_UNI
>
> Add new rte_fib_set_lookup_fn() - user can change lookup
> function type runtime.
>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

We create a fib object with a type: either RTE_FIB_DUMMY or
RTE_FIB_DIR24_8 (separate topic, we probably do not need
RTE_FIB_TYPE_MAX).

This lookup API is dir24_8 specific.
If we won't abstract the lookup selection type, why not change this
API as dir24_8 specific?
I.e. s/rte_fib_set_lookup_fn/rte_fib_dir24_8_set_lookup_fn/g

The same would apply to FIB6 trie implementation.

-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-22 11:52                         ` David Marchand
@ 2020-10-22 15:11                           ` Medvedkin, Vladimir
  2020-10-23 10:29                             ` David Marchand
  0 siblings, 1 reply; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-22 15:11 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

Hi David,

On 22/10/2020 12:52, David Marchand wrote:
> On Mon, Oct 19, 2020 at 5:05 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> Add type argument to dir24_8_get_lookup_fn()
>> Now it supports 3 different lookup implementations:
>>   RTE_FIB_DIR24_8_SCALAR_MACRO
>>   RTE_FIB_DIR24_8_SCALAR_INLINE
>>   RTE_FIB_DIR24_8_SCALAR_UNI
>>
>> Add new rte_fib_set_lookup_fn() - user can change lookup
>> function type runtime.
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> We create a fib object with a type: either RTE_FIB_DUMMY or
> RTE_FIB_DIR24_8 (separate topic, we probably do not need
> RTE_FIB_TYPE_MAX).
RTE_FIB_TYPE_MAX is used for early sanity check. I can remove it 
(relying on that init_dataplane() will return error for improper type), 
if you think that it is better to get rid of it.

> 
> This lookup API is dir24_8 specific.
> If we won't abstract the lookup selection type, why not change this
> API as dir24_8 specific?
> I.e. s/rte_fib_set_lookup_fn/rte_fib_dir24_8_set_lookup_fn/g
> 
> The same would apply to FIB6 trie implementation.

Good point. In future I want to add more data plane algorithms such as 
DXR or Poptrie for example. In this case I don't really want to have 
separate function for every supported algorithm, i.e. I think it is 
better to have single rte_fib_set_lookup_fn(). But on the other hand it 
needs to be generic in this case. In future releases I want to get rid 
of different dir24_8's scalar implementations (MACRO/INLINE/UNI). After 
this we can change types to algorithm agnostic names:
RTE_FIB_SCALAR,
RTE_FIB_VECTOR_AVX512

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-22 15:11                           ` Medvedkin, Vladimir
@ 2020-10-23 10:29                             ` David Marchand
  2020-10-23 16:09                               ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-10-23 10:29 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

On Thu, Oct 22, 2020 at 5:12 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
>
> Hi David,
>
> On 22/10/2020 12:52, David Marchand wrote:
> > On Mon, Oct 19, 2020 at 5:05 PM Vladimir Medvedkin
> > <vladimir.medvedkin@intel.com> wrote:
> >>
> >> Add type argument to dir24_8_get_lookup_fn()
> >> Now it supports 3 different lookup implementations:
> >>   RTE_FIB_DIR24_8_SCALAR_MACRO
> >>   RTE_FIB_DIR24_8_SCALAR_INLINE
> >>   RTE_FIB_DIR24_8_SCALAR_UNI
> >>
> >> Add new rte_fib_set_lookup_fn() - user can change lookup
> >> function type runtime.
> >>
> >> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> >> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >
> > We create a fib object with a type: either RTE_FIB_DUMMY or
> > RTE_FIB_DIR24_8 (separate topic, we probably do not need
> > RTE_FIB_TYPE_MAX).
> RTE_FIB_TYPE_MAX is used for early sanity check. I can remove it
> (relying on that init_dataplane() will return error for improper type),
> if you think that it is better to get rid of it.

Applications could start using it.
If you don't need it, don't expose it.

A validation on type <= RTE_FIB_DIR24_8 should be enough.
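
Something along these lines in rte_fib_create() would be enough (sketch
only, not the actual patch code):

	/* reject unknown dataplane types without exposing a _MAX value */
	if (conf->type > RTE_FIB_DIR24_8) {
		rte_errno = EINVAL;
		return NULL;
	}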


>
> >
> > This lookup API is dir24_8 specific.
> > If we won't abstract the lookup selection type, why not change this
> > API as dir24_8 specific?
> > I.e. s/rte_fib_set_lookup_fn/rte_fib_dir24_8_set_lookup_fn/g
> >
> > The same would apply to FIB6 trie implementation.
>
> Good point. In future I want to add more data plane algorithms such as
> DXR or Poptrie for example. In this case I don't really want to have
> separate function for every supported algorithm, i.e. I think it is
> better to have single rte_fib_set_lookup_fn(). But on the other hand it
> needs to be generic in this case. In future releases I want to get rid
> of different dir24_8's scalar implementations (MACRO/INLINE/UNI). After
> this we can change types to algorithm agnostic names:
> RTE_FIB_SCALAR,
> RTE_FIB_VECTOR_AVX512

Is there a real benefit from those 3 scalar lookup implementations for dir24_8?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable
  2020-10-23 10:29                             ` David Marchand
@ 2020-10-23 16:09                               ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-23 16:09 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

Hello,

On 23/10/2020 11:29, David Marchand wrote:
> On Thu, Oct 22, 2020 at 5:12 PM Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com> wrote:
>>
>> Hi David,
>>
>> On 22/10/2020 12:52, David Marchand wrote:
>>> On Mon, Oct 19, 2020 at 5:05 PM Vladimir Medvedkin
>>> <vladimir.medvedkin@intel.com> wrote:
>>>>
>>>> Add type argument to dir24_8_get_lookup_fn()
>>>> Now it supports 3 different lookup implementations:
>>>>    RTE_FIB_DIR24_8_SCALAR_MACRO
>>>>    RTE_FIB_DIR24_8_SCALAR_INLINE
>>>>    RTE_FIB_DIR24_8_SCALAR_UNI
>>>>
>>>> Add new rte_fib_set_lookup_fn() - user can change lookup
>>>> function type runtime.
>>>>
>>>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>>>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>>
>>> We create a fib object with a type: either RTE_FIB_DUMMY or
>>> RTE_FIB_DIR24_8 (separate topic, we probably do not need
>>> RTE_FIB_TYPE_MAX).
>> RTE_FIB_TYPE_MAX is used for early sanity check. I can remove it
>> (relying on that init_dataplane() will return error for improper type),
>> if you think that it is better to get rid of it.
> 
> Applications could start using it.
> If you don't need it, don't expose it.
> 
> A validation on type <= RTE_FIB_DIR24_8 should be enough.
> 

Will remove it in v14, thanks!

> 
>>
>>>
>>> This lookup API is dir24_8 specific.
>>> If we won't abstract the lookup selection type, why not change this
>>> API as dir24_8 specific?
>>> I.e. s/rte_fib_set_lookup_fn/rte_fib_dir24_8_set_lookup_fn/g
>>>
>>> The same would apply to FIB6 trie implementation.
>>
>> Good point. In future I want to add more data plane algorithms such as
>> DXR or Poptrie for example. In this case I don't really want to have
>> separate function for every supported algorithm, i.e. I think it is
>> better to have single rte_fib_set_lookup_fn(). But on the other hand it
>> needs to be generic in this case. In future releases I want to get rid
>> of different dir24_8's scalar implementations (MACRO/INLINE/UNI). After
>> this we can change types to algorithm agnostic names:
>> RTE_FIB_SCALAR,
>> RTE_FIB_VECTOR_AVX512
> 
> Is there a real benefit from those 3 scalar lookup implementations for dir24_8 ?
> 

Initially I sent 3 different implementations to get feedback from the
community on which implementation I should keep. Test results on
different IA CPUs show that the MACRO-based implementation performs
slightly faster. So I think there is no benefit in keeping the other
implementations.

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 0/8] fib: implement AVX512 vector lookup
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 " Vladimir Medvedkin
                                             ` (8 more replies)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable Vladimir Medvedkin
                                           ` (7 subsequent siblings)
  8 siblings, 9 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v14:
 - remove unnecessary FIB types

v13:
 - reflect the latest changes in "eal: add max SIMD bitwidth" patch

v12:
 - rebase on the latest main
 - drop "eal/x86: introduce AVX 512-bit type" patch

v11:
 - fix compilation issue with unused nh_sz variable

v10:
 - reflect the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebase on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions
  fib: remove unnecessary type of fib

 app/test-fib/main.c                    |  65 ++++++-
 app/test/test_fib.c                    |   2 +-
 app/test/test_fib6.c                   |   2 +-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               | 332 +++++++++------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 ++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  23 ++-
 lib/librte_fib/rte_fib.h               |  39 +++-
 lib/librte_fib/rte_fib6.c              |  22 ++-
 lib/librte_fib/rte_fib6.h              |  29 ++-
 lib/librte_fib/trie.c                  | 194 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 ++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 lib/librte_fib/version.map             |   2 +
 18 files changed, 1189 insertions(+), 398 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-26 13:58                           ` David Marchand
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
                                           ` (6 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to dir24_8_get_lookup_fn().
Now it supports 3 different lookup implementations:
 RTE_FIB_DIR24_8_SCALAR_MACRO
 RTE_FIB_DIR24_8_SCALAR_INLINE
 RTE_FIB_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() - the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
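A quick usage sketch (illustrative only, not part of the patch):

	struct rte_fib_conf conf = {
		.type = RTE_FIB_DIR24_8,
		.default_nh = 0,
		.max_routes = 1 << 16,
		.dir24_8 = { .nh_sz = RTE_FIB_DIR24_8_4B, .num_tbl8 = 1 << 15 },
	};
	struct rte_fib *fib = rte_fib_create("example", SOCKET_ID_ANY, &conf);

	/* override the default lookup implementation; returns -EINVAL if
	 * the requested type is not available for this fib */
	if (fib != NULL)
		rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);
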
 lib/librte_fib/dir24_8.c   | 84 ++++++++++++++++++++++++++++------------------
 lib/librte_fib/dir24_8.h   |  2 +-
 lib/librte_fib/rte_fib.c   | 21 +++++++++++-
 lib/librte_fib/rte_fib.h   | 32 ++++++++++++++++++
 lib/librte_fib/version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ff51f65 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..53c5dd2 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..b9f6efb 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..2097ee5 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_dir24_8_lookup_type {
+	RTE_FIB_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_set_lookup_fn(struct rte_fib *fib,
+	enum rte_fib_dir24_8_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/version.map b/lib/librte_fib/version.map
index 9527417..216af66 100644
--- a/lib/librte_fib/version.map
+++ b/lib/librte_fib/version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_set_lookup_fn;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 2/8] fib: move lookup definition into the header file
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                                           ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
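For reference, the moved lookup logic itself is unchanged; decoding one
dir24_8 entry boils down to the following (sketch derived from the code
below):

	/* tbl24 is indexed by the 24 most significant bits of the IP */
	uint64_t ent = get_tbl24(dp, ip, nh_sz);
	/* LSB set marks an "extended" entry pointing into a tbl8 group */
	if (unlikely(is_entry_extended(ent)))
		/* tbl8 index is (ent >> 1) * 256 + least significant IP byte */
		ent = get_tbl8(dp, ent, ip, nh_sz);
	uint64_t next_hop = ent >> 1; /* next hop is stored shifted by one */
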
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ff51f65..b5f2363 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 53c5dd2..56d0389 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 3/8] fib: introduce AVX512 lookup
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (2 preceding siblings ...)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
                                           ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the DIR24_8 algorithm using the
AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
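Selecting the new implementation from an application (sketch only):

	/* explicit request; rte_fib_set_lookup_fn() returns -EINVAL if the
	 * CPU or the max SIMD bitwidth setting does not allow AVX512 */
	if (rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_VECTOR_AVX512) < 0)
		rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_SCALAR_MACRO);

	/* or let the library pick the best available implementation */
	rte_fib_set_lookup_fn(fib, RTE_FIB_DIR24_8_ANY);
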
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  39 ++++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 271 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d8ac359..41d372c 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -345,6 +345,9 @@ New Features
   * Replaced ``--scalar`` command-line option with ``--alg=<value>``, to allow
     the user to select the desired classify method.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added a AVX512 lookup functions implementation into FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index b5f2363..891fd78 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib.h>
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +63,38 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +108,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_dir24_8_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_DIR24_8_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* mask 24 most significant bits */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* mask 24 most significant bits */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index b9f6efb..1af2a5f 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_DIR24_8_ANY);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 2097ee5..d4e5d91 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -67,10 +67,14 @@ enum rte_fib_dir24_8_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_DIR24_8_SCALAR_UNI
+	RTE_FIB_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_DIR24_8_VECTOR_AVX512,
+	/**< Vector implementation using AVX512 */
+	RTE_FIB_DIR24_8_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 4/8] fib6: make lookup function type configurable
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (3 preceding siblings ...)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                                           ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to trie_get_lookup_fn().
Now it only supports RTE_FIB6_TRIE_SCALAR.

Add new rte_fib6_set_lookup_fn() - the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
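Usage mirrors the IPv4 variant (sketch only):

	/* for a fib6 created with RTE_FIB6_TRIE; scalar is currently the
	 * only selectable lookup implementation */
	rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
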
 lib/librte_fib/rte_fib6.c  | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h  | 23 +++++++++++++++++++++++
 lib/librte_fib/trie.c      | 47 +++++++++++++++++++++++++++-------------------
 lib/librte_fib/trie.h      |  2 +-
 lib/librte_fib/version.map |  1 +
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..566cd5f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..cd0c75e 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of nexthop (1 << nh_sz) bits for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_trie_lookup_type {
+	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_set_lookup_fn(struct rte_fib6 *fib,
+	enum rte_fib_trie_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..fc14670 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..0d5ef9a 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
diff --git a/lib/librte_fib/version.map b/lib/librte_fib/version.map
index 216af66..9d1e181 100644
--- a/lib/librte_fib/version.map
+++ b/lib/librte_fib/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_set_lookup_fn;
 
 	local: *;
 };
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 5/8] fib6: move lookup definition into the header file
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (4 preceding siblings ...)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
                                           ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
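The per-address lookup that moves into the header walks the trie one
byte at a time after the initial 3-byte tbl24 step; e.g. with 4-byte
next hops it boils down to (sketch derived from LOOKUP_FUNC below,
where ip is the 16-byte address and next_hop the result):

	/* tbl24 is indexed by the first 3 bytes of the IPv6 address */
	uint64_t ent = ((uint32_t *)dp->tbl24)[get_tbl24_idx(ip)];
	for (j = 3; is_entry_extended(ent); j++)
		/* follow chained 256-entry tbl8 groups, one per address byte */
		ent = ((uint32_t *)dp->tbl8)[ip[j] +
			(ent >> 1) * TRIE_TBL8_GRP_NUM_ENT];
	next_hop = ent >> 1;
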
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index fc14670..82ba13d 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index 0d5ef9a..663c7a9 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current cumber of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
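
For reference, a minimal single-address sketch of what the generated bulk
lookup functions above do, assuming the trie.h definitions from this patch
(and rte_fib6.h) are in scope. The standalone function below is purely
illustrative and not part of the library:

static uint64_t
trie_lookup_one_2b(struct rte_trie_tbl *dp,
	const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE])
{
	/* the first three address bytes index tbl24 directly */
	uint64_t ent = ((uint16_t *)dp->tbl24)[get_tbl24_idx(ip)];
	uint32_t byte = 3;

	/* an entry with the LSB set points into a tbl8 group;
	 * each extra level consumes one more address byte
	 */
	while (is_entry_extended(ent))
		ent = ((uint16_t *)dp->tbl8)[ip[byte++] +
			(ent >> 1) * TRIE_TBL8_GRP_NUM_ENT];

	/* the bits above the flag hold the next hop */
	return ent >> 1;
}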

* [dpdk-dev] [PATCH v14 6/8] fib6: introduce AVX512 lookup
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (5 preceding siblings ...)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-25 18:07                         ` Vladimir Medvedkin
  2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:07 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add new lookup implementation for FIB6 trie algorithm using
AVX512 instruction set

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  36 +++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 348 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 41d372c..dbcd331 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -347,7 +347,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added a AVX512 lookup functions implementation into FIB library.
+  Added a AVX512 lookup functions implementation into FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 566cd5f..8512584 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_TRIE_ANY);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index cd0c75e..2b2a1c8 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_trie_lookup_type {
-	RTE_FIB6_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_TRIE_VECTOR_AVX512, /**< Vector implementation using AVX512 */
+	RTE_FIB6_TRIE_ANY = UINT32_MAX
+	/**< Selects the best implementation based on the max simd bitwidth */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 82ba13d..d1b7672 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +47,35 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +85,11 @@ trie_get_lookup_fn(void *p, enum rte_fib_trie_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_TRIE_ANY:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside a branch to make the compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
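
A minimal sketch of how an application could opt into this lookup with the
v14 API, assuming the return convention mirrors the dir24_8 counterpart
(0 on success, non-zero on failure); the helper below is illustrative only:

#include <rte_fib6.h>

static void
fib6_pick_lookup(struct rte_fib6 *fib)
{
	/* request the AVX512 trie lookup, fall back to scalar otherwise */
	if (rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_VECTOR_AVX512) != 0)
		rte_fib6_set_lookup_fn(fib, RTE_FIB6_TRIE_SCALAR);
}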

* [dpdk-dev] [PATCH v14 7/8] app/testfib: add support for different lookup functions
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (6 preceding siblings ...)
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-25 18:08                         ` Vladimir Medvedkin
  2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:08 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added -v option to switch between different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..e46d264 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_set_lookup_fn(fib,
+				RTE_FIB_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_set_lookup_fn(fib,
+				RTE_FIB6_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v14 8/8] fib: remove unnecessary type of fib
  2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
                                           ` (7 preceding siblings ...)
  2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
@ 2020-10-25 18:08                         ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-25 18:08 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

FIB type RTE_FIB_TYPE_MAX is used only for sanity checks;
remove it to prevent applications from starting to use it.
The same applies to FIB6's RTE_FIB6_TYPE_MAX.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test/test_fib.c       | 2 +-
 app/test/test_fib6.c      | 2 +-
 lib/librte_fib/rte_fib.c  | 2 +-
 lib/librte_fib/rte_fib.h  | 3 +--
 lib/librte_fib/rte_fib6.c | 2 +-
 lib/librte_fib/rte_fib6.h | 3 +--
 6 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/app/test/test_fib.c b/app/test/test_fib.c
index ca80a5d..e46b993 100644
--- a/app/test/test_fib.c
+++ b/app/test/test_fib.c
@@ -61,7 +61,7 @@ test_create_invalid(void)
 		"Call succeeded with invalid parameters\n");
 	config.max_routes = MAX_ROUTES;
 
-	config.type = RTE_FIB_TYPE_MAX;
+	config.type = RTE_FIB_DIR24_8 + 1;
 	fib = rte_fib_create(__func__, SOCKET_ID_ANY, &config);
 	RTE_TEST_ASSERT(fib == NULL,
 		"Call succeeded with invalid parameters\n");
diff --git a/app/test/test_fib6.c b/app/test/test_fib6.c
index af589fe..74abfc7 100644
--- a/app/test/test_fib6.c
+++ b/app/test/test_fib6.c
@@ -63,7 +63,7 @@ test_create_invalid(void)
 		"Call succeeded with invalid parameters\n");
 	config.max_routes = MAX_ROUTES;
 
-	config.type = RTE_FIB6_TYPE_MAX;
+	config.type = RTE_FIB6_TRIE + 1;
 	fib = rte_fib6_create(__func__, SOCKET_ID_ANY, &config);
 	RTE_TEST_ASSERT(fib == NULL,
 		"Call succeeded with invalid parameters\n");
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index 1af2a5f..4d2af84 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -159,7 +159,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 
 	/* Check user arguments. */
 	if ((name == NULL) || (conf == NULL) ||	(conf->max_routes < 0) ||
-			(conf->type >= RTE_FIB_TYPE_MAX)) {
+			(conf->type > RTE_FIB_DIR24_8)) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index d4e5d91..8b78113 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -34,8 +34,7 @@ struct rte_rib;
 /** Type of FIB struct */
 enum rte_fib_type {
 	RTE_FIB_DUMMY,		/**< RIB tree based FIB */
-	RTE_FIB_DIR24_8,	/**< DIR24_8 based FIB */
-	RTE_FIB_TYPE_MAX
+	RTE_FIB_DIR24_8		/**< DIR24_8 based FIB */
 };
 
 /** Modify FIB function */
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 8512584..0a679a3 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -160,7 +160,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 
 	/* Check user arguments. */
 	if ((name == NULL) || (conf == NULL) || (conf->max_routes < 0) ||
-			(conf->type >= RTE_FIB6_TYPE_MAX)) {
+			(conf->type > RTE_FIB6_TRIE)) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 2b2a1c8..4d43a84 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -35,8 +35,7 @@ struct rte_rib6;
 /** Type of FIB struct */
 enum rte_fib6_type {
 	RTE_FIB6_DUMMY,		/**< RIB6 tree based FIB */
-	RTE_FIB6_TRIE,		/**< TRIE based fib  */
-	RTE_FIB6_TYPE_MAX
+	RTE_FIB6_TRIE		/**< TRIE based fib  */
 };
 
 /** Modify FIB function */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
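
A minimal creation sketch under the tightened type check, assuming the
rte_fib_conf field names from rte_fib.h at this point in the series
(default_nh, max_routes, dir24_8.nh_sz, dir24_8.num_tbl8); all values are
illustrative:

#include <rte_fib.h>

static struct rte_fib *
fib_create_example(void)
{
	struct rte_fib_conf conf = {
		.type = RTE_FIB_DIR24_8, /* types above this are now rejected */
		.default_nh = 0,
		.max_routes = 1 << 20,
		.dir24_8 = {
			.nh_sz = RTE_FIB_DIR24_8_4B,
			.num_tbl8 = 1 << 15,
		},
	};

	return rte_fib_create("example_fib", -1 /* SOCKET_ID_ANY */, &conf);
}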

* Re: [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-26 13:58                           ` David Marchand
  2020-10-26 17:51                             ` Medvedkin, Vladimir
  0 siblings, 1 reply; 199+ messages in thread
From: David Marchand @ 2020-10-26 13:58 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

Hello Vladimir,

On Sun, Oct 25, 2020 at 7:08 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
> index 84ee774..2097ee5 100644
> --- a/lib/librte_fib/rte_fib.h
> +++ b/lib/librte_fib/rte_fib.h
> @@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
>         RTE_FIB_DIR24_8_8B
>  };
>
> +/** Type of lookup function implementation */
> +enum rte_fib_dir24_8_lookup_type {
> +       RTE_FIB_DIR24_8_SCALAR_MACRO,
> +       /**< Macro based lookup function */
> +       RTE_FIB_DIR24_8_SCALAR_INLINE,
> +       /**<
> +        * Lookup implementation using inlined functions
> +        * for different next hop sizes
> +        */
> +       RTE_FIB_DIR24_8_SCALAR_UNI
> +       /**<
> +        * Unified lookup function for all next hop sizes
> +        */
> +};
> +

We can't have a generic function with a specific type.
Let's have a generic name, in the hope it will be extended later for
other fib implementations.
For the default behavior and selecting the "best" possible
implementation, we can introduce a RTE_FIB_LOOKUP_DEFAULT magic value
that would work with any fib type.

How about:

enum rte_fib_lookup_type {
  RTE_FIB_LOOKUP_DEFAULT,
  RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO,
  RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE,
  RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI,
  RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512,
};


>  /** FIB configuration structure */
>  struct rte_fib_conf {
>         enum rte_fib_type type; /**< Type of FIB struct */
> @@ -196,6 +211,23 @@ __rte_experimental
>  struct rte_rib *
>  rte_fib_get_rib(struct rte_fib *fib);
>
> +/**
> + * Set lookup function based on type
> + *
> + * @param fib
> + *   FIB object handle
> + * @param type
> + *   type of lookup function
> + *
> + * @return
> + *    -EINVAL on failure
> + *    0 on success
> + */
> +__rte_experimental
> +int
> +rte_fib_set_lookup_fn(struct rte_fib *fib,
> +       enum rte_fib_dir24_8_lookup_type type);
> +

_fn does not give much info; how about rte_fib_select_lookup?


>  #ifdef __cplusplus
>  }
>  #endif


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread
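
A hypothetical application-side view of the proposal above; the enum values
and rte_fib_select_lookup() here follow the reviewer's suggestion and are
not merged API:

#include <stdbool.h>
#include <rte_fib.h>

static int
fib_pick_lookup(struct rte_fib *fib, bool force_avx512)
{
	if (force_avx512)
		return rte_fib_select_lookup(fib,
			RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512);

	/* otherwise let the library choose for this FIB type */
	return rte_fib_select_lookup(fib, RTE_FIB_LOOKUP_DEFAULT);
}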

* Re: [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable
  2020-10-26 13:58                           ` David Marchand
@ 2020-10-26 17:51                             ` Medvedkin, Vladimir
  0 siblings, 0 replies; 199+ messages in thread
From: Medvedkin, Vladimir @ 2020-10-26 17:51 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

Hello David,

On 26/10/2020 13:58, David Marchand wrote:
> Hello Vladimir,
> 
> On Sun, Oct 25, 2020 at 7:08 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>> diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
>> index 84ee774..2097ee5 100644
>> --- a/lib/librte_fib/rte_fib.h
>> +++ b/lib/librte_fib/rte_fib.h
>> @@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
>>          RTE_FIB_DIR24_8_8B
>>   };
>>
>> +/** Type of lookup function implementation */
>> +enum rte_fib_dir24_8_lookup_type {
>> +       RTE_FIB_DIR24_8_SCALAR_MACRO,
>> +       /**< Macro based lookup function */
>> +       RTE_FIB_DIR24_8_SCALAR_INLINE,
>> +       /**<
>> +        * Lookup implementation using inlined functions
>> +        * for different next hop sizes
>> +        */
>> +       RTE_FIB_DIR24_8_SCALAR_UNI
>> +       /**<
>> +        * Unified lookup function for all next hop sizes
>> +        */
>> +};
>> +
> 
> We can't have a generic function, with a specific type/
> Let's have a generic name, in hope it will be extended later for other
> fib implementations.
> For the default behavior and selecting the "best" possible
> implementation, we can introduce a RTE_FIB_LOOKUP_DEFAULT magic value
> that would work with any fib type.
> 
> How about:
> 
> enum rte_fib_lookup_type {
>    RTE_FIB_LOOKUP_DEFAULT,
>    RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO,
>    RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE,
>    RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI,
>    RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512,
> };
> 
> 

I introduced a special magic value to select the "best" possible lookup
implementation in the "fib: introduce AVX512 lookup" patch:
+       RTE_FIB_DIR24_8_ANY = UINT32_MAX
+       /**< Selects the best implementation based on the max simd 
bitwidth */
and I wanted to get rid of dir24_8 in the names after removing all the
unnecessary lookup implementations in separate patches.

But I'm OK with your suggestion, I will reflect it in v15.


>>   /** FIB configuration structure */
>>   struct rte_fib_conf {
>>          enum rte_fib_type type; /**< Type of FIB struct */
>> @@ -196,6 +211,23 @@ __rte_experimental
>>   struct rte_rib *
>>   rte_fib_get_rib(struct rte_fib *fib);
>>
>> +/**
>> + * Set lookup function based on type
>> + *
>> + * @param fib
>> + *   FIB object handle
>> + * @param type
>> + *   type of lookup function
>> + *
>> + * @return
>> + *    -EINVAL on failure
>> + *    0 on success
>> + */
>> +__rte_experimental
>> +int
>> +rte_fib_set_lookup_fn(struct rte_fib *fib,
>> +       enum rte_fib_dir24_8_lookup_type type);
>> +
> 
> _fn does not give much info, how about rte_fib_select_lookup ?
> 

rte_fib_select_lookup is OK, I will rename it in v15 as well.

> 
>>   #ifdef __cplusplus
>>   }
>>   #endif
> 
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v15 0/8] fib: implement AVX512 vector lookup
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-28 20:51                             ` David Marchand
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 1/8] fib: make lookup function type configurable Vladimir Medvedkin
                                             ` (7 subsequent siblings)
  8 siblings, 1 reply; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

This patch series implements vectorized lookup using AVX512 for
ipv4 dir24_8 and ipv6 trie algorithms.
Also introduced rte_fib_set_lookup_fn() to change lookup function type.
Added option to select lookup function type in testfib application.

v15:
 - rename rte_fib_set_lookup_fn()
 - rename names of lookup types

v14:
 - remove unnecessary FIB types

v13:
 - reflect the latest changes in "eal: add max SIMD bitwidth" patch

v12:
 - rebase on the latest main
 - drop "eal/x86: introduce AVX 512-bit type" patch

v11:
 - fix compilation issue with unused nh_sz variable

v10:
 - reflects the latest changes in the "eal: add max SIMD bitwidth" patch
 - add extra doxygen comments
 - rebuild on the latest main

v9:
 - meson reworked
 - integration with max SIMD bitwidth patchseries
 - changed the logic of function selection on init

v8:
 - remove Makefile related changes
 - fix missing doxygen for lookup_type
 - add release notes

v7:
 - fix RTE_X86_ZMM_MASK macro

v6:
 - style fixes

v5:
 - prefix zmm macro in rte_vect.h with RTE_X86
 - remove unnecessary typedef for _x86_zmm_t
 - reword commit title
 - fix typos

v4:
 - use __rte_aligned() instead of using compiler attribute directly
 - rework and add comments to meson.build

v3:
 - separate out the AVX-512 code into a separate file

v2:
 - rename rte_zmm to __rte_x86_zmm to reflect its internal usage
 - make runtime decision to use avx512 lookup

Vladimir Medvedkin (8):
  fib: make lookup function type configurable
  fib: move lookup definition into the header file
  fib: introduce AVX512 lookup
  fib6: make lookup function type configurable
  fib6: move lookup definition into the header file
  fib6: introduce AVX512 lookup
  app/testfib: add support for different lookup functions
  fib: remove unnecessary type of fib

 app/test-fib/main.c                    |  65 ++++++-
 app/test/test_fib.c                    |   2 +-
 app/test/test_fib6.c                   |   2 +-
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               | 332 +++++++++------------------------
 lib/librte_fib/dir24_8.h               | 226 +++++++++++++++++++++-
 lib/librte_fib/dir24_8_avx512.c        | 165 ++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++
 lib/librte_fib/meson.build             |  51 +++++
 lib/librte_fib/rte_fib.c               |  23 ++-
 lib/librte_fib/rte_fib.h               |  39 +++-
 lib/librte_fib/rte_fib6.c              |  22 ++-
 lib/librte_fib/rte_fib6.h              |  29 ++-
 lib/librte_fib/trie.c                  | 194 ++++++-------------
 lib/librte_fib/trie.h                  | 119 +++++++++++-
 lib/librte_fib/trie_avx512.c           | 269 ++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 ++
 lib/librte_fib/version.map             |   2 +
 18 files changed, 1189 insertions(+), 398 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* [dpdk-dev] [PATCH v15 1/8] fib: make lookup function type configurable
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 " Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
                                             ` (6 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add type argument to dir24_8_get_lookup_fn()
Now it supports 3 different lookup implementations:
 RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO
 RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE
 RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI

Add new rte_fib_set_lookup_fn() so the user can change the lookup
function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c   | 84 ++++++++++++++++++++++++++++------------------
 lib/librte_fib/dir24_8.h   |  2 +-
 lib/librte_fib/rte_fib.c   | 21 +++++++++++-
 lib/librte_fib/rte_fib.h   | 32 ++++++++++++++++++
 lib/librte_fib/version.map |  1 +
 5 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index c9dce3c..ab5a1b2 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -45,13 +45,6 @@ struct dir24_8_tbl {
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-enum lookup_type test_lookup = MACRO;
-
 static inline void *
 get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
 {
@@ -252,35 +245,62 @@ dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
+static inline rte_fib_lookup_fn_t
+get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return dir24_8_lookup_bulk_0;
+	case RTE_FIB_DIR24_8_2B:
+		return dir24_8_lookup_bulk_1;
+	case RTE_FIB_DIR24_8_4B:
+		return dir24_8_lookup_bulk_2;
+	case RTE_FIB_DIR24_8_8B:
+		return dir24_8_lookup_bulk_3;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *fib_conf)
+dir24_8_get_lookup_fn(void *p, enum rte_fib_lookup_type type)
 {
-	enum rte_fib_dir24_8_nh_sz nh_sz = fib_conf->dir24_8.nh_sz;
+	enum rte_fib_dir24_8_nh_sz nh_sz;
+	struct dir24_8_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_1b;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_2b;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_4b;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_8b;
-		}
-	} else if (test_lookup == INLINE) {
-		switch (nh_sz) {
-		case RTE_FIB_DIR24_8_1B:
-			return dir24_8_lookup_bulk_0;
-		case RTE_FIB_DIR24_8_2B:
-			return dir24_8_lookup_bulk_1;
-		case RTE_FIB_DIR24_8_4B:
-			return dir24_8_lookup_bulk_2;
-		case RTE_FIB_DIR24_8_8B:
-			return dir24_8_lookup_bulk_3;
-		}
-	} else
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO:
+		return get_scalar_fn(nh_sz);
+	case RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE:
+		return get_scalar_fn_inlined(nh_sz);
+	case RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	default:
+		return NULL;
+	}
+
 	return NULL;
 }
 
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 1ec437c..6c43f67 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -22,7 +22,7 @@ void
 dir24_8_free(void *p);
 
 rte_fib_lookup_fn_t
-dir24_8_get_lookup_fn(struct rte_fib_conf *conf);
+dir24_8_get_lookup_fn(void *p, enum rte_fib_lookup_type type);
 
 int
 dir24_8_modify(struct rte_fib *fib, uint32_t ip, uint8_t depth,
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index e090808..2b5fdf5 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -107,7 +107,8 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		fib->dp = dir24_8_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = dir24_8_get_lookup_fn(conf);
+		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
+			RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
@@ -317,3 +318,21 @@ rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib_select_lookup(struct rte_fib *fib,
+	enum rte_fib_lookup_type type)
+{
+	rte_fib_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB_DIR24_8:
+		fn = dir24_8_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 84ee774..d46fedc 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -58,6 +58,21 @@ enum rte_fib_dir24_8_nh_sz {
 	RTE_FIB_DIR24_8_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib_lookup_type {
+	RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO,
+	/**< Macro based lookup function */
+	RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE,
+	/**<
+	 * Lookup implementation using inlined functions
+	 * for different next hop sizes
+	 */
+	RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI
+	/**<
+	 * Unified lookup function for all next hop sizes
+	 */
+};
+
 /** FIB configuration structure */
 struct rte_fib_conf {
 	enum rte_fib_type type; /**< Type of FIB struct */
@@ -196,6 +211,23 @@ __rte_experimental
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib_select_lookup(struct rte_fib *fib,
+	enum rte_fib_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/version.map b/lib/librte_fib/version.map
index 9527417..5fd792a 100644
--- a/lib/librte_fib/version.map
+++ b/lib/librte_fib/version.map
@@ -9,6 +9,7 @@ EXPERIMENTAL {
 	rte_fib_lookup_bulk;
 	rte_fib_get_dp;
 	rte_fib_get_rib;
+	rte_fib_select_lookup;
 
 	rte_fib6_add;
 	rte_fib6_create;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
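
A minimal usage sketch for the new selector, assuming the existing
rte_fib_lookup_bulk() signature; the wrapper below is illustrative only:

#include <rte_fib.h>

static int
lookup_with_uni(struct rte_fib *fib, uint32_t *ips, uint64_t *next_hops,
	int n)
{
	int ret;

	/* switch this DIR24_8 FIB to the unified scalar implementation */
	ret = rte_fib_select_lookup(fib, RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI);
	if (ret != 0)
		return ret;	/* -EINVAL for unsupported types */

	return rte_fib_lookup_bulk(fib, ips, next_hops, n);
}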

* [dpdk-dev] [PATCH v15 2/8] fib: move lookup definition into the header file
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 " Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 1/8] fib: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
                                             ` (5 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move dir24_8 table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/dir24_8.c | 225 +----------------------------------------------
 lib/librte_fib/dir24_8.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 225 insertions(+), 224 deletions(-)

diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index ab5a1b2..87400fc 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -11,240 +11,17 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
-#include <rte_fib.h>
 #include <rte_rib.h>
+#include <rte_fib.h>
 #include "dir24_8.h"
 
 #define DIR24_8_NAMESIZE	64
 
-#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
-#define DIR24_8_TBL8_GRP_NUM_ENT	256U
-#define DIR24_8_EXT_ENT			1
-#define DIR24_8_TBL24_MASK		0xffffff00
-
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct dir24_8_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	def_nh;		/**< Default next hop */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
 
-static inline void *
-get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return (void *)&((uint8_t *)dp->tbl24)[(ip &
-		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
-}
-
-static inline  uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static  inline uint32_t
-get_tbl24_idx(uint32_t ip)
-{
-	return ip >> 8;
-}
-
-static  inline uint32_t
-get_tbl8_idx(uint32_t res, uint32_t ip)
-{
-	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
-		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline uint64_t
-get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
-{
-	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
-		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
-static void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips,	\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i;							\
-	uint32_t prefetch_offset =					\
-		RTE_MIN((unsigned int)bulk_prefetch, n);		\
-									\
-	for (i = 0; i < prefetch_offset; i++)				\
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
-	for (i = 0; i < (n - prefetch_offset); i++) {			\
-		rte_prefetch0(get_tbl24_p(dp,				\
-			ips[i + prefetch_offset], nh_sz));		\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-	for (; i < n; i++) {						\
-		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
-		if (unlikely(is_entry_extended(tmp)))			\
-			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
-				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}									\
-
-LOOKUP_FUNC(1b, uint8_t, 5, 0)
-LOOKUP_FUNC(2b, uint16_t, 6, 1)
-LOOKUP_FUNC(4b, uint32_t, 15, 2)
-LOOKUP_FUNC(8b, uint64_t, 12, 3)
-
-static inline void
-dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
-{
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
-static void
-dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
-}
-
-static void
-dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
-}
-
-static void
-dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
-}
-
-static void
-dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-
-	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
-}
-
-static void
-dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
-	uint64_t *next_hops, const unsigned int n)
-{
-	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
-	uint64_t tmp;
-	uint32_t i;
-	uint32_t prefetch_offset = RTE_MIN(15U, n);
-	uint8_t nh_sz = dp->nh_sz;
-
-	for (i = 0; i < prefetch_offset; i++)
-		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
-	for (i = 0; i < (n - prefetch_offset); i++) {
-		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
-			nh_sz));
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-	for (; i < n; i++) {
-		tmp = get_tbl24(dp, ips[i], nh_sz);
-		if (unlikely(is_entry_extended(tmp)))
-			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
-
-		next_hops[i] = tmp >> 1;
-	}
-}
-
 static inline rte_fib_lookup_fn_t
 get_scalar_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/dir24_8.h b/lib/librte_fib/dir24_8.h
index 6c43f67..bac65ee 100644
--- a/lib/librte_fib/dir24_8.h
+++ b/lib/librte_fib/dir24_8.h
@@ -6,6 +6,9 @@
 #ifndef _DIR24_8_H_
 #define _DIR24_8_H_
 
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
 /**
  * @file
  * DIR24_8 algorithm
@@ -15,6 +18,227 @@
 extern "C" {
 #endif
 
+#define DIR24_8_TBL24_NUM_ENT		(1 << 24)
+#define DIR24_8_TBL8_GRP_NUM_ENT	256U
+#define DIR24_8_EXT_ENT			1
+#define DIR24_8_TBL24_MASK		0xffffff00
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1 << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct dir24_8_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	enum rte_fib_dir24_8_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	def_nh;		/**< Default next hop */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint64_t	*tbl8_idxes;	/**< bitmap containing free tbl8 idxes*/
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline void *
+get_tbl24_p(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return (void *)&((uint8_t *)dp->tbl24)[(ip &
+		DIR24_8_TBL24_MASK) >> (8 - nh_sz)];
+}
+
+static inline  uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static  inline uint32_t
+get_tbl24_idx(uint32_t ip)
+{
+	return ip >> 8;
+}
+
+static  inline uint32_t
+get_tbl8_idx(uint32_t res, uint32_t ip)
+{
+	return (res >> 1) * DIR24_8_TBL8_GRP_NUM_ENT + (uint8_t)ip;
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl24(struct dir24_8_tbl *dp, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl24[get_tbl_idx(get_tbl24_idx(ip), nh_sz)] >>
+		(get_psd_idx(get_tbl24_idx(ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline uint64_t
+get_tbl8(struct dir24_8_tbl *dp, uint32_t res, uint32_t ip, uint8_t nh_sz)
+{
+	return ((dp->tbl8[get_tbl_idx(get_tbl8_idx(res, ip), nh_sz)] >>
+		(get_psd_idx(get_tbl8_idx(res, ip), nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & DIR24_8_EXT_ENT) == DIR24_8_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, bulk_prefetch, nh_sz)			\
+static inline void dir24_8_lookup_bulk_##suffix(void *p, const uint32_t *ips, \
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i;							\
+	uint32_t prefetch_offset =					\
+		RTE_MIN((unsigned int)bulk_prefetch, n);		\
+									\
+	for (i = 0; i < prefetch_offset; i++)				\
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));		\
+	for (i = 0; i < (n - prefetch_offset); i++) {			\
+		rte_prefetch0(get_tbl24_p(dp,				\
+			ips[i + prefetch_offset], nh_sz));		\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+	for (; i < n; i++) {						\
+		tmp = ((type *)dp->tbl24)[ips[i] >> 8];			\
+		if (unlikely(is_entry_extended(tmp)))			\
+			tmp = ((type *)dp->tbl8)[(uint8_t)ips[i] +	\
+				((tmp >> 1) * DIR24_8_TBL8_GRP_NUM_ENT)]; \
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}									\
+
+LOOKUP_FUNC(1b, uint8_t, 5, 0)
+LOOKUP_FUNC(2b, uint16_t, 6, 1)
+LOOKUP_FUNC(4b, uint32_t, 15, 2)
+LOOKUP_FUNC(8b, uint64_t, 12, 3)
+
+static inline void
+dir24_8_lookup_bulk(struct dir24_8_tbl *dp, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n, uint8_t nh_sz)
+{
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
+static inline void
+dir24_8_lookup_bulk_0(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 0);
+}
+
+static inline void
+dir24_8_lookup_bulk_1(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 1);
+}
+
+static inline void
+dir24_8_lookup_bulk_2(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 2);
+}
+
+static inline void
+dir24_8_lookup_bulk_3(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+
+	dir24_8_lookup_bulk(dp, ips, next_hops, n, 3);
+}
+
+static inline void
+dir24_8_lookup_bulk_uni(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	uint64_t tmp;
+	uint32_t i;
+	uint32_t prefetch_offset = RTE_MIN(15U, n);
+	uint8_t nh_sz = dp->nh_sz;
+
+	for (i = 0; i < prefetch_offset; i++)
+		rte_prefetch0(get_tbl24_p(dp, ips[i], nh_sz));
+	for (i = 0; i < (n - prefetch_offset); i++) {
+		rte_prefetch0(get_tbl24_p(dp, ips[i + prefetch_offset],
+			nh_sz));
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+	for (; i < n; i++) {
+		tmp = get_tbl24(dp, ips[i], nh_sz);
+		if (unlikely(is_entry_extended(tmp)))
+			tmp = get_tbl8(dp, tmp, ips[i], nh_sz);
+
+		next_hops[i] = tmp >> 1;
+	}
+}
+
 void *
 dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
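For context, a minimal sketch (not part of the patch) of how an application drives
these dir24_8 bulk lookup functions through the public FIB API. It assumes EAL is
already initialized; the FIB name, table sizes, routes and next-hop values are
purely illustrative.

#include <rte_fib.h>
#include <rte_ip.h>
#include <rte_memory.h>

static int
fib_lookup_sketch(void)
{
	/* DIR24_8 dataplane with 4-byte next hops; sizes are illustrative */
	struct rte_fib_conf conf = {
		.type = RTE_FIB_DIR24_8,
		.default_nh = 0,
		.max_routes = 1 << 16,
		.dir24_8 = {
			.nh_sz = RTE_FIB_DIR24_8_4B,
			.num_tbl8 = 1 << 15,
		},
	};
	uint32_t ips[4];
	uint64_t next_hops[4];
	struct rte_fib *fib;

	fib = rte_fib_create("example_fib", SOCKET_ID_ANY, &conf);
	if (fib == NULL)
		return -1;

	/* a /24 resolved directly in tbl24, a /32 resolved via a tbl8 group */
	rte_fib_add(fib, RTE_IPV4(10, 0, 0, 0), 24, 100);
	rte_fib_add(fib, RTE_IPV4(10, 0, 0, 1), 32, 200);

	ips[0] = RTE_IPV4(10, 0, 0, 1);	/* expected next hop: 200 */
	ips[1] = RTE_IPV4(10, 0, 0, 2);	/* expected next hop: 100 */
	ips[2] = RTE_IPV4(10, 0, 1, 1);	/* no route: default_nh (0) */
	ips[3] = RTE_IPV4(10, 0, 0, 3);	/* expected next hop: 100 */

	/* dispatches to one of the dir24_8_lookup_bulk_* functions above */
	rte_fib_lookup_bulk(fib, ips, next_hops, 4);

	rte_fib_free(fib);
	return 0;
}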

* [dpdk-dev] [PATCH v15 3/8] fib: introduce AVX512 lookup
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (2 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
                                             ` (4 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the DIR24_8 algorithm using
the AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   3 +
 lib/librte_fib/dir24_8.c               |  39 ++++++++
 lib/librte_fib/dir24_8_avx512.c        | 165 +++++++++++++++++++++++++++++++++
 lib/librte_fib/dir24_8_avx512.h        |  24 +++++
 lib/librte_fib/meson.build             |  34 +++++++
 lib/librte_fib/rte_fib.c               |   2 +-
 lib/librte_fib/rte_fib.h               |   6 +-
 7 files changed, 271 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_fib/dir24_8_avx512.c
 create mode 100644 lib/librte_fib/dir24_8_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index dca8d41..c430e8e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -345,6 +345,9 @@ New Features
   * Replaced ``--scalar`` command-line option with ``--alg=<value>``, to allow
     the user to select the desired classify method.
 
+* **Added AVX512 lookup implementation for FIB.**
+
+  Added an AVX512 lookup function implementation to the FIB library.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/dir24_8.c b/lib/librte_fib/dir24_8.c
index 87400fc..c97ae02 100644
--- a/lib/librte_fib/dir24_8.c
+++ b/lib/librte_fib/dir24_8.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib.h>
 #include <rte_fib.h>
 #include "dir24_8.h"
 
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+
+#include "dir24_8_avx512.h"
+
+#endif /* CC_DIR24_8_AVX512_SUPPORT */
+
 #define DIR24_8_NAMESIZE	64
 
 #define ROUNDUP(x, y)	 RTE_ALIGN_CEIL(x, (1 << (32 - y)))
@@ -56,11 +63,38 @@ get_scalar_fn_inlined(enum rte_fib_dir24_8_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib_lookup_fn_t
+get_vector_fn(enum rte_fib_dir24_8_nh_sz nh_sz)
+{
+#ifdef CC_DIR24_8_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+
+	switch (nh_sz) {
+	case RTE_FIB_DIR24_8_1B:
+		return rte_dir24_8_vec_lookup_bulk_1b;
+	case RTE_FIB_DIR24_8_2B:
+		return rte_dir24_8_vec_lookup_bulk_2b;
+	case RTE_FIB_DIR24_8_4B:
+		return rte_dir24_8_vec_lookup_bulk_4b;
+	case RTE_FIB_DIR24_8_8B:
+		return rte_dir24_8_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib_lookup_fn_t
 dir24_8_get_lookup_fn(void *p, enum rte_fib_lookup_type type)
 {
 	enum rte_fib_dir24_8_nh_sz nh_sz;
 	struct dir24_8_tbl *dp = p;
+	rte_fib_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -74,6 +108,11 @@ dir24_8_get_lookup_fn(void *p, enum rte_fib_lookup_type type)
 		return get_scalar_fn_inlined(nh_sz);
 	case RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI:
 		return dir24_8_lookup_bulk_uni;
+	case RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB_LOOKUP_DEFAULT:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/dir24_8_avx512.c b/lib/librte_fib/dir24_8_avx512.c
new file mode 100644
index 0000000..43dba28
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib.h>
+
+#include "dir24_8.h"
+#include "dir24_8_avx512.h"
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x16(void *p, const uint32_t *ips,
+	uint64_t *next_hops, int size)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	__mmask16 msk_ext;
+	__mmask16 exp_msk = 0x5555;
+	__m512i ip_vec, idxes, res, bytes;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i lsbyte_msk = _mm512_set1_epi32(0xff);
+	__m512i tmp1, tmp2, res_msk;
+	__m256i tmp256;
+	/* used to mask gather values if size is 1/2 (8/16 bit next hops) */
+	if (size == sizeof(uint8_t))
+		res_msk = _mm512_set1_epi32(UINT8_MAX);
+	else if (size == sizeof(uint16_t))
+		res_msk = _mm512_set1_epi32(UINT16_MAX);
+
+	ip_vec = _mm512_loadu_si512(ips);
+	/* use the 24 most significant bits as the tbl24 index */
+	idxes = _mm512_srli_epi32(ip_vec, 8);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint8_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 1);
+		res = _mm512_and_epi32(res, res_msk);
+	} else if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		idxes = _mm512_srli_epi32(res, 1);
+		idxes = _mm512_slli_epi32(idxes, 8);
+		bytes = _mm512_and_epi32(ip_vec, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint8_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 1);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else if (size == sizeof(uint16_t)) {
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			idxes = _mm512_and_epi32(idxes, res_msk);
+		} else
+			idxes = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+
+		res = _mm512_mask_blend_epi32(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp1 = _mm512_maskz_expand_epi32(exp_msk, res);
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp1);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static __rte_always_inline void
+dir24_8_vec_lookup_x8_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops)
+{
+	struct dir24_8_tbl *dp = (struct dir24_8_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsbyte_msk = _mm512_set1_epi64(0xff);
+	const __m512i lsb = _mm512_set1_epi64(1);
+	__m512i res, idxes, bytes;
+	__m256i idxes_256, ip_vec;
+	__mmask8 msk_ext;
+
+	ip_vec = _mm256_loadu_si256((const void *)ips);
+	/* use the 24 most significant bits as the tbl24 index */
+	idxes_256 = _mm256_srli_epi32(ip_vec, 8);
+
+	/* lookup in tbl24 */
+	res = _mm512_i32gather_epi64(idxes_256, (const void *)dp->tbl24, 8);
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	if (msk_ext != 0) {
+		bytes = _mm512_cvtepi32_epi64(ip_vec);
+		idxes = _mm512_srli_epi64(res, 1);
+		idxes = _mm512_slli_epi64(idxes, 8);
+		bytes = _mm512_and_epi64(bytes, lsbyte_msk);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		idxes = _mm512_mask_i64gather_epi64(zero, msk_ext, idxes,
+			(const void *)dp->tbl8, 8);
+
+		res = _mm512_mask_blend_epi64(msk_ext, res, idxes);
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint8_t));
+
+	dir24_8_lookup_bulk_1b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint16_t));
+
+	dir24_8_lookup_bulk_2b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++)
+		dir24_8_vec_lookup_x16(p, ips + i * 16, next_hops + i * 16,
+			sizeof(uint32_t));
+
+	dir24_8_lookup_bulk_4b(p, ips + i * 16, next_hops + i * 16,
+		n - i * 16);
+}
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++)
+		dir24_8_vec_lookup_x8_8b(p, ips + i * 8, next_hops + i * 8);
+
+	dir24_8_lookup_bulk_8b(p, ips + i * 8, next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/dir24_8_avx512.h b/lib/librte_fib/dir24_8_avx512.h
new file mode 100644
index 0000000..1d3c2b9
--- /dev/null
+++ b/lib/librte_fib/dir24_8_avx512.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _DIR248_AVX512_H_
+#define _DIR248_AVX512_H_
+
+void
+rte_dir24_8_vec_lookup_bulk_1b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_2b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_4b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_dir24_8_vec_lookup_bulk_8b(void *p, const uint32_t *ips,
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _DIR248_AVX512_H_ */
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 771828f..0a8adef 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -5,3 +5,37 @@
 sources = files('rte_fib.c', 'rte_fib6.c', 'dir24_8.c', 'trie.c')
 headers = files('rte_fib.h', 'rte_fib6.h')
 deps += ['rib']
+
+# compile AVX512 version if:
+# we are building 64-bit binary AND binutils can generate proper code
+if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
+	# compile AVX512 version if either:
+	# a. we have AVX512F supported in minimum instruction set baseline
+	# b. it's not minimum instruction set, but supported by compiler
+	#
+	# in former case, just add avx512 C file to files list
+	# in latter case, compile c file to static lib, using correct
+	# compiler flags, and then have the .o file from static lib
+	# linked into main lib.
+
+	# check if all required flags already enabled (variant a).
+	acl_avx512_flags = ['__AVX512F__','__AVX512DQ__']
+	acl_avx512_on = true
+	foreach f:acl_avx512_flags
+		if cc.get_define(f, args: machine_args) == ''
+			acl_avx512_on = false
+		endif
+	endforeach
+
+	if acl_avx512_on == true
+		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
+		sources += files('dir24_8_avx512.c')
+	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
+		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
+				'dir24_8_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', '-mavx512dq'])
+		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
+		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+	endif
+endif
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index 2b5fdf5..398dbf9 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -108,7 +108,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 		if (fib->dp == NULL)
 			return -rte_errno;
 		fib->lookup = dir24_8_get_lookup_fn(fib->dp,
-			RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO);
+			RTE_FIB_LOOKUP_DEFAULT);
 		fib->modify = dir24_8_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index d46fedc..8688c93 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -60,6 +60,8 @@ enum rte_fib_dir24_8_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib_lookup_type {
+	RTE_FIB_LOOKUP_DEFAULT,
+	/**< Selects the best implementation based on the max simd bitwidth */
 	RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO,
 	/**< Macro based lookup function */
 	RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE,
@@ -67,10 +69,12 @@ enum rte_fib_lookup_type {
 	 * Lookup implementation using inlined functions
 	 * for different next hop sizes
 	 */
-	RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI
+	RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI,
 	/**<
 	 * Unified lookup function for all next hop sizes
 	 */
+	RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512
+	/**< Vector implementation using AVX512 */
 };
 
 /** FIB configuration structure */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
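As a usage note, a minimal sketch of how an application can request this AVX512
lookup explicitly via rte_fib_select_lookup() (added earlier in this series) and
fall back when it is unavailable; the wrapper function and its error handling are
illustrative, not part of the patch.

#include <rte_fib.h>

/* 'fib' is assumed to be an existing RTE_FIB_DIR24_8 object */
static void
select_fib_lookup(struct rte_fib *fib)
{
	if (rte_fib_select_lookup(fib,
			RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512) != 0) {
		/* AVX512F not present, max SIMD bitwidth below 512, or
		 * support not compiled in: let the library pick the best
		 * available implementation instead.
		 */
		rte_fib_select_lookup(fib, RTE_FIB_LOOKUP_DEFAULT);
	}
}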

* [dpdk-dev] [PATCH v15 4/8] fib6: make lookup function type configurable
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (3 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
                                             ` (3 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a type argument to trie_get_lookup_fn().
For now it only supports RTE_FIB6_LOOKUP_TRIE_SCALAR.

Add a new function, rte_fib6_select_lookup(), so that the user can
change the lookup function type at runtime.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/rte_fib6.c  | 20 +++++++++++++++++++-
 lib/librte_fib/rte_fib6.h  | 23 +++++++++++++++++++++++
 lib/librte_fib/trie.c      | 47 +++++++++++++++++++++++++++-------------------
 lib/librte_fib/trie.h      |  2 +-
 lib/librte_fib/version.map |  1 +
 5 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index a1f0db8..45792bd 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = rte_trie_get_lookup_fn(conf);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_LOOKUP_TRIE_SCALAR);
 		fib->modify = trie_modify;
 		return 0;
 	default:
@@ -319,3 +319,21 @@ rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
+
+int
+rte_fib6_select_lookup(struct rte_fib6 *fib,
+	enum rte_fib6_lookup_type type)
+{
+	rte_fib6_lookup_fn_t fn;
+
+	switch (fib->type) {
+	case RTE_FIB6_TRIE:
+		fn = trie_get_lookup_fn(fib->dp, type);
+		if (fn == NULL)
+			return -EINVAL;
+		fib->lookup = fn;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index bbfcf23..8086f03 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -53,12 +53,18 @@ enum rte_fib6_op {
 	RTE_FIB6_DEL,
 };
 
+/** Size of the next hop is (1 << nh_sz) bytes for TRIE based FIB */
 enum rte_fib_trie_nh_sz {
 	RTE_FIB6_TRIE_2B = 1,
 	RTE_FIB6_TRIE_4B,
 	RTE_FIB6_TRIE_8B
 };
 
+/** Type of lookup function implementation */
+enum rte_fib6_lookup_type {
+	RTE_FIB6_LOOKUP_TRIE_SCALAR /**< Scalar lookup function implementation*/
+};
+
 /** FIB configuration structure */
 struct rte_fib6_conf {
 	enum rte_fib6_type type; /**< Type of FIB struct */
@@ -201,6 +207,23 @@ __rte_experimental
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib);
 
+/**
+ * Set lookup function based on type
+ *
+ * @param fib
+ *   FIB object handle
+ * @param type
+ *   type of lookup function
+ *
+ * @return
+ *    -EINVAL on failure
+ *    0 on success
+ */
+__rte_experimental
+int
+rte_fib6_select_lookup(struct rte_fib6 *fib,
+	enum rte_fib6_lookup_type type);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 2ae2add..11a7ca2 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -59,13 +59,6 @@ enum edge {
 	REDGE
 };
 
-enum lookup_type {
-	MACRO,
-	INLINE,
-	UNI
-};
-static enum lookup_type test_lookup = MACRO;
-
 static inline uint32_t
 get_tbl24_idx(const uint8_t *ip)
 {
@@ -153,22 +146,38 @@ LOOKUP_FUNC(2b, uint16_t, 1)
 LOOKUP_FUNC(4b, uint32_t, 2)
 LOOKUP_FUNC(8b, uint64_t, 3)
 
+static inline rte_fib6_lookup_fn_t
+get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+}
+
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *conf)
+trie_get_lookup_fn(void *p, enum rte_fib6_lookup_type type)
 {
-	enum rte_fib_trie_nh_sz nh_sz = conf->trie.nh_sz;
+	enum rte_fib_trie_nh_sz nh_sz;
+	struct rte_trie_tbl *dp = p;
 
-	if (test_lookup == MACRO) {
-		switch (nh_sz) {
-		case RTE_FIB6_TRIE_2B:
-			return rte_trie_lookup_bulk_2b;
-		case RTE_FIB6_TRIE_4B:
-			return rte_trie_lookup_bulk_4b;
-		case RTE_FIB6_TRIE_8B:
-			return rte_trie_lookup_bulk_8b;
-		}
+	if (dp == NULL)
+		return NULL;
+
+	nh_sz = dp->nh_sz;
+
+	switch (type) {
+	case RTE_FIB6_LOOKUP_TRIE_SCALAR:
+		return get_scalar_fn(nh_sz);
+	default:
+		return NULL;
 	}
-
 	return NULL;
 }
 
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index bb750c5..e328bef 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -22,7 +22,7 @@ void
 trie_free(void *p);
 
 rte_fib6_lookup_fn_t
-rte_trie_get_lookup_fn(struct rte_fib6_conf *fib_conf);
+trie_get_lookup_fn(void *p, enum rte_fib6_lookup_type type);
 
 int
 trie_modify(struct rte_fib6 *fib, const uint8_t ip[RTE_FIB6_IPV6_ADDR_SIZE],
diff --git a/lib/librte_fib/version.map b/lib/librte_fib/version.map
index 5fd792a..be975ea 100644
--- a/lib/librte_fib/version.map
+++ b/lib/librte_fib/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_select_lookup;
 
 	local: *;
 };
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
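A short sketch of the new selection API from an application's point of view; the
helper name and error handling below are illustrative only.

#include <rte_fib6.h>

/* 'fib6' is assumed to be an existing RTE_FIB6_TRIE object */
static int
use_trie_scalar_lookup(struct rte_fib6 *fib6)
{
	int ret;

	ret = rte_fib6_select_lookup(fib6, RTE_FIB6_LOOKUP_TRIE_SCALAR);
	if (ret != 0) {
		/* -EINVAL: unknown lookup type or non-TRIE dataplane */
		return ret;
	}
	return 0;
}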

* [dpdk-dev] [PATCH v15 5/8] fib6: move lookup definition into the header file
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (4 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
                                             ` (2 subsequent siblings)
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Move trie table layout and lookup definition into the
private header file. This is necessary for implementing a
vectorized lookup function in a separate .c file.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_fib/trie.c | 121 --------------------------------------------------
 lib/librte_fib/trie.h | 117 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+), 121 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 11a7ca2..08a03ab 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -11,141 +11,20 @@
 
 #include <rte_debug.h>
 #include <rte_malloc.h>
-#include <rte_prefetch.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
-#include <rte_branch_prediction.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
-/* @internal Total number of tbl24 entries. */
-#define TRIE_TBL24_NUM_ENT	(1 << 24)
-
-/* Maximum depth value possible for IPv6 LPM. */
-#define TRIE_MAX_DEPTH		128
-
-/* @internal Number of entries in a tbl8 group. */
-#define TRIE_TBL8_GRP_NUM_ENT	256ULL
-
-/* @internal Total number of tbl8 groups in the tbl8. */
-#define TRIE_TBL8_NUM_GROUPS	65536
-
-/* @internal bitmask with valid and valid_group fields set */
-#define TRIE_EXT_ENT		1
-
 #define TRIE_NAMESIZE		64
 
-#define BITMAP_SLAB_BIT_SIZE_LOG2	6
-#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
-#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
-
-struct rte_trie_tbl {
-	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
-	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
-	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
-	uint64_t	def_nh;		/**< Default next hop */
-	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
-	uint64_t	*tbl8;		/**< tbl8 table. */
-	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
-	uint32_t	tbl8_pool_pos;
-	/* tbl24 table. */
-	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
-};
-
 enum edge {
 	LEDGE,
 	REDGE
 };
 
-static inline uint32_t
-get_tbl24_idx(const uint8_t *ip)
-{
-	return ip[0] << 16|ip[1] << 8|ip[2];
-}
-
-static inline void *
-get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
-{
-	uint32_t tbl24_idx;
-
-	tbl24_idx = get_tbl24_idx(ip);
-	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
-}
-
-static inline uint8_t
-bits_in_nh(uint8_t nh_sz)
-{
-	return 8 * (1 << nh_sz);
-}
-
-static inline uint64_t
-get_max_nh(uint8_t nh_sz)
-{
-	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
-}
-
-static inline uint64_t
-lookup_msk(uint8_t nh_sz)
-{
-	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
-}
-
-static inline uint8_t
-get_psd_idx(uint32_t val, uint8_t nh_sz)
-{
-	return val & ((1 << (3 - nh_sz)) - 1);
-}
-
-static inline uint32_t
-get_tbl_pos(uint32_t val, uint8_t nh_sz)
-{
-	return val >> (3 - nh_sz);
-}
-
-static inline uint64_t
-get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
-{
-	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
-		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
-}
-
-static inline void *
-get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
-{
-	return (uint8_t *)tbl + (idx << nh_sz);
-}
-
-static inline int
-is_entry_extended(uint64_t ent)
-{
-	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
-}
-
-#define LOOKUP_FUNC(suffix, type, nh_sz)				\
-static void rte_trie_lookup_bulk_##suffix(void *p,			\
-	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],			\
-	uint64_t *next_hops, const unsigned int n)			\
-{									\
-	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
-	uint64_t tmp;							\
-	uint32_t i, j;							\
-									\
-	for (i = 0; i < n; i++) {					\
-		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
-		j = 3;							\
-		while (is_entry_extended(tmp)) {			\
-			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
-				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
-		}							\
-		next_hops[i] = tmp >> 1;				\
-	}								\
-}
-LOOKUP_FUNC(2b, uint16_t, 1)
-LOOKUP_FUNC(4b, uint32_t, 2)
-LOOKUP_FUNC(8b, uint64_t, 3)
-
 static inline rte_fib6_lookup_fn_t
 get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 {
diff --git a/lib/librte_fib/trie.h b/lib/librte_fib/trie.h
index e328bef..a4f429c 100644
--- a/lib/librte_fib/trie.h
+++ b/lib/librte_fib/trie.h
@@ -10,11 +10,128 @@
  * @file
  * RTE IPv6 Longest Prefix Match (LPM)
  */
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+/* @internal Total number of tbl24 entries. */
+#define TRIE_TBL24_NUM_ENT	(1 << 24)
+/* Maximum depth value possible for IPv6 LPM. */
+#define TRIE_MAX_DEPTH		128
+/* @internal Number of entries in a tbl8 group. */
+#define TRIE_TBL8_GRP_NUM_ENT	256ULL
+/* @internal Total number of tbl8 groups in the tbl8. */
+#define TRIE_TBL8_NUM_GROUPS	65536
+/* @internal bitmask with valid and valid_group fields set */
+#define TRIE_EXT_ENT		1
+
+#define BITMAP_SLAB_BIT_SIZE_LOG2	6
+#define BITMAP_SLAB_BIT_SIZE		(1ULL << BITMAP_SLAB_BIT_SIZE_LOG2)
+#define BITMAP_SLAB_BITMASK		(BITMAP_SLAB_BIT_SIZE - 1)
+
+struct rte_trie_tbl {
+	uint32_t	number_tbl8s;	/**< Total number of tbl8s */
+	uint32_t	rsvd_tbl8s;	/**< Number of reserved tbl8s */
+	uint32_t	cur_tbl8s;	/**< Current number of tbl8s */
+	uint64_t	def_nh;		/**< Default next hop */
+	enum rte_fib_trie_nh_sz	nh_sz;	/**< Size of nexthop entry */
+	uint64_t	*tbl8;		/**< tbl8 table. */
+	uint32_t	*tbl8_pool;	/**< bitmap containing free tbl8 idxes*/
+	uint32_t	tbl8_pool_pos;
+	/* tbl24 table. */
+	__extension__ uint64_t	tbl24[0] __rte_cache_aligned;
+};
+
+static inline uint32_t
+get_tbl24_idx(const uint8_t *ip)
+{
+	return ip[0] << 16|ip[1] << 8|ip[2];
+}
+
+static inline void *
+get_tbl24_p(struct rte_trie_tbl *dp, const uint8_t *ip, uint8_t nh_sz)
+{
+	uint32_t tbl24_idx;
+
+	tbl24_idx = get_tbl24_idx(ip);
+	return (void *)&((uint8_t *)dp->tbl24)[tbl24_idx << nh_sz];
+}
+
+static inline uint8_t
+bits_in_nh(uint8_t nh_sz)
+{
+	return 8 * (1 << nh_sz);
+}
+
+static inline uint64_t
+get_max_nh(uint8_t nh_sz)
+{
+	return ((1ULL << (bits_in_nh(nh_sz) - 1)) - 1);
+}
+
+static inline uint64_t
+lookup_msk(uint8_t nh_sz)
+{
+	return ((1ULL << ((1 << (nh_sz + 3)) - 1)) << 1) - 1;
+}
+
+static inline uint8_t
+get_psd_idx(uint32_t val, uint8_t nh_sz)
+{
+	return val & ((1 << (3 - nh_sz)) - 1);
+}
+
+static inline uint32_t
+get_tbl_pos(uint32_t val, uint8_t nh_sz)
+{
+	return val >> (3 - nh_sz);
+}
+
+static inline uint64_t
+get_tbl_val_by_idx(uint64_t *tbl, uint32_t idx, uint8_t nh_sz)
+{
+	return ((tbl[get_tbl_pos(idx, nh_sz)] >> (get_psd_idx(idx, nh_sz) *
+		bits_in_nh(nh_sz))) & lookup_msk(nh_sz));
+}
+
+static inline void *
+get_tbl_p_by_idx(uint64_t *tbl, uint64_t idx, uint8_t nh_sz)
+{
+	return (uint8_t *)tbl + (idx << nh_sz);
+}
+
+static inline int
+is_entry_extended(uint64_t ent)
+{
+	return (ent & TRIE_EXT_ENT) == TRIE_EXT_ENT;
+}
+
+#define LOOKUP_FUNC(suffix, type, nh_sz)				\
+static inline void rte_trie_lookup_bulk_##suffix(void *p,		\
+	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],				\
+	uint64_t *next_hops, const unsigned int n)			\
+{									\
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;		\
+	uint64_t tmp;							\
+	uint32_t i, j;							\
+									\
+	for (i = 0; i < n; i++) {					\
+		tmp = ((type *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];	\
+		j = 3;							\
+		while (is_entry_extended(tmp)) {			\
+			tmp = ((type *)dp->tbl8)[ips[i][j++] +		\
+				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];	\
+		}							\
+		next_hops[i] = tmp >> 1;				\
+	}								\
+}
+LOOKUP_FUNC(2b, uint16_t, 1)
+LOOKUP_FUNC(4b, uint32_t, 2)
+LOOKUP_FUNC(8b, uint64_t, 3)
+
 void *
 trie_create(const char *name, int socket_id, struct rte_fib6_conf *conf);
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
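For readability, this is roughly what the 2-byte next-hop variant expands to; it
is hand-expanded from the LOOKUP_FUNC macro above (relying on the helpers defined
in trie.h), so treat it as illustrative rather than literal preprocessor output.

static inline void rte_trie_lookup_bulk_2b(void *p,
	uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
	uint64_t *next_hops, const unsigned int n)
{
	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
	uint64_t tmp;
	uint32_t i, j;

	for (i = 0; i < n; i++) {
		/* the top 3 address bytes index tbl24 directly */
		tmp = ((uint16_t *)dp->tbl24)[get_tbl24_idx(&ips[i][0])];
		j = 3;
		/* follow extended entries one address byte at a time */
		while (is_entry_extended(tmp)) {
			tmp = ((uint16_t *)dp->tbl8)[ips[i][j++] +
				((tmp >> 1) * TRIE_TBL8_GRP_NUM_ENT)];
		}
		next_hops[i] = tmp >> 1;	/* drop the "extended" flag bit */
	}
}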

* [dpdk-dev] [PATCH v15 6/8] fib6: introduce AVX512 lookup
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (5 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Add a new lookup implementation for the FIB6 trie algorithm using
the AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   2 +-
 lib/librte_fib/meson.build             |  17 +++
 lib/librte_fib/rte_fib6.c              |   2 +-
 lib/librte_fib/rte_fib6.h              |   5 +-
 lib/librte_fib/trie.c                  |  36 +++++
 lib/librte_fib/trie_avx512.c           | 269 +++++++++++++++++++++++++++++++++
 lib/librte_fib/trie_avx512.h           |  20 +++
 7 files changed, 348 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_fib/trie_avx512.c
 create mode 100644 lib/librte_fib/trie_avx512.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c430e8e..2bb3408 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -347,7 +347,7 @@ New Features
 
 * **Added AVX512 lookup implementation for FIB.**
 
-  Added an AVX512 lookup function implementation to the FIB library.
+  Added an AVX512 lookup function implementation to the FIB and FIB6 libraries.
 
 Removed Items
 -------------
diff --git a/lib/librte_fib/meson.build b/lib/librte_fib/meson.build
index 0a8adef..5d93de9 100644
--- a/lib/librte_fib/meson.build
+++ b/lib/librte_fib/meson.build
@@ -30,6 +30,12 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 	if acl_avx512_on == true
 		cflags += ['-DCC_DIR24_8_AVX512_SUPPORT']
 		sources += files('dir24_8_avx512.c')
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.get_define('__AVX512BW__', args: machine_args) != ''
+			cflags += ['-DCC_TRIE_AVX512_SUPPORT']
+			sources += files('trie_avx512.c')
+		endif
 	elif cc.has_multi_arguments('-mavx512f', '-mavx512dq')
 		dir24_8_avx512_tmp = static_library('dir24_8_avx512_tmp',
 				'dir24_8_avx512.c',
@@ -37,5 +43,16 @@ if dpdk_conf.has('RTE_ARCH_X86_64') and binutils_ok.returncode() == 0
 				c_args: cflags + ['-mavx512f', '-mavx512dq'])
 		objs += dir24_8_avx512_tmp.extract_objects('dir24_8_avx512.c')
 		cflags += '-DCC_DIR24_8_AVX512_SUPPORT'
+		# TRIE AVX512 implementation uses avx512bw intrinsics along with
+		# avx512f and avx512dq
+		if cc.has_argument('-mavx512bw')
+			trie_avx512_tmp = static_library('trie_avx512_tmp',
+				'trie_avx512.c',
+				dependencies: static_rte_eal,
+				c_args: cflags + ['-mavx512f', \
+					'-mavx512dq', '-mavx512bw'])
+			objs += trie_avx512_tmp.extract_objects('trie_avx512.c')
+			cflags += '-DCC_TRIE_AVX512_SUPPORT'
+		endif
 	endif
 endif
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 45792bd..1f5af0f 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -107,7 +107,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 		fib->dp = trie_create(dp_name, socket_id, conf);
 		if (fib->dp == NULL)
 			return -rte_errno;
-		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_LOOKUP_TRIE_SCALAR);
+		fib->lookup = trie_get_lookup_fn(fib->dp, RTE_FIB6_LOOKUP_DEFAULT);
 		fib->modify = trie_modify;
 		return 0;
 	default:
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 8086f03..887de7b 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -62,7 +62,10 @@ enum rte_fib_trie_nh_sz {
 
 /** Type of lookup function implementation */
 enum rte_fib6_lookup_type {
-	RTE_FIB6_LOOKUP_TRIE_SCALAR /**< Scalar lookup function implementation*/
+	RTE_FIB6_LOOKUP_DEFAULT,
+	/**< Selects the best implementation based on the max simd bitwidth */
+	RTE_FIB6_LOOKUP_TRIE_SCALAR, /**< Scalar lookup function implementation*/
+	RTE_FIB6_LOOKUP_TRIE_VECTOR_AVX512 /**< Vector implementation using AVX512 */
 };
 
 /** FIB configuration structure */
diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 08a03ab..5242c08 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -13,11 +13,18 @@
 #include <rte_malloc.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
+#include <rte_vect.h>
 
 #include <rte_rib6.h>
 #include <rte_fib6.h>
 #include "trie.h"
 
+#ifdef CC_TRIE_AVX512_SUPPORT
+
+#include "trie_avx512.h"
+
+#endif /* CC_TRIE_AVX512_SUPPORT */
+
 #define TRIE_NAMESIZE		64
 
 enum edge {
@@ -40,11 +47,35 @@ get_scalar_fn(enum rte_fib_trie_nh_sz nh_sz)
 	}
 }
 
+static inline rte_fib6_lookup_fn_t
+get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
+{
+#ifdef CC_TRIE_AVX512_SUPPORT
+	if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
+			(rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+		return NULL;
+	switch (nh_sz) {
+	case RTE_FIB6_TRIE_2B:
+		return rte_trie_vec_lookup_bulk_2b;
+	case RTE_FIB6_TRIE_4B:
+		return rte_trie_vec_lookup_bulk_4b;
+	case RTE_FIB6_TRIE_8B:
+		return rte_trie_vec_lookup_bulk_8b;
+	default:
+		return NULL;
+	}
+#else
+	RTE_SET_USED(nh_sz);
+#endif
+	return NULL;
+}
+
 rte_fib6_lookup_fn_t
 trie_get_lookup_fn(void *p, enum rte_fib6_lookup_type type)
 {
 	enum rte_fib_trie_nh_sz nh_sz;
 	struct rte_trie_tbl *dp = p;
+	rte_fib6_lookup_fn_t ret_fn = NULL;
 
 	if (dp == NULL)
 		return NULL;
@@ -54,6 +85,11 @@ trie_get_lookup_fn(void *p, enum rte_fib6_lookup_type type)
 	switch (type) {
 	case RTE_FIB6_LOOKUP_TRIE_SCALAR:
 		return get_scalar_fn(nh_sz);
+	case RTE_FIB6_LOOKUP_TRIE_VECTOR_AVX512:
+		return get_vector_fn(nh_sz);
+	case RTE_FIB6_LOOKUP_DEFAULT:
+		ret_fn = get_vector_fn(nh_sz);
+		return (ret_fn) ? ret_fn : get_scalar_fn(nh_sz);
 	default:
 		return NULL;
 	}
diff --git a/lib/librte_fib/trie_avx512.c b/lib/librte_fib/trie_avx512.c
new file mode 100644
index 0000000..b1c9e4e
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.c
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_vect.h>
+#include <rte_fib6.h>
+
+#include "trie.h"
+#include "trie_avx512.h"
+
+static __rte_always_inline void
+transpose_x16(uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second, __m512i *third, __m512i *fourth)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	__m512i tmp5, tmp6, tmp7, tmp8;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u32 = { 0, 4, 8, 12, 2, 6, 10, 14,
+			1, 5, 9, 13, 3, 7, 11, 15
+		},
+	};
+
+	/* load all ip addresses */
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+	tmp3 = _mm512_loadu_si512(&ips[8][0]);
+	tmp4 = _mm512_loadu_si512(&ips[12][0]);
+
+	/* transpose 4 byte chunks of 16 ips */
+	tmp5 = _mm512_unpacklo_epi32(tmp1, tmp2);
+	tmp7 = _mm512_unpackhi_epi32(tmp1, tmp2);
+	tmp6 = _mm512_unpacklo_epi32(tmp3, tmp4);
+	tmp8 = _mm512_unpackhi_epi32(tmp3, tmp4);
+
+	tmp1 = _mm512_unpacklo_epi32(tmp5, tmp6);
+	tmp3 = _mm512_unpackhi_epi32(tmp5, tmp6);
+	tmp2 = _mm512_unpacklo_epi32(tmp7, tmp8);
+	tmp4 = _mm512_unpackhi_epi32(tmp7, tmp8);
+
+	/* first 4-byte chunks of ips[] */
+	*first = _mm512_permutexvar_epi32(perm_idxes.z, tmp1);
+	/* second 4-byte chunks of ips[] */
+	*second = _mm512_permutexvar_epi32(perm_idxes.z, tmp3);
+	/* third 4-byte chunks of ips[] */
+	*third = _mm512_permutexvar_epi32(perm_idxes.z, tmp2);
+	/* fourth 4-byte chunks of ips[] */
+	*fourth = _mm512_permutexvar_epi32(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+transpose_x8(uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	__m512i *first, __m512i *second)
+{
+	__m512i tmp1, tmp2, tmp3, tmp4;
+	const __rte_x86_zmm_t perm_idxes = {
+		.u64 = { 0, 2, 4, 6, 1, 3, 5, 7
+		},
+	};
+
+	tmp1 = _mm512_loadu_si512(&ips[0][0]);
+	tmp2 = _mm512_loadu_si512(&ips[4][0]);
+
+	tmp3 = _mm512_unpacklo_epi64(tmp1, tmp2);
+	*first = _mm512_permutexvar_epi64(perm_idxes.z, tmp3);
+	tmp4 = _mm512_unpackhi_epi64(tmp1, tmp2);
+	*second = _mm512_permutexvar_epi64(perm_idxes.z, tmp4);
+}
+
+static __rte_always_inline void
+trie_vec_lookup_x16(void *p, uint8_t ips[16][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, int size)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i two_lsb = _mm512_set1_epi32(3);
+	__m512i first, second, third, fourth; /*< IPv6 four byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, tmp2, bytes, byte_chunk, base_idxes;
+	/* used to mask gather values if size is 2 (16 bit next hops) */
+	const __m512i res_msk = _mm512_set1_epi32(UINT16_MAX);
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255,
+			2, 1, 0, 255, 6, 5, 4, 255,
+			10, 9, 8, 255, 14, 13, 12, 255
+			},
+	};
+	const __mmask64 k = 0x1111111111111111;
+	int i = 3;
+	__mmask16 msk_ext, new_msk;
+	__mmask16 exp_msk = 0x5555;
+
+	transpose_x16(ips, &first, &second, &third, &fourth);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/**
+	 * lookup in tbl24
+	 * Put it inside branch to make compiler happy with -O0
+	 */
+	if (size == sizeof(uint16_t)) {
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 2);
+		res = _mm512_and_epi32(res, res_msk);
+	} else
+		res = _mm512_i32gather_epi32(idxes, (const int *)dp->tbl24, 4);
+
+
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi32_mask(res, lsb);
+
+	tmp = _mm512_srli_epi32(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi32(3, 7, 11, 15,
+				19, 23, 27, 31,
+				35, 39, 43, 47,
+				51, 55, 59, 63);
+
+	base_idxes = _mm512_setr_epi32(0, 4, 8, 12,
+				16, 20, 24, 28,
+				32, 36, 40, 44,
+				48, 52, 56, 60);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi32(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ?
+			((i >= 4) ? second : first) :
+			((i >= 12) ? fourth : third);
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi32(msk_ext, idxes, bytes);
+		if (size == sizeof(uint16_t)) {
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 2);
+			tmp = _mm512_and_epi32(tmp, res_msk);
+		} else
+			tmp = _mm512_mask_i32gather_epi32(zero, msk_ext,
+				idxes, (const int *)dp->tbl8, 4);
+		new_msk = _mm512_test_epi32_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi32(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi32(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi32(shuf_idxes, two_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi32(res, 1);
+	tmp = _mm512_maskz_expand_epi32(exp_msk, res);
+	__m256i tmp256;
+	tmp256 = _mm512_extracti32x8_epi32(res, 1);
+	tmp2 = _mm512_maskz_expand_epi32(exp_msk,
+		_mm512_castsi256_si512(tmp256));
+	_mm512_storeu_si512(next_hops, tmp);
+	_mm512_storeu_si512(next_hops + 8, tmp2);
+}
+
+static void
+trie_vec_lookup_x8_8b(void *p, uint8_t ips[8][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops)
+{
+	struct rte_trie_tbl *dp = (struct rte_trie_tbl *)p;
+	const __m512i zero = _mm512_set1_epi32(0);
+	const __m512i lsb = _mm512_set1_epi32(1);
+	const __m512i three_lsb = _mm512_set1_epi32(7);
+	__m512i first, second; /*< IPv6 eight byte chunks */
+	__m512i idxes, res, shuf_idxes;
+	__m512i tmp, bytes, byte_chunk, base_idxes;
+	const __rte_x86_zmm_t bswap = {
+		.u8 = { 2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255,
+			2, 1, 0, 255, 255, 255, 255, 255,
+			10, 9, 8, 255, 255, 255, 255, 255
+			},
+	};
+	const __mmask64 k = 0x101010101010101;
+	int i = 3;
+	__mmask8 msk_ext, new_msk;
+
+	transpose_x8(ips, &first, &second);
+
+	/* get_tbl24_idx() for every 4 byte chunk */
+	idxes = _mm512_shuffle_epi8(first, bswap.z);
+
+	/* lookup in tbl24 */
+	res = _mm512_i64gather_epi64(idxes, (const void *)dp->tbl24, 8);
+	/* get extended entries indexes */
+	msk_ext = _mm512_test_epi64_mask(res, lsb);
+
+	tmp = _mm512_srli_epi64(res, 1);
+
+	/* idxes to retrieve bytes */
+	shuf_idxes = _mm512_setr_epi64(3, 11, 19, 27, 35, 43, 51, 59);
+
+	base_idxes = _mm512_setr_epi64(0, 8, 16, 24, 32, 40, 48, 56);
+
+	/* traverse down the trie */
+	while (msk_ext) {
+		idxes = _mm512_maskz_slli_epi64(msk_ext, tmp, 8);
+		byte_chunk = (i < 8) ? first : second;
+		bytes = _mm512_maskz_shuffle_epi8(k, byte_chunk, shuf_idxes);
+		idxes = _mm512_maskz_add_epi64(msk_ext, idxes, bytes);
+		tmp = _mm512_mask_i64gather_epi64(zero, msk_ext,
+				idxes, (const void *)dp->tbl8, 8);
+		new_msk = _mm512_test_epi64_mask(tmp, lsb);
+		res = _mm512_mask_blend_epi64(msk_ext ^ new_msk, res, tmp);
+		tmp = _mm512_srli_epi64(tmp, 1);
+		msk_ext = new_msk;
+
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, lsb);
+		shuf_idxes = _mm512_and_epi64(shuf_idxes, three_lsb);
+		shuf_idxes = _mm512_maskz_add_epi8(k, shuf_idxes, base_idxes);
+		i++;
+	}
+
+	res = _mm512_srli_epi64(res, 1);
+	_mm512_storeu_si512(next_hops, res);
+}
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint16_t));
+	}
+	rte_trie_lookup_bulk_2b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 16); i++) {
+		trie_vec_lookup_x16(p, (uint8_t (*)[16])&ips[i * 16][0],
+				next_hops + i * 16, sizeof(uint32_t));
+	}
+	rte_trie_lookup_bulk_4b(p, (uint8_t (*)[16])&ips[i * 16][0],
+			next_hops + i * 16, n - i * 16);
+}
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n)
+{
+	uint32_t i;
+	for (i = 0; i < (n / 8); i++) {
+		trie_vec_lookup_x8_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+				next_hops + i * 8);
+	}
+	rte_trie_lookup_bulk_8b(p, (uint8_t (*)[16])&ips[i * 8][0],
+			next_hops + i * 8, n - i * 8);
+}
diff --git a/lib/librte_fib/trie_avx512.h b/lib/librte_fib/trie_avx512.h
new file mode 100644
index 0000000..ef8c7f0
--- /dev/null
+++ b/lib/librte_fib/trie_avx512.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _TRIE_AVX512_H_
+#define _TRIE_AVX512_H_
+
+void
+rte_trie_vec_lookup_bulk_2b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_4b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+void
+rte_trie_vec_lookup_bulk_8b(void *p, uint8_t ips[][RTE_FIB6_IPV6_ADDR_SIZE],
+	uint64_t *next_hops, const unsigned int n);
+
+#endif /* _TRIE_AVX512_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
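For completeness, a small sketch of the runtime gate that get_vector_fn() applies
before handing out an AVX512 function pointer; the helper below mirrors that check
for illustration and is not part of the patch.

#include <rte_cpuflags.h>
#include <rte_vect.h>

static int
avx512_lookup_usable(void)
{
	/* CPU must report AVX512F at runtime */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0)
		return 0;
	/* EAL max SIMD bitwidth (e.g. --force-max-simd-bitwidth) must allow 512 */
	if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512)
		return 0;
	return 1;
}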

* [dpdk-dev] [PATCH v15 7/8] app/testfib: add support for different lookup functions
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (6 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

Added a -v option to switch between the different lookup implementations
to measure their performance and correctness.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test-fib/main.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 62 insertions(+), 3 deletions(-)

diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index 9cf01b1..9e6a4f2 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -99,6 +99,7 @@ static struct {
 	uint8_t		ent_sz;
 	uint8_t		rnd_lookup_ips_ratio;
 	uint8_t		print_fract;
+	uint8_t		lookup_fn;
 } config = {
 	.routes_file = NULL,
 	.lookup_ips_file = NULL,
@@ -110,7 +111,8 @@ static struct {
 	.tbl8 = DEFAULT_LPM_TBL8,
 	.ent_sz = 4,
 	.rnd_lookup_ips_ratio = 0,
-	.print_fract = 10
+	.print_fract = 10,
+	.lookup_fn = 0
 };
 
 struct rt_rule_4 {
@@ -638,7 +640,11 @@ print_usage(void)
 		"1/2/4/8 (default 4)>]\n"
 		"[-g <number of tbl8's for dir24_8 or trie FIBs>]\n"
 		"[-w <path to the file to dump routing table>]\n"
-		"[-u <path to the file to dump ip's for lookup>]\n",
+		"[-u <path to the file to dump ip's for lookup>]\n"
+		"[-v <type of lookup function:"
+		"\ts1, s2, s3 (3 types of scalar), v (vector) -"
+		" for DIR24_8 based FIB\n"
+		"\ts, v - for TRIE based ipv6 FIB>]\n",
 		config.prgname);
 }
 
@@ -681,7 +687,7 @@ parse_opts(int argc, char **argv)
 	int opt;
 	char *endptr;
 
-	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:s")) !=
+	while ((opt = getopt(argc, argv, "f:t:n:d:l:r:c6ab:e:g:w:u:sv:")) !=
 			-1) {
 		switch (opt) {
 		case 'f':
@@ -769,6 +775,23 @@ parse_opts(int argc, char **argv)
 				rte_exit(-EINVAL, "Invalid option -g\n");
 			}
 			break;
+		case 'v':
+			if ((strcmp(optarg, "s1") == 0) ||
+					(strcmp(optarg, "s") == 0)) {
+				config.lookup_fn = 1;
+				break;
+			} else if (strcmp(optarg, "v") == 0) {
+				config.lookup_fn = 2;
+				break;
+			} else if (strcmp(optarg, "s2") == 0) {
+				config.lookup_fn = 3;
+				break;
+			} else if (strcmp(optarg, "s3") == 0) {
+				config.lookup_fn = 4;
+				break;
+			}
+			print_usage();
+			rte_exit(-EINVAL, "Invalid option -v %s\n", optarg);
 		default:
 			print_usage();
 			rte_exit(-EINVAL, "Invalid options\n");
@@ -846,6 +869,27 @@ run_v4(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib_select_lookup(fib,
+				RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib_select_lookup(fib,
+				RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512);
+		else if (config.lookup_fn == 3)
+			ret = rte_fib_select_lookup(fib,
+				RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE);
+		else if (config.lookup_fn == 4)
+			ret = rte_fib_select_lookup(fib,
+				RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
@@ -1025,6 +1069,21 @@ run_v6(void)
 		return -rte_errno;
 	}
 
+	if (config.lookup_fn != 0) {
+		if (config.lookup_fn == 1)
+			ret = rte_fib6_select_lookup(fib,
+				RTE_FIB6_LOOKUP_TRIE_SCALAR);
+		else if (config.lookup_fn == 2)
+			ret = rte_fib6_select_lookup(fib,
+				RTE_FIB6_LOOKUP_TRIE_VECTOR_AVX512);
+		else
+			ret = -EINVAL;
+		if (ret != 0) {
+			printf("Can not init lookup function\n");
+			return ret;
+		}
+	}
+
 	for (k = config.print_fract, i = 0; k > 0; k--) {
 		start = rte_rdtsc_precise();
 		for (j = 0; j < (config.nb_routes - i) / k; j++) {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread
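To summarize the new option, the mapping below mirrors parse_opts()/run_v4()/run_v6();
the arrays themselves are illustrative and not part of the patch.

#include <rte_fib.h>
#include <rte_fib6.h>

static const struct {
	const char *opt;		/* -v argument */
	enum rte_fib_lookup_type type;	/* IPv4 (DIR24_8) lookup selected */
} v4_lookup_opts[] = {
	{ "s1", RTE_FIB_LOOKUP_DIR24_8_SCALAR_MACRO },
	{ "s2", RTE_FIB_LOOKUP_DIR24_8_SCALAR_INLINE },
	{ "s3", RTE_FIB_LOOKUP_DIR24_8_SCALAR_UNI },
	{ "v",  RTE_FIB_LOOKUP_DIR24_8_VECTOR_AVX512 },
};

static const struct {
	const char *opt;		/* -v argument */
	enum rte_fib6_lookup_type type;	/* IPv6 (TRIE) lookup selected */
} v6_lookup_opts[] = {
	{ "s",  RTE_FIB6_LOOKUP_TRIE_SCALAR },	/* "s1" is accepted as an alias */
	{ "v",  RTE_FIB6_LOOKUP_TRIE_VECTOR_AVX512 },
};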

* [dpdk-dev] [PATCH v15 8/8] fib: remove unnecessary type of fib
  2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
                                             ` (7 preceding siblings ...)
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
@ 2020-10-27 15:11                           ` Vladimir Medvedkin
  8 siblings, 0 replies; 199+ messages in thread
From: Vladimir Medvedkin @ 2020-10-27 15:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, jerinj, mdr, thomas, konstantin.ananyev,
	bruce.richardson, ciara.power

The FIB type RTE_FIB_TYPE_MAX is used only for sanity checks;
remove it to prevent applications from starting to use it.
The same applies to FIB6's RTE_FIB6_TYPE_MAX.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 app/test/test_fib.c       | 2 +-
 app/test/test_fib6.c      | 2 +-
 lib/librte_fib/rte_fib.c  | 2 +-
 lib/librte_fib/rte_fib.h  | 3 +--
 lib/librte_fib/rte_fib6.c | 2 +-
 lib/librte_fib/rte_fib6.h | 3 +--
 6 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/app/test/test_fib.c b/app/test/test_fib.c
index ca80a5d..e46b993 100644
--- a/app/test/test_fib.c
+++ b/app/test/test_fib.c
@@ -61,7 +61,7 @@ test_create_invalid(void)
 		"Call succeeded with invalid parameters\n");
 	config.max_routes = MAX_ROUTES;
 
-	config.type = RTE_FIB_TYPE_MAX;
+	config.type = RTE_FIB_DIR24_8 + 1;
 	fib = rte_fib_create(__func__, SOCKET_ID_ANY, &config);
 	RTE_TEST_ASSERT(fib == NULL,
 		"Call succeeded with invalid parameters\n");
diff --git a/app/test/test_fib6.c b/app/test/test_fib6.c
index af589fe..74abfc7 100644
--- a/app/test/test_fib6.c
+++ b/app/test/test_fib6.c
@@ -63,7 +63,7 @@ test_create_invalid(void)
 		"Call succeeded with invalid parameters\n");
 	config.max_routes = MAX_ROUTES;
 
-	config.type = RTE_FIB6_TYPE_MAX;
+	config.type = RTE_FIB6_TRIE + 1;
 	fib = rte_fib6_create(__func__, SOCKET_ID_ANY, &config);
 	RTE_TEST_ASSERT(fib == NULL,
 		"Call succeeded with invalid parameters\n");
diff --git a/lib/librte_fib/rte_fib.c b/lib/librte_fib/rte_fib.c
index 398dbf9..b354d4b 100644
--- a/lib/librte_fib/rte_fib.c
+++ b/lib/librte_fib/rte_fib.c
@@ -159,7 +159,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 
 	/* Check user arguments. */
 	if ((name == NULL) || (conf == NULL) ||	(conf->max_routes < 0) ||
-			(conf->type >= RTE_FIB_TYPE_MAX)) {
+			(conf->type > RTE_FIB_DIR24_8)) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
diff --git a/lib/librte_fib/rte_fib.h b/lib/librte_fib/rte_fib.h
index 8688c93..9a49313 100644
--- a/lib/librte_fib/rte_fib.h
+++ b/lib/librte_fib/rte_fib.h
@@ -34,8 +34,7 @@ struct rte_rib;
 /** Type of FIB struct */
 enum rte_fib_type {
 	RTE_FIB_DUMMY,		/**< RIB tree based FIB */
-	RTE_FIB_DIR24_8,	/**< DIR24_8 based FIB */
-	RTE_FIB_TYPE_MAX
+	RTE_FIB_DIR24_8		/**< DIR24_8 based FIB */
 };
 
 /** Modify FIB function */
diff --git a/lib/librte_fib/rte_fib6.c b/lib/librte_fib/rte_fib6.c
index 1f5af0f..44cc0c9 100644
--- a/lib/librte_fib/rte_fib6.c
+++ b/lib/librte_fib/rte_fib6.c
@@ -160,7 +160,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 
 	/* Check user arguments. */
 	if ((name == NULL) || (conf == NULL) || (conf->max_routes < 0) ||
-			(conf->type >= RTE_FIB6_TYPE_MAX)) {
+			(conf->type > RTE_FIB6_TRIE)) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
diff --git a/lib/librte_fib/rte_fib6.h b/lib/librte_fib/rte_fib6.h
index 887de7b..adb5005 100644
--- a/lib/librte_fib/rte_fib6.h
+++ b/lib/librte_fib/rte_fib6.h
@@ -35,8 +35,7 @@ struct rte_rib6;
 /** Type of FIB struct */
 enum rte_fib6_type {
 	RTE_FIB6_DUMMY,		/**< RIB6 tree based FIB */
-	RTE_FIB6_TRIE,		/**< TRIE based fib  */
-	RTE_FIB6_TYPE_MAX
+	RTE_FIB6_TRIE		/**< TRIE based fib  */
 };
 
 /** Modify FIB function */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 199+ messages in thread

* Re: [dpdk-dev] [PATCH v15 0/8] fib: implement AVX512 vector lookup
  2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 " Vladimir Medvedkin
@ 2020-10-28 20:51                             ` David Marchand
  0 siblings, 0 replies; 199+ messages in thread
From: David Marchand @ 2020-10-28 20:51 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, Jerin Jacob Kollanukkaran, Ray Kinsella, Thomas Monjalon,
	Ananyev, Konstantin, Bruce Richardson, Ciara Power

On Tue, Oct 27, 2020 at 4:11 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch series implements vectorized lookup using AVX512 for
> ipv4 dir24_8 and ipv6 trie algorithms.
> Also introduced rte_fib_set_lookup_fn() to change lookup function type.
> Added option to select lookup function type in testfib application.

Series applied, thanks Vladimir.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 199+ messages in thread

end of thread, other threads:[~2020-10-28 20:51 UTC | newest]

Thread overview: 199+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-09 12:43 [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Vladimir Medvedkin
2020-03-09 12:43 ` [dpdk-dev] [PATCH 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
2020-03-09 16:39   ` Jerin Jacob
2020-03-10 14:44     ` Medvedkin, Vladimir
2020-03-20  8:23       ` Jerin Jacob
2020-03-09 12:43 ` [dpdk-dev] [PATCH 2/6] fib: make lookup function type configurable Vladimir Medvedkin
2020-04-01  5:47   ` Ray Kinsella
2020-04-01 18:48     ` Medvedkin, Vladimir
2020-03-09 12:43 ` [dpdk-dev] [PATCH 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-04-01  5:54   ` Ray Kinsella
2020-03-09 12:43 ` [dpdk-dev] [PATCH 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
2020-03-09 12:43 ` [dpdk-dev] [PATCH 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-03-09 12:43 ` [dpdk-dev] [PATCH 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-04-16  9:55 ` [dpdk-dev] [PATCH 0/6] fib: implement AVX512 vector lookup Thomas Monjalon
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 " Vladimir Medvedkin
2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 0/8] " Vladimir Medvedkin
2020-05-19 12:23     ` David Marchand
2020-05-19 12:57       ` Medvedkin, Vladimir
2020-05-19 13:00         ` David Marchand
2020-06-19 10:34     ` Medvedkin, Vladimir
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 " Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 " Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 " Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 " Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 " Vladimir Medvedkin
2020-10-06 14:31               ` David Marchand
2020-10-06 15:13                 ` Medvedkin, Vladimir
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 " Vladimir Medvedkin
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 " Vladimir Medvedkin
2020-10-16 15:15                   ` David Marchand
2020-10-16 15:32                     ` Medvedkin, Vladimir
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 " Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 0/7] " Vladimir Medvedkin
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 " Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 0/8] " Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 " Vladimir Medvedkin
2020-10-28 20:51                             ` David Marchand
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 1/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-27 15:11                           ` [dpdk-dev] [PATCH v15 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 1/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-26 13:58                           ` David Marchand
2020-10-26 17:51                             ` Medvedkin, Vladimir
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 2/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 3/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 4/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 5/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-25 18:07                         ` [dpdk-dev] [PATCH v14 6/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 7/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-25 18:08                         ` [dpdk-dev] [PATCH v14 8/8] fib: remove unnecessary type of fib Vladimir Medvedkin
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 1/7] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-22  7:55                         ` Kinsella, Ray
2020-10-22 11:52                         ` David Marchand
2020-10-22 15:11                           ` Medvedkin, Vladimir
2020-10-23 10:29                             ` David Marchand
2020-10-23 16:09                               ` Medvedkin, Vladimir
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-22  7:56                         ` Kinsella, Ray
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-22  7:56                         ` Kinsella, Ray
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-22  7:56                         ` Kinsella, Ray
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-22  7:56                         ` Kinsella, Ray
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-22  7:57                         ` Kinsella, Ray
2020-10-19 15:05                       ` [dpdk-dev] [PATCH v13 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-22  7:57                         ` Kinsella, Ray
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 1/7] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 2/7] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 3/7] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 4/7] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 5/7] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 6/7] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-19 10:17                     ` [dpdk-dev] [PATCH v12 7/7] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-10-19  6:35                     ` Kinsella, Ray
2020-10-19 10:12                       ` Medvedkin, Vladimir
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-16 15:42                   ` [dpdk-dev] [PATCH v11 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-10-14 12:17                   ` David Marchand
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-13 13:13                 ` [dpdk-dev] [PATCH v10 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-13 13:14                 ` [dpdk-dev] [PATCH v10 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-10-13 10:27                 ` Bruce Richardson
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-10-07 16:10               ` [dpdk-dev] [PATCH v9 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-09-30 10:35             ` [dpdk-dev] [PATCH v8 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-07-16 11:51             ` Ananyev, Konstantin
2020-07-16 14:32             ` Thomas Monjalon
2020-09-30 11:06               ` Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-07-16 11:53             ` Ananyev, Konstantin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-07-13 11:56           ` [dpdk-dev] [PATCH v7 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-07-13 22:19           ` [dpdk-dev] [PATCH v6 0/8] fib: implement AVX512 vector lookup Stephen Hemminger
2020-07-14  7:31             ` Kinsella, Ray
2020-07-14 14:38               ` Stephen Hemminger
2020-07-15  9:47                 ` Thomas Monjalon
2020-07-15 10:35                   ` Medvedkin, Vladimir
2020-07-15 11:59                     ` Thomas Monjalon
2020-07-15 12:29                       ` Medvedkin, Vladimir
2020-07-15 12:45                         ` Thomas Monjalon
2020-07-17 16:43                           ` Richardson, Bruce
2020-07-19 10:04                             ` Thomas Monjalon
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-07-13 11:33           ` David Marchand
2020-07-13 11:44             ` Medvedkin, Vladimir
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-07-13 11:11         ` [dpdk-dev] [PATCH v6 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 1/8] eal/x86: introduce AVX 512-bit type Vladimir Medvedkin
2020-07-10 21:49         ` Thomas Monjalon
2020-07-13 10:23           ` Medvedkin, Vladimir
2020-07-13 10:25             ` Thomas Monjalon
2020-07-13 10:39               ` Medvedkin, Vladimir
2020-07-13 10:45                 ` Ananyev, Konstantin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-07-10 14:46       ` [dpdk-dev] [PATCH v5 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
2020-07-09 13:48       ` David Marchand
2020-07-09 14:52         ` Medvedkin, Vladimir
2020-07-09 15:20           ` David Marchand
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-07-08 20:16     ` [dpdk-dev] [PATCH v4 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 1/8] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
2020-06-24 13:14     ` Ananyev, Konstantin
2020-07-06 17:28     ` Thomas Monjalon
2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 2/8] fib: make lookup function type configurable Vladimir Medvedkin
2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 3/8] fib: move lookup definition into the header file Vladimir Medvedkin
2020-07-08 11:23     ` Ananyev, Konstantin
2020-05-19 12:12   ` [dpdk-dev] [PATCH v3 4/8] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-06-24 13:18     ` Ananyev, Konstantin
2020-07-08 19:57       ` Medvedkin, Vladimir
2020-07-06 19:21     ` Thomas Monjalon
2020-07-08 20:19       ` Medvedkin, Vladimir
2020-07-07  9:44     ` Bruce Richardson
2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 5/8] fib6: make lookup function type configurable Vladimir Medvedkin
2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 6/8] fib6: move lookup definition into the header file Vladimir Medvedkin
2020-07-08 11:27     ` Ananyev, Konstantin
2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 7/8] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-07-08 12:23     ` Ananyev, Konstantin
2020-07-08 19:56       ` Medvedkin, Vladimir
2020-05-19 12:13   ` [dpdk-dev] [PATCH v3 8/8] app/testfib: add support for different lookup functions Vladimir Medvedkin
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 1/6] eal: introduce zmm type for AVX 512-bit Vladimir Medvedkin
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 2/6] fib: make lookup function type configurable Vladimir Medvedkin
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 3/6] fib: introduce AVX512 lookup Vladimir Medvedkin
2020-05-14 12:40   ` Bruce Richardson
2020-05-14 12:43     ` Medvedkin, Vladimir
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 4/6] fib6: make lookup function type configurable Vladimir Medvedkin
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 5/6] fib6: introduce AVX512 lookup Vladimir Medvedkin
2020-05-14 12:28 ` [dpdk-dev] [PATCH v2 6/6] app/testfib: add support for different lookup functions Vladimir Medvedkin
