DPDK patches and discussions
* [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform
@ 2015-07-09  8:25 Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 01/11] test: limit x86 cpuflags checks to x86 builds Zhigang Lu
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev

This series adds support for the EZchip TILE-Gx family of SoCs.  The
architecture port itself is fairly straightforward due to its reliance
on generics for the most part.

In addition to adding TILE-Gx architecture specific code, this series
includes a few cross-platform fixes for DPDK (cpuflags, SSE-related
checks, etc.), as well as minor extensions to accommodate a wider range
of hugepage sizes and configurable mempool element alignment boundaries.

Changes in this series:
  v5: Added the Signed-off-by line for Cyril.
  v4: Added the Acked-by line, removed an already checked-in patch.
      Also amended commit log for "eal: allow empty compile time flags".
  v3: Renewed the Signed-off-by line.
  v2: Removed RTE_LIBNAME per Thomas' feedback.

Cyril Chemparathy (10):
  test: limit x86 cpuflags checks to x86 builds
  hash: check SSE flags only on x86 builds
  config: remove RTE_LIBNAME definition.
  memzone: refactor rte_memzone_reserve() variants
  memzone: allow multiple pagesizes to be requested
  mempool: allow config override on element alignment
  tile: add page sizes for TILE-Gx/Mx platforms
  tile: initial TILE-Gx support.
  tile: Add TILE-Gx mPIPE poll mode driver.
  maintainers: claim responsibility for TILE-Gx platform

Zhigang Lu (1):
  eal: allow empty compile time flags RTE_COMPILE_TIME_CPUFLAGS

 MAINTAINERS                                        |    4 +
 app/test/test_cpuflags.c                           |    6 +-
 config/common_bsdapp                               |    1 -
 config/common_linuxapp                             |    1 -
 config/defconfig_ppc_64-power8-linuxapp-gcc        |    2 -
 config/defconfig_tile-tilegx-linuxapp-gcc          |   70 +
 drivers/net/Makefile                               |    1 +
 drivers/net/mpipe/Makefile                         |   46 +
 drivers/net/mpipe/mpipe_tilegx.c                   | 1637 ++++++++++++++++++++
 lib/librte_eal/common/eal_common_cpuflags.c        |    5 +-
 lib/librte_eal/common/eal_common_memzone.c         |  141 +-
 .../common/include/arch/tile/rte_atomic.h          |   86 +
 .../common/include/arch/tile/rte_byteorder.h       |   91 ++
 .../common/include/arch/tile/rte_cpuflags.h        |   85 +
 .../common/include/arch/tile/rte_cycles.h          |   70 +
 .../common/include/arch/tile/rte_memcpy.h          |   93 ++
 .../common/include/arch/tile/rte_prefetch.h        |   61 +
 .../common/include/arch/tile/rte_rwlock.h          |   70 +
 .../common/include/arch/tile/rte_spinlock.h        |   92 ++
 lib/librte_eal/common/include/rte_memory.h         |   16 +-
 lib/librte_eal/common/include/rte_memzone.h        |   50 +-
 lib/librte_hash/rte_hash_crc.h                     |    2 +
 lib/librte_mempool/rte_mempool.c                   |   16 +-
 lib/librte_mempool/rte_mempool.h                   |    6 +
 mk/arch/tile/rte.vars.mk                           |   39 +
 mk/machine/tilegx/rte.vars.mk                      |   57 +
 mk/rte.app.mk                                      |    1 +
 mk/rte.vars.mk                                     |    5 +-
 28 files changed, 2637 insertions(+), 117 deletions(-)
 create mode 100644 config/defconfig_tile-tilegx-linuxapp-gcc
 create mode 100644 drivers/net/mpipe/Makefile
 create mode 100644 drivers/net/mpipe/mpipe_tilegx.c
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_atomic.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_byteorder.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_cpuflags.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_cycles.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_memcpy.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_prefetch.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_rwlock.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_spinlock.h
 create mode 100644 mk/arch/tile/rte.vars.mk
 create mode 100644 mk/machine/tilegx/rte.vars.mk

-- 
2.1.2

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v5 01/11] test: limit x86 cpuflags checks to x86 builds
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 02/11] hash: check SSE flags only on " Zhigang Lu
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

The original code mistakenly defaulted to X86 when RTE_ARCH_PPC_64 was
left undefined.  This did not accommodate other non-PPC/non-X86
architectures.  This patch fixes this issue.
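As an illustrative sketch (ARCH_PPC_64 and ARCH_X86 stand in for the real RTE_ARCH_* macros; the check bodies are simplified), the fix replaces a bare #else fallback with independent per-architecture guards, so an architecture matching neither set compiles cleanly and simply runs no arch-specific checks:

```c
#include <assert.h>
#include <string.h>

static char checks[16] = "none";

static void run_arch_checks(void)
{
#if defined(ARCH_PPC_64)
	strcpy(checks, "ppc");	/* ICACHE_SNOOP etc. */
#endif
#if defined(ARCH_X86)		/* before the fix, reached via #else */
	strcpy(checks, "x86");	/* SSE, INVTSC etc. */
#endif
}
```

Built with neither macro defined, the function body is empty and `checks` stays at `"none"` rather than falling into the x86 branch.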

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test/test_cpuflags.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/app/test/test_cpuflags.c b/app/test/test_cpuflags.c
index 5aeba5d..5b92061 100644
--- a/app/test/test_cpuflags.c
+++ b/app/test/test_cpuflags.c
@@ -113,7 +113,9 @@ test_cpuflags(void)
 
 	printf("Check for ICACHE_SNOOP:\t\t");
 	CHECK_FOR_FLAG(RTE_CPUFLAG_ICACHE_SNOOP);
-#else
+#endif
+
+#if defined(RTE_ARCH_X86_64) || defined(RTE_ARCH_I686)
 	printf("Check for SSE:\t\t");
 	CHECK_FOR_FLAG(RTE_CPUFLAG_SSE);
 
@@ -149,8 +151,6 @@ test_cpuflags(void)
 
 	printf("Check for INVTSC:\t");
 	CHECK_FOR_FLAG(RTE_CPUFLAG_INVTSC);
-
-
 #endif
 
 	/*
-- 
2.1.2


* [dpdk-dev] [PATCH v5 02/11] hash: check SSE flags only on x86 builds
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 01/11] test: limit x86 cpuflags checks to x86 builds Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 03/11] eal: allow empty compile time flags RTE_COMPILE_TIME_CPUFLAGS Zhigang Lu
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

This is necessary because the required CPU flags may not be defined on
other architectures.
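A sketch of the fall-through downgrade chain this guards (enum, probe, and function names are illustrative, not the exact DPDK identifiers): on x86 each case falls into the next so an unsupported algorithm degrades toward the software CRC, while on other builds the guarded cases compile out entirely, which is why the SSE flags need not exist there. This sketch is built without ARCH_X86 defined:

```c
#include <assert.h>

enum crc_alg { CRC_SW, CRC_SSE42, CRC_SSE42_X64 };

static enum crc_alg crc32_alg = CRC_SW;
static int have_em64t, have_sse42;	/* pretend CPU-flag probes */

static void set_alg(enum crc_alg alg)
{
	switch (alg) {
#if defined(ARCH_X86)			/* compiled out on other arches */
	case CRC_SSE42_X64:
		if (!have_em64t)
			alg = CRC_SSE42;
		/* fall through */
	case CRC_SSE42:
		if (!have_sse42)
			alg = CRC_SW;
		/* fall through */
#endif
	case CRC_SW:
		crc32_alg = alg;
	default:
		break;
	}
}
```

With the x86 cases removed by the preprocessor, a request for an SSE algorithm hits `default` and leaves the current setting untouched.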

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_hash/rte_hash_crc.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_hash/rte_hash_crc.h b/lib/librte_hash/rte_hash_crc.h
index abdbd9a..1f6f5bf 100644
--- a/lib/librte_hash/rte_hash_crc.h
+++ b/lib/librte_hash/rte_hash_crc.h
@@ -425,12 +425,14 @@ static inline void
 rte_hash_crc_set_alg(uint8_t alg)
 {
 	switch (alg) {
+#if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_64)
 	case CRC32_SSE42_x64:
 		if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T))
 			alg = CRC32_SSE42;
 	case CRC32_SSE42:
 		if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2))
 			alg = CRC32_SW;
+#endif
 	case CRC32_SW:
 		crc32_alg = alg;
 	default:
-- 
2.1.2


* [dpdk-dev] [PATCH v5 03/11] eal: allow empty compile time flags RTE_COMPILE_TIME_CPUFLAGS
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 01/11] test: limit x86 cpuflags checks to x86 builds Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 02/11] hash: check SSE flags only on " Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 04/11] config: remove RTE_LIBNAME definition Zhigang Lu
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

When RTE_COMPILE_TIME_CPUFLAGS is empty, the rte_cpu_check_supported()
code breaks with a "comparison is always false due to limited range of
data type" warning.  This is because the compile_time_flags[] array is
empty.  Assigning the array dimension to a local variable apparently
solves this.
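A minimal sketch of the workaround (DIM mirrors RTE_DIM; the flag values are placeholders): when the flag list is built empty the array dimension is 0, and some gcc versions reject the literal `i < sizeof(a)/sizeof(a[0])` comparison as always-false under -Werror. Routing the dimension through a variable keeps the same trip count without the diagnostic:

```c
#include <assert.h>

#define DIM(a) (sizeof(a) / sizeof((a)[0]))	/* as RTE_DIM() */

static const int compile_time_flags[] = { 10, 20, 30 };

static unsigned check_flags(void)
{
	unsigned count = DIM(compile_time_flags), i, checked = 0;

	for (i = 0; i < count; i++)	/* variable, not a 0 constant */
		checked++;
	return checked;
}
```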

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eal/common/eal_common_cpuflags.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_cpuflags.c b/lib/librte_eal/common/eal_common_cpuflags.c
index 6fd360c..8ba7b30 100644
--- a/lib/librte_eal/common/eal_common_cpuflags.c
+++ b/lib/librte_eal/common/eal_common_cpuflags.c
@@ -30,6 +30,7 @@
  *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
+#include <rte_common.h>
 #include <rte_cpuflags.h>
 
 /*
@@ -62,10 +63,10 @@ rte_cpu_check_supported(void)
 	static const enum rte_cpu_flag_t compile_time_flags[] = {
 			RTE_COMPILE_TIME_CPUFLAGS
 	};
-	unsigned i;
+	unsigned count = RTE_DIM(compile_time_flags), i;
 	int ret;
 
-	for (i = 0; i < sizeof(compile_time_flags)/sizeof(compile_time_flags[0]); i++) {
+	for (i = 0; i < count; i++) {
 		ret = rte_cpu_get_flag_enabled(compile_time_flags[i]);
 
 		if (ret < 0) {
-- 
2.1.2


* [dpdk-dev] [PATCH v5 04/11] config: remove RTE_LIBNAME definition.
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (2 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 03/11] eal: allow empty compile time flags RTE_COMPILE_TIME_CPUFLAGS Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 05/11] memzone: refactor rte_memzone_reserve() variants Zhigang Lu
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

The library name is now pinned to "dpdk" instead of intel_dpdk,
powerpc_dpdk, etc.  As a result, we no longer need this config item.
This patch removes it.

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 config/common_bsdapp                        | 1 -
 config/common_linuxapp                      | 1 -
 config/defconfig_ppc_64-power8-linuxapp-gcc | 2 --
 mk/rte.vars.mk                              | 5 +----
 4 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/config/common_bsdapp b/config/common_bsdapp
index dfa61a3..7112f1c 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -87,7 +87,6 @@ CONFIG_RTE_BUILD_SHARED_LIB=n
 # Combine to one single library
 #
 CONFIG_RTE_BUILD_COMBINE_LIBS=n
-CONFIG_RTE_LIBNAME=intel_dpdk
 
 #
 # Use newest code breaking previous ABI
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 1732b70..46297cd 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -87,7 +87,6 @@ CONFIG_RTE_BUILD_SHARED_LIB=n
 # Combine to one single library
 #
 CONFIG_RTE_BUILD_COMBINE_LIBS=n
-CONFIG_RTE_LIBNAME="intel_dpdk"
 
 #
 # Use newest code breaking previous ABI
diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
index d97a885..f1af518 100644
--- a/config/defconfig_ppc_64-power8-linuxapp-gcc
+++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
@@ -39,8 +39,6 @@ CONFIG_RTE_ARCH_64=y
 CONFIG_RTE_TOOLCHAIN="gcc"
 CONFIG_RTE_TOOLCHAIN_GCC=y
 
-CONFIG_RTE_LIBNAME="powerpc_dpdk"
-
 # Note: Power doesn't have this support
 CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
 
diff --git a/mk/rte.vars.mk b/mk/rte.vars.mk
index 0469064..f87cf4b 100644
--- a/mk/rte.vars.mk
+++ b/mk/rte.vars.mk
@@ -65,10 +65,7 @@ ifneq ($(BUILDING_RTE_SDK),)
   RTE_SDK_BIN := $(RTE_OUTPUT)
 endif
 
-RTE_LIBNAME := $(CONFIG_RTE_LIBNAME:"%"=%)
-ifeq ($(RTE_LIBNAME),)
-RTE_LIBNAME := intel_dpdk
-endif
+RTE_LIBNAME := dpdk
 
 # RTE_TARGET is deducted from config when we are building the SDK.
 # Else, when building an external app, RTE_TARGET must be specified
-- 
2.1.2


* [dpdk-dev] [PATCH v5 05/11] memzone: refactor rte_memzone_reserve() variants
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (3 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 04/11] config: remove RTE_LIBNAME definition Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 06/11] memzone: allow multiple pagesizes to be requested Zhigang Lu
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

The definitions of rte_memzone_reserve_aligned() and
rte_memzone_reserve_bounded() were identical with the exception of the
bound argument passed into rte_memzone_reserve_thread_safe().

This patch removes this replication of code by unifying it into
rte_memzone_reserve_thread_safe(), which is then called by all three
variants of rte_memzone_reserve().
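The shape after the refactor can be sketched as follows (names abbreviated, locking and segment search elided): one static core takes the full argument set, and the three public variants become one-line wrappers that supply defaults.

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64

static unsigned last_align, last_bound;

static void *reserve_thread_safe(const char *name, size_t len,
				 unsigned align, unsigned bound)
{
	/* real code: take rwlock, search free segments, drop rwlock */
	(void)name; (void)len;
	last_align = align;
	last_bound = bound;
	return &last_align;		/* dummy non-NULL descriptor */
}

static void *reserve(const char *name, size_t len)
{
	return reserve_thread_safe(name, len, CACHE_LINE_SIZE, 0);
}

static void *reserve_aligned(const char *name, size_t len, unsigned align)
{
	return reserve_thread_safe(name, len, align, 0);
}

static void *reserve_bounded(const char *name, size_t len,
			     unsigned align, unsigned bound)
{
	return reserve_thread_safe(name, len, align, bound);
}
```

Only the core function ever touches the lock, so the duplicated validate/lock/search/unlock sequence exists in exactly one place.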

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 lib/librte_eal/common/eal_common_memzone.c | 77 +++++++++++++-----------------
 1 file changed, 33 insertions(+), 44 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index aee184a..1ea502b 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -77,18 +77,6 @@ memzone_lookup_thread_unsafe(const char *name)
 }
 
 /*
- * Return a pointer to a correctly filled memzone descriptor. If the
- * allocation cannot be done, return NULL.
- */
-const struct rte_memzone *
-rte_memzone_reserve(const char *name, size_t len, int socket_id,
-		      unsigned flags)
-{
-	return rte_memzone_reserve_aligned(name,
-			len, socket_id, flags, RTE_CACHE_LINE_SIZE);
-}
-
-/*
  * Helper function for memzone_reserve_aligned_thread_unsafe().
  * Calculate address offset from the start of the segment.
  * Align offset in that way that it satisfy istart alignmnet and
@@ -307,13 +295,10 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	return mz;
 }
 
-/*
- * Return a pointer to a correctly filled memzone descriptor (with a
- * specified alignment). If the allocation cannot be done, return NULL.
- */
-const struct rte_memzone *
-rte_memzone_reserve_aligned(const char *name, size_t len,
-		int socket_id, unsigned flags, unsigned align)
+static const struct rte_memzone *
+rte_memzone_reserve_thread_safe(const char *name, size_t len,
+				int socket_id, unsigned flags, unsigned align,
+				unsigned bound)
 {
 	struct rte_mem_config *mcfg;
 	const struct rte_memzone *mz = NULL;
@@ -331,7 +316,7 @@ rte_memzone_reserve_aligned(const char *name, size_t len,
 	rte_rwlock_write_lock(&mcfg->mlock);
 
 	mz = memzone_reserve_aligned_thread_unsafe(
-		name, len, socket_id, flags, align, 0);
+		name, len, socket_id, flags, align, bound);
 
 	rte_rwlock_write_unlock(&mcfg->mlock);
 
@@ -340,36 +325,40 @@ rte_memzone_reserve_aligned(const char *name, size_t len,
 
 /*
  * Return a pointer to a correctly filled memzone descriptor (with a
- * specified alignment and boundary).
- * If the allocation cannot be done, return NULL.
+ * specified alignment and boundary). If the allocation cannot be done,
+ * return NULL.
  */
 const struct rte_memzone *
-rte_memzone_reserve_bounded(const char *name, size_t len,
-		int socket_id, unsigned flags, unsigned align, unsigned bound)
+rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
+			    unsigned flags, unsigned align, unsigned bound)
 {
-	struct rte_mem_config *mcfg;
-	const struct rte_memzone *mz = NULL;
-
-	/* both sizes cannot be explicitly called for */
-	if (((flags & RTE_MEMZONE_1GB) && (flags & RTE_MEMZONE_2MB))
-		|| ((flags & RTE_MEMZONE_16MB) && (flags & RTE_MEMZONE_16GB))) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* get pointer to global configuration */
-	mcfg = rte_eal_get_configuration()->mem_config;
-
-	rte_rwlock_write_lock(&mcfg->mlock);
-
-	mz = memzone_reserve_aligned_thread_unsafe(
-		name, len, socket_id, flags, align, bound);
-
-	rte_rwlock_write_unlock(&mcfg->mlock);
+	return rte_memzone_reserve_thread_safe(name, len, socket_id, flags,
+					       align, bound);
+}
 
-	return mz;
+/*
+ * Return a pointer to a correctly filled memzone descriptor (with a
+ * specified alignment). If the allocation cannot be done, return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
+			    unsigned flags, unsigned align)
+{
+	return rte_memzone_reserve_thread_safe(name, len, socket_id, flags,
+					       align, 0);
 }
 
+/*
+ * Return a pointer to a correctly filled memzone descriptor. If the
+ * allocation cannot be done, return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_reserve(const char *name, size_t len, int socket_id,
+		    unsigned flags)
+{
+	return rte_memzone_reserve_thread_safe(name, len, socket_id,
+					       flags, RTE_CACHE_LINE_SIZE, 0);
+}
 
 /*
  * Lookup for the memzone identified by the given name
-- 
2.1.2


* [dpdk-dev] [PATCH v5 06/11] memzone: allow multiple pagesizes to be requested
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (4 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 05/11] memzone: refactor rte_memzone_reserve() variants Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 07/11] mempool: allow config override on element alignment Zhigang Lu
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

This patch extends the memzone allocator to remove the restriction
that prevented callers from specifying multiple page sizes in the
flags argument.

In doing so, we also sanitize the free segment matching logic to get
rid of architecture-specific disjunctions (2MB vs 1GB on x86, and 16MB
vs 16GB on PPC), thereby allowing a broader range of hugepages on
architectures that support them.
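The flags-to-mask conversion can be sketched as below (flag and page-size names shortened to two entries; the real code handles the full set): each requested page-size flag ORs its page size into a 64-bit mask, an empty mask means "any page size", and per-segment matching collapses to a single AND instead of a per-architecture chain of comparisons.

```c
#include <assert.h>
#include <stdint.h>

#define MZ_2MB	0x1
#define MZ_1GB	0x2
#define PG_2M	(UINT64_C(1) << 21)
#define PG_1G	(UINT64_C(1) << 30)

static uint64_t flags_to_size_mask(unsigned flags)
{
	uint64_t mask = 0;

	if (flags & MZ_2MB)
		mask |= PG_2M;
	if (flags & MZ_1GB)
		mask |= PG_1G;
	return mask ? mask : UINT64_MAX;   /* no flags: any size is fine */
}

/* a free segment matches when its page size bit is in the mask */
static int segment_matches(uint64_t hugepage_sz, uint64_t size_mask)
{
	return (size_mask & hugepage_sz) != 0;
}
```

Because page sizes are distinct powers of two, the mask test works for any number of simultaneously requested sizes, which is what lifts the old one-size-per-call restriction.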

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 lib/librte_eal/common/eal_common_memzone.c | 58 ++++++++++++++----------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 1ea502b..76bae72 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -113,7 +113,8 @@ align_phys_boundary(const struct rte_memseg *ms, size_t len, size_t align,
 
 static const struct rte_memzone *
 memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
-		int socket_id, unsigned flags, unsigned align, unsigned bound)
+		int socket_id, uint64_t size_mask, unsigned align,
+		unsigned bound)
 {
 	struct rte_mem_config *mcfg;
 	unsigned i = 0;
@@ -201,18 +202,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 		if ((requested_len + addr_offset) > free_memseg[i].len)
 			continue;
 
-		/* check flags for hugepage sizes */
-		if ((flags & RTE_MEMZONE_2MB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_1G)
-			continue;
-		if ((flags & RTE_MEMZONE_1GB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_2M)
-			continue;
-		if ((flags & RTE_MEMZONE_16MB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_16G)
-			continue;
-		if ((flags & RTE_MEMZONE_16GB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_16M)
+		if ((size_mask & free_memseg[i].hugepage_sz) == 0)
 			continue;
 
 		/* this segment is the best until now */
@@ -244,16 +234,6 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 
 	/* no segment found */
 	if (memseg_idx == -1) {
-		/*
-		 * If RTE_MEMZONE_SIZE_HINT_ONLY flag is specified,
-		 * try allocating again without the size parameter otherwise -fail.
-		 */
-		if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY)  &&
-		    ((flags & RTE_MEMZONE_1GB) || (flags & RTE_MEMZONE_2MB)
-		|| (flags & RTE_MEMZONE_16MB) || (flags & RTE_MEMZONE_16GB)))
-			return memzone_reserve_aligned_thread_unsafe(name,
-				len, socket_id, 0, align, bound);
-
 		rte_errno = ENOMEM;
 		return NULL;
 	}
@@ -302,13 +282,18 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len,
 {
 	struct rte_mem_config *mcfg;
 	const struct rte_memzone *mz = NULL;
-
-	/* both sizes cannot be explicitly called for */
-	if (((flags & RTE_MEMZONE_1GB) && (flags & RTE_MEMZONE_2MB))
-		|| ((flags & RTE_MEMZONE_16MB) && (flags & RTE_MEMZONE_16GB))) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	uint64_t size_mask = 0;
+
+	if (flags & RTE_MEMZONE_2MB)
+		size_mask |= RTE_PGSIZE_2M;
+	if (flags & RTE_MEMZONE_16MB)
+		size_mask |= RTE_PGSIZE_16M;
+	if (flags & RTE_MEMZONE_1GB)
+		size_mask |= RTE_PGSIZE_1G;
+	if (flags & RTE_MEMZONE_16GB)
+		size_mask |= RTE_PGSIZE_16G;
+	if (!size_mask)
+		size_mask = UINT64_MAX;
 
 	/* get pointer to global configuration */
 	mcfg = rte_eal_get_configuration()->mem_config;
@@ -316,7 +301,18 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len,
 	rte_rwlock_write_lock(&mcfg->mlock);
 
 	mz = memzone_reserve_aligned_thread_unsafe(
-		name, len, socket_id, flags, align, bound);
+		name, len, socket_id, size_mask, align, bound);
+
+	/*
+	 * If we failed to allocate the requested page size, and the 
+	 * RTE_MEMZONE_SIZE_HINT_ONLY flag is specified, try allocating
+	 * again.
+	 */
+	if (!mz && rte_errno == ENOMEM && size_mask != UINT64_MAX &&
+	    flags & RTE_MEMZONE_SIZE_HINT_ONLY) {
+		mz = memzone_reserve_aligned_thread_unsafe(
+			name, len, socket_id, UINT64_MAX, align, bound);
+	}
 
 	rte_rwlock_write_unlock(&mcfg->mlock);
 
-- 
2.1.2


* [dpdk-dev] [PATCH v5 07/11] mempool: allow config override on element alignment
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (5 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 06/11] memzone: allow multiple pagesizes to be requested Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 08/11] tile: add page sizes for TILE-Gx/Mx platforms Zhigang Lu
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

On TILE-Gx and TILE-Mx platforms, the buffers fed into the hardware
buffer manager require a 128-byte alignment.  With this change, we
allow a configuration-based override of the element alignment, and
default to RTE_CACHE_LINE_SIZE if left unspecified.
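The override pattern can be sketched as follows (RTE_ prefixes dropped; the 128-byte value is the TILE-Gx case, supplied here only as a comment): a platform defconfig may predefine the alignment macro, otherwise it falls back to the cache line size, so existing platforms see no behavioral change.

```c
#include <assert.h>

#define CACHE_LINE_SIZE 64

#ifndef MEMPOOL_ALIGN			/* defconfig override point,
					 * e.g. 128 on TILE-Gx */
#define MEMPOOL_ALIGN CACHE_LINE_SIZE
#endif

#define MEMPOOL_ALIGN_MASK (MEMPOOL_ALIGN - 1)

/* round an object size up to the configured alignment */
static unsigned align_obj_size(unsigned sz)
{
	return (sz + MEMPOOL_ALIGN_MASK) & ~MEMPOOL_ALIGN_MASK;
}
```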

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_mempool/rte_mempool.c | 16 +++++++++-------
 lib/librte_mempool/rte_mempool.h |  6 ++++++
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 02699a1..8e185c5 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -120,10 +120,10 @@ static unsigned optimize_object_size(unsigned obj_size)
 		nrank = 1;
 
 	/* process new object size */
-	new_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
+	new_obj_size = (obj_size + RTE_MEMPOOL_ALIGN_MASK) / RTE_MEMPOOL_ALIGN;
 	while (get_gcd(new_obj_size, nrank * nchan) != 1)
 		new_obj_size++;
-	return new_obj_size * RTE_CACHE_LINE_SIZE;
+	return new_obj_size * RTE_MEMPOOL_ALIGN;
 }
 
 static void
@@ -267,7 +267,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 #endif
 	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
-			RTE_CACHE_LINE_SIZE);
+			RTE_MEMPOOL_ALIGN);
 
 	/* trailer contains the cookie in debug mode */
 	sz->trailer_size = 0;
@@ -281,9 +281,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
-		sz->trailer_size += ((RTE_CACHE_LINE_SIZE -
-				  (sz->total_size & RTE_CACHE_LINE_MASK)) &
-				 RTE_CACHE_LINE_MASK);
+		sz->trailer_size += ((RTE_MEMPOOL_ALIGN -
+				  (sz->total_size & RTE_MEMPOOL_ALIGN_MASK)) &
+				 RTE_MEMPOOL_ALIGN_MASK);
 	}
 
 	/*
@@ -498,7 +498,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	 * cache-aligned
 	 */
 	private_data_size = (private_data_size +
-			     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);
+			     RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
 
 	if (! rte_eal_has_hugepages()) {
 		/*
@@ -525,6 +525,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	 * enough to hold mempool header and metadata plus mempool objects.
 	 */
 	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 	if (vaddr == NULL)
 		mempool_size += (size_t)objsz.total_size * n;
 
@@ -580,6 +581,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	/* calculate address of the first element for continuous mempool. */
 	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
 		private_data_size;
+	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
 
 	/* populate address translation fields. */
 	mp->pg_num = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6d4ce9a..ee67ce7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -142,6 +142,12 @@ struct rte_mempool_objsz {
 /** Mempool over one chunk of physically continuous memory */
 #define	MEMPOOL_PG_NUM_DEFAULT	1
 
+#ifndef RTE_MEMPOOL_ALIGN
+#define RTE_MEMPOOL_ALIGN	RTE_CACHE_LINE_SIZE
+#endif
+
+#define RTE_MEMPOOL_ALIGN_MASK	(RTE_MEMPOOL_ALIGN - 1)
+
 /**
  * Mempool object header structure
  *
-- 
2.1.2


* [dpdk-dev] [PATCH v5 08/11] tile: add page sizes for TILE-Gx/Mx platforms
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (6 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 07/11] mempool: allow config override on element alignment Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 09/11] tile: initial TILE-Gx support Zhigang Lu
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

This patch adds a few new page sizes that are supported on the TILE-Gx
and TILE-Mx platforms.

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 lib/librte_eal/common/eal_common_memzone.c  |  8 +++++
 lib/librte_eal/common/include/rte_memory.h  | 16 +++++----
 lib/librte_eal/common/include/rte_memzone.h | 50 +++++++++++++++++++----------
 3 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 76bae72..dc39a79 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -284,12 +284,20 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len,
 	const struct rte_memzone *mz = NULL;
 	uint64_t size_mask = 0;
 
+	if (flags & RTE_MEMZONE_256KB)
+		size_mask |= RTE_PGSIZE_256K;
 	if (flags & RTE_MEMZONE_2MB)
 		size_mask |= RTE_PGSIZE_2M;
 	if (flags & RTE_MEMZONE_16MB)
 		size_mask |= RTE_PGSIZE_16M;
+	if (flags & RTE_MEMZONE_256MB)
+		size_mask |= RTE_PGSIZE_256M;
+	if (flags & RTE_MEMZONE_512MB)
+		size_mask |= RTE_PGSIZE_512M;
 	if (flags & RTE_MEMZONE_1GB)
 		size_mask |= RTE_PGSIZE_1G;
+	if (flags & RTE_MEMZONE_4GB)
+		size_mask |= RTE_PGSIZE_4G;
 	if (flags & RTE_MEMZONE_16GB)
 		size_mask |= RTE_PGSIZE_16G;
 	if (!size_mask)
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index d948c0b..1bed415 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -53,12 +53,16 @@ extern "C" {
 #endif
 
 enum rte_page_sizes {
-	RTE_PGSIZE_4K = 1ULL << 12,
-	RTE_PGSIZE_2M = 1ULL << 21,
-	RTE_PGSIZE_1G = 1ULL << 30,
-	RTE_PGSIZE_64K = 1ULL << 16,
-	RTE_PGSIZE_16M = 1ULL << 24,
-	RTE_PGSIZE_16G = 1ULL << 34
+	RTE_PGSIZE_4K    = 1ULL << 12,
+	RTE_PGSIZE_64K   = 1ULL << 16,
+	RTE_PGSIZE_256K  = 1ULL << 18,
+	RTE_PGSIZE_2M    = 1ULL << 21,
+	RTE_PGSIZE_16M   = 1ULL << 24,
+	RTE_PGSIZE_256M  = 1ULL << 28,
+	RTE_PGSIZE_512M  = 1ULL << 29,
+	RTE_PGSIZE_1G    = 1ULL << 30,
+	RTE_PGSIZE_4G    = 1ULL << 32,
+	RTE_PGSIZE_16G   = 1ULL << 34,
 };
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
diff --git a/lib/librte_eal/common/include/rte_memzone.h b/lib/librte_eal/common/include/rte_memzone.h
index ee62680..de5ae55 100644
--- a/lib/librte_eal/common/include/rte_memzone.h
+++ b/lib/librte_eal/common/include/rte_memzone.h
@@ -60,8 +60,12 @@ extern "C" {
 
 #define RTE_MEMZONE_2MB            0x00000001   /**< Use 2MB pages. */
 #define RTE_MEMZONE_1GB            0x00000002   /**< Use 1GB pages. */
-#define RTE_MEMZONE_16MB            0x00000100   /**< Use 16MB pages. */
-#define RTE_MEMZONE_16GB            0x00000200   /**< Use 16GB pages. */
+#define RTE_MEMZONE_16MB           0x00000100   /**< Use 16MB pages. */
+#define RTE_MEMZONE_16GB           0x00000200   /**< Use 16GB pages. */
+#define RTE_MEMZONE_256KB          0x00010000   /**< Use 256KB pages. */
+#define RTE_MEMZONE_256MB          0x00020000   /**< Use 256MB pages. */
+#define RTE_MEMZONE_512MB          0x00040000   /**< Use 512MB pages. */
+#define RTE_MEMZONE_4GB            0x00080000   /**< Use 4GB pages. */
 #define RTE_MEMZONE_SIZE_HINT_ONLY 0x00000004   /**< Use available page size */
 
 /**
@@ -110,11 +114,15 @@ struct rte_memzone {
  *   constraint for the reserved zone.
  * @param flags
  *   The flags parameter is used to request memzones to be
- *   taken from 1GB or 2MB hugepages.
- *   - RTE_MEMZONE_2MB - Reserve from 2MB pages
- *   - RTE_MEMZONE_1GB - Reserve from 1GB pages
- *   - RTE_MEMZONE_16MB - Reserve from 16MB pages
- *   - RTE_MEMZONE_16GB - Reserve from 16GB pages
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
  *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
  *                                  the requested page size is unavailable.
  *                                  If this flag is not set, the function
@@ -157,11 +165,15 @@ const struct rte_memzone *rte_memzone_reserve(const char *name,
  *   constraint for the reserved zone.
  * @param flags
  *   The flags parameter is used to request memzones to be
- *   taken from 1GB or 2MB hugepages.
- *   - RTE_MEMZONE_2MB - Reserve from 2MB pages
- *   - RTE_MEMZONE_1GB - Reserve from 1GB pages
- *   - RTE_MEMZONE_16MB - Reserve from 16MB pages
- *   - RTE_MEMZONE_16GB - Reserve from 16GB pages
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
  *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
  *                                  the requested page size is unavailable.
  *                                  If this flag is not set, the function
@@ -209,11 +221,15 @@ const struct rte_memzone *rte_memzone_reserve_aligned(const char *name,
  *   constraint for the reserved zone.
  * @param flags
  *   The flags parameter is used to request memzones to be
- *   taken from 1GB or 2MB hugepages.
- *   - RTE_MEMZONE_2MB - Reserve from 2MB pages
- *   - RTE_MEMZONE_1GB - Reserve from 1GB pages
- *   - RTE_MEMZONE_16MB - Reserve from 16MB pages
- *   - RTE_MEMZONE_16GB - Reserve from 16GB pages
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
  *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
  *                                  the requested page size is unavailable.
  *                                  If this flag is not set, the function
-- 
2.1.2

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v5 09/11] tile: initial TILE-Gx support.
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (7 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 08/11] tile: add page sizes for TILE-Gx/Mx platforms Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 10/11] tile: Add TILE-Gx mPIPE poll mode driver Zhigang Lu
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

This commit adds support for the TILE-Gx platform, as well as the TILE
CPU architecture.  This architecture port is fairly simple because it
relies on the generic implementations for most architecture-specific code.

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 config/defconfig_tile-tilegx-linuxapp-gcc          | 69 ++++++++++++++++
 .../common/include/arch/tile/rte_atomic.h          | 86 ++++++++++++++++++++
 .../common/include/arch/tile/rte_byteorder.h       | 91 +++++++++++++++++++++
 .../common/include/arch/tile/rte_cpuflags.h        | 85 ++++++++++++++++++++
 .../common/include/arch/tile/rte_cycles.h          | 70 ++++++++++++++++
 .../common/include/arch/tile/rte_memcpy.h          | 93 ++++++++++++++++++++++
 .../common/include/arch/tile/rte_prefetch.h        | 61 ++++++++++++++
 .../common/include/arch/tile/rte_rwlock.h          | 70 ++++++++++++++++
 .../common/include/arch/tile/rte_spinlock.h        | 92 +++++++++++++++++++++
 mk/arch/tile/rte.vars.mk                           | 39 +++++++++
 mk/machine/tilegx/rte.vars.mk                      | 57 +++++++++++++
 11 files changed, 813 insertions(+)
 create mode 100644 config/defconfig_tile-tilegx-linuxapp-gcc
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_atomic.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_byteorder.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_cpuflags.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_cycles.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_memcpy.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_prefetch.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_rwlock.h
 create mode 100644 lib/librte_eal/common/include/arch/tile/rte_spinlock.h
 create mode 100644 mk/arch/tile/rte.vars.mk
 create mode 100644 mk/machine/tilegx/rte.vars.mk

diff --git a/config/defconfig_tile-tilegx-linuxapp-gcc b/config/defconfig_tile-tilegx-linuxapp-gcc
new file mode 100644
index 0000000..4023878
--- /dev/null
+++ b/config/defconfig_tile-tilegx-linuxapp-gcc
@@ -0,0 +1,69 @@
+#   BSD LICENSE
+#
+#   Copyright (C) EZchip Semiconductor 2015.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of EZchip Semiconductor nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "common_linuxapp"
+
+CONFIG_RTE_MACHINE="tilegx"
+
+CONFIG_RTE_ARCH="tile"
+CONFIG_RTE_ARCH_TILE=y
+CONFIG_RTE_ARCH_64=y
+CONFIG_RTE_ARCH_STRICT_ALIGN=y
+CONFIG_RTE_FORCE_INTRINSICS=y
+
+CONFIG_RTE_TOOLCHAIN="gcc"
+CONFIG_RTE_TOOLCHAIN_GCC=y
+
+# Disable things that we don't support or need
+CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
+CONFIG_RTE_EAL_IGB_UIO=n
+CONFIG_RTE_EAL_VFIO=n
+CONFIG_RTE_LIBRTE_KNI=n
+CONFIG_RTE_LIBRTE_XEN_DOM0=n
+CONFIG_RTE_LIBRTE_IGB_PMD=n
+CONFIG_RTE_LIBRTE_EM_PMD=n
+CONFIG_RTE_LIBRTE_IXGBE_PMD=n
+CONFIG_RTE_LIBRTE_I40E_PMD=n
+CONFIG_RTE_LIBRTE_FM10K_PMD=n
+CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
+CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
+CONFIG_RTE_LIBRTE_ENIC_PMD=n
+
+# The following libraries are not available on the tile architecture, so
+# they are turned off.
+CONFIG_RTE_LIBRTE_LPM=n
+CONFIG_RTE_LIBRTE_ACL=n
+CONFIG_RTE_LIBRTE_SCHED=n
+CONFIG_RTE_LIBRTE_PORT=n
+CONFIG_RTE_LIBRTE_TABLE=n
+CONFIG_RTE_LIBRTE_PIPELINE=n
+
+# Enable and override things that we need
+CONFIG_RTE_MEMPOOL_ALIGN=128
diff --git a/lib/librte_eal/common/include/arch/tile/rte_atomic.h b/lib/librte_eal/common/include/arch/tile/rte_atomic.h
new file mode 100644
index 0000000..3dc8eb8
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_atomic.h
@@ -0,0 +1,86 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_ATOMIC_TILE_H_
+#define _RTE_ATOMIC_TILE_H_
+
+#ifndef RTE_FORCE_INTRINSICS
+#  error Platform must be built with CONFIG_RTE_FORCE_INTRINSICS
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_atomic.h"
+
+/**
+ * General memory barrier.
+ *
+ * Guarantees that the LOAD and STORE operations generated before the
+ * barrier occur before the LOAD and STORE operations generated after.
+ * This function is architecture dependent.
+ */
+static inline void rte_mb(void)
+{
+	__sync_synchronize();
+}
+
+/**
+ * Write memory barrier.
+ *
+ * Guarantees that the STORE operations generated before the barrier
+ * occur before the STORE operations generated after.
+ * This function is architecture dependent.
+ */
+static inline void rte_wmb(void)
+{
+	__sync_synchronize();
+}
+
+/**
+ * Read memory barrier.
+ *
+ * Guarantees that the LOAD operations generated before the barrier
+ * occur before the LOAD operations generated after.
+ * This function is architecture dependent.
+ */
+static inline void rte_rmb(void)
+{
+	__sync_synchronize();
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ATOMIC_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_byteorder.h b/lib/librte_eal/common/include/arch/tile/rte_byteorder.h
new file mode 100644
index 0000000..7239e43
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_byteorder.h
@@ -0,0 +1,91 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_BYTEORDER_TILE_H_
+#define _RTE_BYTEORDER_TILE_H_
+
+#ifndef RTE_FORCE_INTRINSICS
+#  error Platform must be built with CONFIG_RTE_FORCE_INTRINSICS
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_byteorder.h"
+
+#if !(__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8))
+#define rte_bswap16(x) rte_constant_bswap16(x)
+#endif
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+#define rte_cpu_to_le_16(x) (x)
+#define rte_cpu_to_le_32(x) (x)
+#define rte_cpu_to_le_64(x) (x)
+
+#define rte_cpu_to_be_16(x) rte_bswap16(x)
+#define rte_cpu_to_be_32(x) rte_bswap32(x)
+#define rte_cpu_to_be_64(x) rte_bswap64(x)
+
+#define rte_le_to_cpu_16(x) (x)
+#define rte_le_to_cpu_32(x) (x)
+#define rte_le_to_cpu_64(x) (x)
+
+#define rte_be_to_cpu_16(x) rte_bswap16(x)
+#define rte_be_to_cpu_32(x) rte_bswap32(x)
+#define rte_be_to_cpu_64(x) rte_bswap64(x)
+
+#else /* RTE_BIG_ENDIAN */
+
+#define rte_cpu_to_le_16(x) rte_bswap16(x)
+#define rte_cpu_to_le_32(x) rte_bswap32(x)
+#define rte_cpu_to_le_64(x) rte_bswap64(x)
+
+#define rte_cpu_to_be_16(x) (x)
+#define rte_cpu_to_be_32(x) (x)
+#define rte_cpu_to_be_64(x) (x)
+
+#define rte_le_to_cpu_16(x) rte_bswap16(x)
+#define rte_le_to_cpu_32(x) rte_bswap32(x)
+#define rte_le_to_cpu_64(x) rte_bswap64(x)
+
+#define rte_be_to_cpu_16(x) (x)
+#define rte_be_to_cpu_32(x) (x)
+#define rte_be_to_cpu_64(x) (x)
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BYTEORDER_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_cpuflags.h b/lib/librte_eal/common/include/arch/tile/rte_cpuflags.h
new file mode 100644
index 0000000..08aa957
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_cpuflags.h
@@ -0,0 +1,85 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_CPUFLAGS_TILE_H_
+#define _RTE_CPUFLAGS_TILE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <elf.h>
+#include <fcntl.h>
+#include <assert.h>
+#include <unistd.h>
+
+#include "generic/rte_cpuflags.h"
+
+/* software based registers */
+enum cpu_register_t {
+	REG_DUMMY = 0
+};
+
+/**
+ * Enumeration of all CPU features supported
+ */
+enum rte_cpu_flag_t {
+	RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
+};
+
+static const struct feature_entry cpu_feature_table[] = {
+};
+
+/*
+ * Stub for reading CPU features; tile exposes none, so this is a no-op.
+ */
+static inline void
+rte_cpu_get_features(__attribute__((unused)) uint32_t leaf,
+		     __attribute__((unused)) uint32_t subleaf,
+		     __attribute__((unused)) cpuid_registers_t out)
+{
+}
+
+/*
+ * Check whether a particular flag is available on the current machine.
+ */
+static inline int
+rte_cpu_get_flag_enabled(__attribute__((unused)) enum rte_cpu_flag_t feature)
+{
+	return -ENOENT;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CPUFLAGS_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_cycles.h b/lib/librte_eal/common/include/arch/tile/rte_cycles.h
new file mode 100644
index 0000000..0b2200a
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_cycles.h
@@ -0,0 +1,70 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_CYCLES_TILE_H_
+#define _RTE_CYCLES_TILE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <arch/cycle.h>
+
+#include "generic/rte_cycles.h"
+
+/**
+ * Read the time base register.
+ *
+ * @return
+ *   The time base for this lcore.
+ */
+static inline uint64_t
+rte_rdtsc(void)
+{
+	return get_cycle_count();
+}
+
+static inline uint64_t
+rte_rdtsc_precise(void)
+{
+	rte_mb();
+	return rte_rdtsc();
+}
+
+static inline uint64_t
+rte_get_tsc_cycles(void) { return rte_rdtsc(); }
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CYCLES_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_memcpy.h b/lib/librte_eal/common/include/arch/tile/rte_memcpy.h
new file mode 100644
index 0000000..9b5b37e
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_memcpy.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_MEMCPY_TILE_H_
+#define _RTE_MEMCPY_TILE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <string.h>
+
+#include "generic/rte_memcpy.h"
+
+static inline void
+rte_mov16(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 16);
+}
+
+static inline void
+rte_mov32(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 32);
+}
+
+static inline void
+rte_mov48(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 48);
+}
+
+static inline void
+rte_mov64(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 64);
+}
+
+static inline void
+rte_mov128(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 128);
+}
+
+static inline void
+rte_mov256(uint8_t *dst, const uint8_t *src)
+{
+	memcpy(dst, src, 256);
+}
+
+#define rte_memcpy(d, s, n)	memcpy((d), (s), (n))
+
+static inline void *
+rte_memcpy_func(void *dst, const void *src, size_t n)
+{
+	return memcpy(dst, src, n);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MEMCPY_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_prefetch.h b/lib/librte_eal/common/include/arch/tile/rte_prefetch.h
new file mode 100644
index 0000000..f02d9fa
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_prefetch.h
@@ -0,0 +1,61 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_PREFETCH_TILE_H_
+#define _RTE_PREFETCH_TILE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_prefetch.h"
+
+static inline void rte_prefetch0(volatile void *p)
+{
+	__builtin_prefetch((const void *)(uintptr_t)p, 0, 3);
+}
+
+static inline void rte_prefetch1(volatile void *p)
+{
+	__builtin_prefetch((const void *)(uintptr_t)p, 0, 2);
+}
+
+static inline void rte_prefetch2(volatile void *p)
+{
+	__builtin_prefetch((const void *)(uintptr_t)p, 0, 1);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PREFETCH_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_rwlock.h b/lib/librte_eal/common/include/arch/tile/rte_rwlock.h
new file mode 100644
index 0000000..8f67a19
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_rwlock.h
@@ -0,0 +1,70 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_RWLOCK_TILE_H_
+#define _RTE_RWLOCK_TILE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+	rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+	rte_rwlock_read_unlock(rwl);
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+	rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+	rte_rwlock_write_unlock(rwl);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_TILE_H_ */
diff --git a/lib/librte_eal/common/include/arch/tile/rte_spinlock.h b/lib/librte_eal/common/include/arch/tile/rte_spinlock.h
new file mode 100644
index 0000000..e91f99e
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/tile/rte_spinlock.h
@@ -0,0 +1,92 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) EZchip Semiconductor Ltd. 2015.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef _RTE_SPINLOCK_TILE_H_
+#define _RTE_SPINLOCK_TILE_H_
+
+#ifndef RTE_FORCE_INTRINSICS
+#  error Platform must be built with CONFIG_RTE_FORCE_INTRINSICS
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include "generic/rte_spinlock.h"
+
+static inline int rte_tm_supported(void)
+{
+	return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+	rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+	return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+	rte_spinlock_unlock(sl);
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+	rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+	rte_spinlock_recursive_unlock(slr);
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+	return rte_spinlock_recursive_trylock(slr);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SPINLOCK_TILE_H_ */
diff --git a/mk/arch/tile/rte.vars.mk b/mk/arch/tile/rte.vars.mk
new file mode 100644
index 0000000..b518986
--- /dev/null
+++ b/mk/arch/tile/rte.vars.mk
@@ -0,0 +1,39 @@
+#   BSD LICENSE
+#
+#   Copyright (C) EZchip Semiconductor Ltd. 2015.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of EZchip Semiconductor nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ARCH  ?= tile
+CROSS ?= tile-
+
+CPU_CFLAGS  ?=
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?=
+
+export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
diff --git a/mk/machine/tilegx/rte.vars.mk b/mk/machine/tilegx/rte.vars.mk
new file mode 100644
index 0000000..c8256f1
--- /dev/null
+++ b/mk/machine/tilegx/rte.vars.mk
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright (C) EZchip Semiconductor Ltd. 2015.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of EZchip Semiconductor nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+#   - can define ARCH variable (overridden by cmdline value)
+#   - can define CROSS variable (overridden by cmdline value)
+#   - define MACHINE_CFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+#   - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+#   - can define CPU_CFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+#     overrides the one defined in arch.
+#   - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+
+MACHINE_CFLAGS =
-- 
2.1.2

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v5 10/11] tile: Add TILE-Gx mPIPE poll mode driver.
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (8 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 09/11] tile: initial TILE-Gx support Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 11/11] maintainers: claim responsibility for TILE-Gx platform Zhigang Lu
  2015-07-13 14:17 ` [dpdk-dev] [PATCH v5 00/11] Introducing the " Thomas Monjalon
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

This commit adds a poll mode driver for the mPIPE hardware present on
TILE-Gx SoCs.

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 config/defconfig_tile-tilegx-linuxapp-gcc |    1 +
 drivers/net/Makefile                      |    1 +
 drivers/net/mpipe/Makefile                |   46 +
 drivers/net/mpipe/mpipe_tilegx.c          | 1637 +++++++++++++++++++++++++++++
 mk/rte.app.mk                             |    1 +
 5 files changed, 1686 insertions(+)
 create mode 100644 drivers/net/mpipe/Makefile
 create mode 100644 drivers/net/mpipe/mpipe_tilegx.c

diff --git a/config/defconfig_tile-tilegx-linuxapp-gcc b/config/defconfig_tile-tilegx-linuxapp-gcc
index 4023878..e603d1b 100644
--- a/config/defconfig_tile-tilegx-linuxapp-gcc
+++ b/config/defconfig_tile-tilegx-linuxapp-gcc
@@ -66,4 +66,5 @@ CONFIG_RTE_LIBRTE_TABLE=n
 CONFIG_RTE_LIBRTE_PIPELINE=n
 
 # Enable and override things that we need
+CONFIG_RTE_LIBRTE_MPIPE_PMD=y
 CONFIG_RTE_MEMPOOL_ALIGN=128
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 644cacb..ee77480 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
 DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
+DIRS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += mpipe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
diff --git a/drivers/net/mpipe/Makefile b/drivers/net/mpipe/Makefile
new file mode 100644
index 0000000..552b303
--- /dev/null
+++ b/drivers/net/mpipe/Makefile
@@ -0,0 +1,46 @@
+#
+# Copyright 2015 EZchip Semiconductor Ltd.  All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_mpipe.a
+
+CFLAGS += $(WERROR_FLAGS) -O3
+
+EXPORT_MAP := rte_pmd_mpipe_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += mpipe_tilegx.c
+
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += lib/librte_net lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/mpipe/mpipe_tilegx.c b/drivers/net/mpipe/mpipe_tilegx.c
new file mode 100644
index 0000000..e222443
--- /dev/null
+++ b/drivers/net/mpipe/mpipe_tilegx.c
@@ -0,0 +1,1637 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 EZchip Semiconductor Ltd. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of EZchip Semiconductor nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+
+#include <rte_eal.h>
+#include <rte_dev.h>
+#include <rte_eal_memconfig.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+
+#include <arch/mpipe_xaui_def.h>
+#include <arch/mpipe_gbe_def.h>
+
+#include <gxio/mpipe.h>
+
+#ifdef RTE_LIBRTE_MPIPE_PMD_DEBUG
+#define PMD_DEBUG_RX(...)	RTE_LOG(DEBUG, PMD, __VA_ARGS__)
+#define PMD_DEBUG_TX(...)	RTE_LOG(DEBUG, PMD, __VA_ARGS__)
+#else
+#define PMD_DEBUG_RX(...)
+#define PMD_DEBUG_TX(...)
+#endif
+
+#define MPIPE_MAX_CHANNELS		128
+#define MPIPE_TX_MAX_QUEUES		128
+#define MPIPE_RX_MAX_QUEUES		16
+#define MPIPE_TX_DESCS			512
+#define MPIPE_RX_BUCKETS		256
+#define MPIPE_RX_STACK_SIZE		65536
+#define MPIPE_RX_IP_ALIGN		2
+#define MPIPE_BSM_ALIGN			128
+
+#define MPIPE_LINK_UPDATE_TIMEOUT	10	/*  s */
+#define MPIPE_LINK_UPDATE_INTERVAL	100000	/* us */
+
+struct mpipe_channel_config {
+	int enable;
+	int first_bucket;
+	int num_buckets;
+	int head_room;
+	gxio_mpipe_rules_stacks_t stacks;
+};
+
+struct mpipe_context {
+	rte_spinlock_t        lock;
+	gxio_mpipe_context_t  context;
+	struct mpipe_channel_config channels[MPIPE_MAX_CHANNELS];
+};
+
+static struct mpipe_context mpipe_contexts[GXIO_MPIPE_INSTANCE_MAX];
+static int mpipe_instances;
+
+/* Per queue statistics. */
+struct mpipe_queue_stats {
+	uint64_t packets, bytes, errors, nomem;
+};
+
+/* Common tx/rx queue fields. */
+struct mpipe_queue {
+	struct mpipe_dev_priv *priv;	/* "priv" data of its device. */
+	uint16_t nb_desc;		/* Number of descriptors. */
+	uint16_t port_id;		/* Device index. */
+	uint16_t stat_idx;		/* Queue stats index. */
+	uint8_t queue_idx;		/* Queue index. */
+	uint8_t link_status;		/* 0 = link down. */
+	struct mpipe_queue_stats stats;	/* Stat data for the queue. */
+};
+
+/* Transmit queue description. */
+struct mpipe_tx_queue {
+	struct mpipe_queue q;		/* Common stuff. */
+};
+
+/* Receive queue description. */
+struct mpipe_rx_queue {
+	struct mpipe_queue q;		/* Common stuff. */
+	gxio_mpipe_iqueue_t iqueue;	/* mPIPE iqueue. */
+	gxio_mpipe_idesc_t *next_desc;	/* Next idesc to process. */
+	int avail_descs;		/* Number of available descs. */
+	void *rx_ring_mem;		/* DMA ring memory. */
+};
+
+struct mpipe_dev_priv {
+	gxio_mpipe_context_t *context;	/* mPIPE context. */
+	gxio_mpipe_link_t link;		/* mPIPE link for the device. */
+	gxio_mpipe_equeue_t equeue;	/* mPIPE equeue. */
+	unsigned equeue_size;		/* mPIPE equeue desc count. */
+	int instance;			/* mPIPE instance. */
+	int ering;			/* mPIPE eDMA ring. */
+	int stack;			/* mPIPE buffer stack. */
+	int channel;			/* Device channel. */
+	int port_id;			/* DPDK port index. */
+	struct rte_eth_dev *eth_dev;	/* DPDK device. */
+	struct rte_pci_device pci_dev;	/* PCI device data. */
+	struct rte_mbuf **tx_comps;	/* TX completion array. */
+	struct rte_mempool *rx_mpool;	/* mpool used by the rx queues. */
+	unsigned rx_offset;		/* Receive head room. */
+	unsigned rx_size_code;		/* mPIPE rx buffer size code. */
+	unsigned rx_buffers;		/* Receive buffers on stack. */
+	int is_xaui:1,			/* Is this an xgbe or gbe? */
+	    initialized:1,		/* Initialized port? */
+	    running:1;			/* Running port? */
+	struct ether_addr mac_addr;	/* MAC address. */
+	unsigned nb_rx_queues;		/* Configured rx queues. */
+	unsigned nb_tx_queues;		/* Configured tx queues. */
+	int first_bucket;		/* mPIPE bucket start index. */
+	int first_ring;			/* mPIPE notif ring start index. */
+	int notif_group;		/* mPIPE notif group. */
+	rte_atomic32_t dp_count;	/* Active datapath thread count. */
+	int tx_stat_mapping[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	int rx_stat_mapping[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+};
+
+#define mpipe_priv(dev)			\
+	((struct mpipe_dev_priv *)(dev)->data->dev_private)
+
+#define mpipe_name(priv)		\
+	((priv)->eth_dev->data->name)
+
+#define mpipe_rx_queue(priv, n)		\
+	((struct mpipe_rx_queue *)(priv)->eth_dev->data->rx_queues[n])
+
+#define mpipe_tx_queue(priv, n)		\
+	((struct mpipe_tx_queue *)(priv)->eth_dev->data->tx_queues[n])
+
+static void
+mpipe_xmit_flush(struct mpipe_dev_priv *priv);
+
+static void
+mpipe_recv_flush(struct mpipe_dev_priv *priv);
+
+static int mpipe_equeue_sizes[] = {
+	[GXIO_MPIPE_EQUEUE_ENTRY_512]	= 512,
+	[GXIO_MPIPE_EQUEUE_ENTRY_2K]	= 2048,
+	[GXIO_MPIPE_EQUEUE_ENTRY_8K]	= 8192,
+	[GXIO_MPIPE_EQUEUE_ENTRY_64K]	= 65536,
+};
+
+static int mpipe_iqueue_sizes[] = {
+	[GXIO_MPIPE_IQUEUE_ENTRY_128]	= 128,
+	[GXIO_MPIPE_IQUEUE_ENTRY_512]	= 512,
+	[GXIO_MPIPE_IQUEUE_ENTRY_2K]	= 2048,
+	[GXIO_MPIPE_IQUEUE_ENTRY_64K]	= 65536,
+};
+
+static int mpipe_buffer_sizes[] = {
+	[GXIO_MPIPE_BUFFER_SIZE_128]	= 128,
+	[GXIO_MPIPE_BUFFER_SIZE_256]	= 256,
+	[GXIO_MPIPE_BUFFER_SIZE_512]	= 512,
+	[GXIO_MPIPE_BUFFER_SIZE_1024]	= 1024,
+	[GXIO_MPIPE_BUFFER_SIZE_1664]	= 1664,
+	[GXIO_MPIPE_BUFFER_SIZE_4096]	= 4096,
+	[GXIO_MPIPE_BUFFER_SIZE_10368]	= 10368,
+	[GXIO_MPIPE_BUFFER_SIZE_16384]	= 16384,
+};
+
+static gxio_mpipe_context_t *
+mpipe_context(int instance)
+{
+	if (instance < 0 || instance >= mpipe_instances)
+		return NULL;
+	return &mpipe_contexts[instance].context;
+}
+
+static int mpipe_channel_config(int instance, int channel,
+				struct mpipe_channel_config *config)
+{
+	struct mpipe_channel_config *data;
+	struct mpipe_context *context;
+	gxio_mpipe_rules_t rules;
+	int idx, rc = 0;
+
+	if (instance < 0 || instance >= mpipe_instances ||
+	    channel < 0 || channel >= MPIPE_MAX_CHANNELS)
+		return -EINVAL;
+
+	context = &mpipe_contexts[instance];
+
+	rte_spinlock_lock(&context->lock);
+
+	gxio_mpipe_rules_init(&rules, &context->context);
+
+	for (idx = 0; idx < MPIPE_MAX_CHANNELS; idx++) {
+		data = (channel == idx) ? config : &context->channels[idx];
+
+		if (!data->enable)
+			continue;
+
+		rc = gxio_mpipe_rules_begin(&rules, data->first_bucket,
+					    data->num_buckets, &data->stacks);
+		if (rc < 0) {
+			goto done;
+		}
+
+		rc = gxio_mpipe_rules_add_channel(&rules, idx);
+		if (rc < 0) {
+			goto done;
+		}
+
+		rc = gxio_mpipe_rules_set_headroom(&rules, data->head_room);
+		if (rc < 0) {
+			goto done;
+		}
+	}
+
+	rc = gxio_mpipe_rules_commit(&rules);
+	if (rc == 0) {
+		memcpy(&context->channels[channel], config, sizeof(*config));
+	}
+
+done:
+	rte_spinlock_unlock(&context->lock);
+
+	return rc;
+}
+
+static int
+mpipe_get_size_index(int *array, int count, int size,
+		     bool roundup)
+{
+	int i, last = -1;
+
+	for (i = 0; i < count && array[i] < size; i++) {
+		if (array[i])
+			last = i;
+	}
+
+	if (roundup)
+		return i < count ? (int)i : -ENOENT;
+	else
+		return last >= 0 ? last : -ENOENT;
+}
+
+static int
+mpipe_calc_size(int *array, int count, int size)
+{
+	int index = mpipe_get_size_index(array, count, size, 1);
+	return index < 0 ? index : array[index];
+}
+
+static int mpipe_equeue_size(int size)
+{
+	int result;
+	result = mpipe_calc_size(mpipe_equeue_sizes,
+				 RTE_DIM(mpipe_equeue_sizes), size);
+	return result;
+}
+
+static int mpipe_iqueue_size(int size)
+{
+	int result;
+	result = mpipe_calc_size(mpipe_iqueue_sizes,
+				 RTE_DIM(mpipe_iqueue_sizes), size);
+	return result;
+}
+
+static int mpipe_buffer_size_index(int size)
+{
+	int result;
+	result = mpipe_get_size_index(mpipe_buffer_sizes,
+				      RTE_DIM(mpipe_buffer_sizes), size, 0);
+	return result;
+}
+
+static inline int
+mpipe_dev_atomic_read_link_status(struct rte_eth_dev *dev,
+				  struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &(dev->data->dev_link);
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+mpipe_dev_atomic_write_link_status(struct rte_eth_dev *dev,
+				   struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &(dev->data->dev_link);
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static void
+mpipe_infos_get(struct rte_eth_dev *dev __rte_unused,
+		struct rte_eth_dev_info *dev_info)
+{
+	dev_info->min_rx_bufsize  = 128;
+	dev_info->max_rx_pktlen   = 1518;
+	dev_info->max_tx_queues   = MPIPE_TX_MAX_QUEUES;
+	dev_info->max_rx_queues   = MPIPE_RX_MAX_QUEUES;
+	dev_info->max_mac_addrs   = 1;
+	dev_info->rx_offload_capa = 0;
+	dev_info->tx_offload_capa = 0;
+}
+
+static int
+mpipe_configure(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+
+	if (dev->data->nb_tx_queues > MPIPE_TX_MAX_QUEUES) {
+		RTE_LOG(ERR, PMD, "%s: Too many tx queues: %d > %d\n",
+			mpipe_name(priv), dev->data->nb_tx_queues,
+			MPIPE_TX_MAX_QUEUES);
+		return -EINVAL;
+	}
+	priv->nb_tx_queues = dev->data->nb_tx_queues;
+
+	if (dev->data->nb_rx_queues > MPIPE_RX_MAX_QUEUES) {
+		RTE_LOG(ERR, PMD, "%s: Too many rx queues: %d > %d\n",
+			mpipe_name(priv), dev->data->nb_rx_queues,
+			MPIPE_RX_MAX_QUEUES);
+		return -EINVAL;
+	}
+	priv->nb_rx_queues = dev->data->nb_rx_queues;
+
+	return 0;
+}
+
+static inline int
+mpipe_link_compare(struct rte_eth_link *link1,
+		   struct rte_eth_link *link2)
+{
+	return ((*(uint64_t *)link1 == *(uint64_t *)link2)
+		? -1 : 0);
+}
+
+static int
+mpipe_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	struct rte_eth_link old, new;
+	int64_t state, speed;
+	int count, rc;
+
+	memset(&old, 0, sizeof(old));
+	memset(&new, 0, sizeof(new));
+	mpipe_dev_atomic_read_link_status(dev, &old);
+
+	for (count = 0, rc = 0; count < MPIPE_LINK_UPDATE_TIMEOUT; count++) {
+		if (!priv->initialized)
+			break;
+
+		state = gxio_mpipe_link_get_attr(&priv->link,
+						 GXIO_MPIPE_LINK_CURRENT_STATE);
+		if (state < 0)
+			break;
+
+		speed = state & GXIO_MPIPE_LINK_SPEED_MASK;
+
+		if (speed == GXIO_MPIPE_LINK_1G) {
+			new.link_speed = ETH_LINK_SPEED_1000;
+			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_status = 1;
+		} else if (speed == GXIO_MPIPE_LINK_10G) {
+			new.link_speed = ETH_LINK_SPEED_10000;
+			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_status = 1;
+		}
+
+		rc = mpipe_link_compare(&old, &new);
+		if (rc == 0 || !wait_to_complete)
+			break;
+
+		rte_delay_us(MPIPE_LINK_UPDATE_INTERVAL);
+	}
+
+	mpipe_dev_atomic_write_link_status(dev, &new);
+	return rc;
+}
+
+static int
+mpipe_set_link(struct rte_eth_dev *dev, int up)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	int rc;
+
+	rc = gxio_mpipe_link_set_attr(&priv->link,
+				      GXIO_MPIPE_LINK_DESIRED_STATE,
+				      up ? GXIO_MPIPE_LINK_ANYSPEED : 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to set link %s.\n",
+			mpipe_name(priv), up ? "up" : "down");
+	} else {
+		mpipe_link_update(dev, 0);
+	}
+
+	return rc;
+}
+
+static int
+mpipe_set_link_up(struct rte_eth_dev *dev)
+{
+	return mpipe_set_link(dev, 1);
+}
+
+static int
+mpipe_set_link_down(struct rte_eth_dev *dev)
+{
+	return mpipe_set_link(dev, 0);
+}
+
+static inline void
+mpipe_dp_enter(struct mpipe_dev_priv *priv)
+{
+	__insn_mtspr(SPR_DSTREAM_PF, 0);
+	rte_atomic32_inc(&priv->dp_count);
+}
+
+static inline void
+mpipe_dp_exit(struct mpipe_dev_priv *priv)
+{
+	rte_atomic32_dec(&priv->dp_count);
+}
+
+static inline void
+mpipe_dp_wait(struct mpipe_dev_priv *priv)
+{
+	while (rte_atomic32_read(&priv->dp_count) != 0) {
+		rte_pause();
+	}
+}
+
+static inline struct rte_mbuf *
+mpipe_recv_mbuf(struct mpipe_dev_priv *priv, gxio_mpipe_idesc_t *idesc,
+		int in_port)
+{
+	void *va = gxio_mpipe_idesc_get_va(idesc);
+	uint16_t size = gxio_mpipe_idesc_get_xfer_size(idesc);
+	struct rte_mbuf *mbuf = RTE_PTR_SUB(va, priv->rx_offset);
+
+	rte_pktmbuf_reset(mbuf);
+	mbuf->data_off = (uintptr_t)va - (uintptr_t)mbuf->buf_addr;
+	mbuf->port     = in_port;
+	mbuf->data_len = size;
+	mbuf->pkt_len  = size;
+	mbuf->hash.rss = gxio_mpipe_idesc_get_flow_hash(idesc);
+
+	PMD_DEBUG_RX("%s: RX mbuf %p, buffer %p, buf_addr %p, size %d\n",
+		     mpipe_name(priv), mbuf, va, mbuf->buf_addr, size);
+
+	return mbuf;
+}
+
+static inline void
+mpipe_recv_push(struct mpipe_dev_priv *priv, struct rte_mbuf *mbuf)
+{
+	const int offset = RTE_PKTMBUF_HEADROOM + MPIPE_RX_IP_ALIGN;
+	void *buf_addr = RTE_PTR_ADD(mbuf->buf_addr, offset);
+
+	gxio_mpipe_push_buffer(priv->context, priv->stack, buf_addr);
+	PMD_DEBUG_RX("%s: Pushed mbuf %p, buffer %p into stack %d\n",
+		     mpipe_name(priv), mbuf, buf_addr, priv->stack);
+}
+
+static inline void
+mpipe_recv_fill_stack(struct mpipe_dev_priv *priv, int count)
+{
+	struct rte_mbuf *mbuf;
+	int i;
+
+	for (i = 0; i < count; i++) {
+		mbuf = __rte_mbuf_raw_alloc(priv->rx_mpool);
+		if (!mbuf)
+			break;
+		mpipe_recv_push(priv, mbuf);
+	}
+
+	priv->rx_buffers += count;
+	PMD_DEBUG_RX("%s: Filled %d/%d buffers\n", mpipe_name(priv), i, count);
+}
+
+static inline void
+mpipe_recv_flush_stack(struct mpipe_dev_priv *priv)
+{
+	const int offset = priv->rx_offset & ~RTE_MEMPOOL_ALIGN_MASK;
+	uint8_t in_port = priv->port_id;
+	struct rte_mbuf *mbuf;
+	unsigned count;
+	void *va;
+
+	for (count = 0; count < priv->rx_buffers; count++) {
+		va = gxio_mpipe_pop_buffer(priv->context, priv->stack);
+		if (!va)
+			break;
+		mbuf = RTE_PTR_SUB(va, offset);
+
+		PMD_DEBUG_RX("%s: Flushing mbuf %p, va %p\n",
+			     mpipe_name(priv), mbuf, va);
+
+		mbuf->data_off    = (uintptr_t)va - (uintptr_t)mbuf->buf_addr;
+		mbuf->refcnt      = 1;
+		mbuf->nb_segs     = 1;
+		mbuf->port        = in_port;
+		mbuf->packet_type = 0;
+		mbuf->data_len    = 0;
+		mbuf->pkt_len     = 0;
+
+		__rte_mbuf_raw_free(mbuf);
+	}
+
+	PMD_DEBUG_RX("%s: Returned %d/%d buffers\n",
+		     mpipe_name(priv), count, priv->rx_buffers);
+	priv->rx_buffers -= count;
+}
+
+static void
+mpipe_register_segment(struct mpipe_dev_priv *priv, const struct rte_memseg *ms)
+{
+	size_t size = ms->hugepage_sz;
+	uint8_t *addr, *end;
+	int rc = 0;
+
+	for (addr = ms->addr, end = addr + ms->len; addr < end; addr += size) {
+		rc = gxio_mpipe_register_page(priv->context, priv->stack, addr,
+					      size, 0);
+		if (rc < 0)
+			break;
+	}
+
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Could not register memseg @%p, %d.\n",
+			mpipe_name(priv), ms->addr, rc);
+	} else {
+		RTE_LOG(DEBUG, PMD, "%s: Registered segment %p - %p\n",
+			mpipe_name(priv), ms->addr,
+			RTE_PTR_ADD(ms->addr, ms->len - 1));
+	}
+}
+
+static int
+mpipe_recv_init(struct mpipe_dev_priv *priv)
+{
+	const struct rte_memseg *seg = rte_eal_get_physmem_layout();
+	size_t stack_size;
+	void *stack_mem;
+	int rc;
+
+	if (!priv->rx_mpool) {
+		RTE_LOG(ERR, PMD, "%s: No buffer pool.\n",
+			mpipe_name(priv));
+		return -ENODEV;
+	}
+
+	/* Allocate one NotifRing for each queue. */
+	rc = gxio_mpipe_alloc_notif_rings(priv->context, MPIPE_RX_MAX_QUEUES,
+					  0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate notif rings.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->first_ring = rc;
+
+	/* Allocate a NotifGroup. */
+	rc = gxio_mpipe_alloc_notif_groups(priv->context, 1, 0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate rx group.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->notif_group = rc;
+
+	/* Allocate required buckets. */
+	rc = gxio_mpipe_alloc_buckets(priv->context, MPIPE_RX_BUCKETS, 0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate buckets.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->first_bucket = rc;
+
+	rc = gxio_mpipe_alloc_buffer_stacks(priv->context, 1, 0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate buffer stack.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->stack = rc;
+
+	while (seg && seg->addr)
+		mpipe_register_segment(priv, seg++);
+
+	stack_size = gxio_mpipe_calc_buffer_stack_bytes(MPIPE_RX_STACK_SIZE);
+	stack_mem = rte_zmalloc(NULL, stack_size, 65536);
+	if (!stack_mem) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate buffer memory.\n",
+			mpipe_name(priv));
+		return -ENOMEM;
+	} else {
+		RTE_LOG(DEBUG, PMD, "%s: Buffer stack memory %p - %p.\n",
+			mpipe_name(priv), stack_mem,
+			RTE_PTR_ADD(stack_mem, stack_size - 1));
+	}
+
+	rc = gxio_mpipe_init_buffer_stack(priv->context, priv->stack,
+					  priv->rx_size_code, stack_mem,
+					  stack_size, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to initialize buffer stack.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+mpipe_xmit_init(struct mpipe_dev_priv *priv)
+{
+	size_t ring_size;
+	void *ring_mem;
+	int rc;
+
+	/* Allocate eDMA ring. */
+	rc = gxio_mpipe_alloc_edma_rings(priv->context, 1, 0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to alloc tx ring.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->ering = rc;
+
+	rc = mpipe_equeue_size(MPIPE_TX_DESCS);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Cannot allocate %d equeue descs.\n",
+			mpipe_name(priv), (int)MPIPE_TX_DESCS);
+		return -ENOMEM;
+	}
+	priv->equeue_size = rc;
+
+	/* Initialize completion array. */
+	ring_size = sizeof(priv->tx_comps[0]) * priv->equeue_size;
+	priv->tx_comps = rte_zmalloc(NULL, ring_size, RTE_CACHE_LINE_SIZE);
+	if (!priv->tx_comps) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate egress comps.\n",
+			mpipe_name(priv));
+		return -ENOMEM;
+	}
+
+	/* Allocate eDMA ring memory. */
+	ring_size = sizeof(gxio_mpipe_edesc_t) * priv->equeue_size;
+	ring_mem = rte_zmalloc(NULL, ring_size, ring_size);
+	if (!ring_mem) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate egress descs.\n",
+			mpipe_name(priv));
+		return -ENOMEM;
+	} else {
+		RTE_LOG(DEBUG, PMD, "%s: eDMA ring memory %p - %p.\n",
+			mpipe_name(priv), ring_mem,
+			RTE_PTR_ADD(ring_mem, ring_size - 1));
+	}
+
+	/* Initialize eDMA ring. */
+	rc = gxio_mpipe_equeue_init(&priv->equeue, priv->context, priv->ering,
+				    priv->channel, ring_mem, ring_size, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to init equeue\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+mpipe_link_init(struct mpipe_dev_priv *priv)
+{
+	int rc;
+
+	/* Open the link. */
+	rc = gxio_mpipe_link_open(&priv->link, priv->context,
+				  mpipe_name(priv), GXIO_MPIPE_LINK_AUTO_NONE);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to open link.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	/* Get the channel index. */
+	rc = gxio_mpipe_link_channel(&priv->link);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Bad channel\n",
+			mpipe_name(priv));
+		return rc;
+	}
+	priv->channel = rc;
+
+	return 0;
+}
+
+static int
+mpipe_init(struct mpipe_dev_priv *priv)
+{
+	int rc;
+
+	if (priv->initialized)
+		return 0;
+
+	rc = mpipe_link_init(priv);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to init link.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	rc = mpipe_recv_init(priv);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to init rx.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	rc = mpipe_xmit_init(priv);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to init tx.\n",
+			mpipe_name(priv));
+		rte_free(priv);
+		return rc;
+	}
+
+	priv->initialized = 1;
+
+	return 0;
+}
+
+static int
+mpipe_start(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	struct mpipe_channel_config config;
+	struct mpipe_rx_queue *rx_queue;
+	struct rte_eth_link eth_link;
+	unsigned queue, buffers = 0;
+	size_t ring_size;
+	void *ring_mem;
+	int rc;
+
+	memset(&eth_link, 0, sizeof(eth_link));
+	mpipe_dev_atomic_write_link_status(dev, &eth_link);
+
+	rc = mpipe_init(priv);
+	if (rc < 0)
+		return rc;
+
+	/* Initialize NotifRings. */
+	for (queue = 0; queue < priv->nb_rx_queues; queue++) {
+		rx_queue = mpipe_rx_queue(priv, queue);
+		ring_size = rx_queue->q.nb_desc * sizeof(gxio_mpipe_idesc_t);
+
+		ring_mem = rte_malloc(NULL, ring_size, ring_size);
+		if (!ring_mem) {
+			RTE_LOG(ERR, PMD, "%s: Failed to alloc rx descs.\n",
+				mpipe_name(priv));
+			return -ENOMEM;
+		} else {
+			RTE_LOG(DEBUG, PMD, "%s: iDMA ring %d memory %p - %p.\n",
+				mpipe_name(priv), queue, ring_mem,
+				RTE_PTR_ADD(ring_mem, ring_size - 1));
+		}
+
+		rc = gxio_mpipe_iqueue_init(&rx_queue->iqueue, priv->context,
+					    priv->first_ring + queue, ring_mem,
+					    ring_size, 0);
+		if (rc < 0) {
+			RTE_LOG(ERR, PMD, "%s: Failed to init rx queue.\n",
+				mpipe_name(priv));
+			return rc;
+		}
+
+		rx_queue->rx_ring_mem = ring_mem;
+		buffers += rx_queue->q.nb_desc;
+	}
+
+	/* Initialize ingress NotifGroup and buckets. */
+	rc = gxio_mpipe_init_notif_group_and_buckets(priv->context,
+			priv->notif_group, priv->first_ring, priv->nb_rx_queues,
+			priv->first_bucket, MPIPE_RX_BUCKETS,
+			GXIO_MPIPE_BUCKET_STATIC_FLOW_AFFINITY);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to init group and buckets.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	/* Configure the classifier to deliver packets from this port. */
+	config.enable = 1;
+	config.first_bucket = priv->first_bucket;
+	config.num_buckets = MPIPE_RX_BUCKETS;
+	memset(&config.stacks, 0xff, sizeof(config.stacks));
+	config.stacks.stacks[priv->rx_size_code] = priv->stack;
+	config.head_room = priv->rx_offset & RTE_MEMPOOL_ALIGN_MASK;
+
+	rc = mpipe_channel_config(priv->instance, priv->channel,
+				  &config);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to setup classifier.\n",
+			mpipe_name(priv));
+		return rc;
+	}
+
+	/* Fill empty buffers into the buffer stack. */
+	mpipe_recv_fill_stack(priv, buffers);
+
+	/* Bring up the link. */
+	mpipe_set_link_up(dev);
+
+	/* Start xmit/recv on queues. */
+	for (queue = 0; queue < priv->nb_tx_queues; queue++)
+		mpipe_tx_queue(priv, queue)->q.link_status = 1;
+	for (queue = 0; queue < priv->nb_rx_queues; queue++)
+		mpipe_rx_queue(priv, queue)->q.link_status = 1;
+	priv->running = 1;
+
+	return 0;
+}
+
+static void
+mpipe_stop(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	struct mpipe_channel_config config;
+	unsigned queue;
+	int rc;
+
+	for (queue = 0; queue < priv->nb_tx_queues; queue++)
+		mpipe_tx_queue(priv, queue)->q.link_status = 0;
+	for (queue = 0; queue < priv->nb_rx_queues; queue++)
+		mpipe_rx_queue(priv, queue)->q.link_status = 0;
+
+	/* Make sure the link_status writes land. */
+	rte_wmb();
+
+	/*
+	 * Wait for link_status change to register with straggling datapath
+	 * threads.
+	 */
+	mpipe_dp_wait(priv);
+
+	/* Bring down the link. */
+	mpipe_set_link_down(dev);
+
+	/* Remove classifier rules. */
+	memset(&config, 0, sizeof(config));
+	rc = mpipe_channel_config(priv->instance, priv->channel,
+				  &config);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to stop classifier.\n",
+			mpipe_name(priv));
+	}
+
+	/* Flush completed xmit packets. */
+	mpipe_xmit_flush(priv);
+
+	/* Flush buffer stacks. */
+	mpipe_recv_flush(priv);
+
+	priv->running = 0;
+}
+
+static void
+mpipe_close(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	if (priv->running)
+		mpipe_stop(dev);
+}
+
+static void
+mpipe_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	struct mpipe_tx_queue *tx_queue;
+	struct mpipe_rx_queue *rx_queue;
+	unsigned i;
+	uint16_t idx;
+
+	memset(stats, 0, sizeof(*stats));
+
+	for (i = 0; i < priv->nb_tx_queues; i++) {
+		tx_queue = mpipe_tx_queue(priv, i);
+
+		stats->opackets += tx_queue->q.stats.packets;
+		stats->obytes   += tx_queue->q.stats.bytes;
+		stats->oerrors  += tx_queue->q.stats.errors;
+
+		idx = tx_queue->q.stat_idx;
+		if (idx != (uint16_t)-1) {
+			stats->q_opackets[idx] += tx_queue->q.stats.packets;
+			stats->q_obytes[idx]   += tx_queue->q.stats.bytes;
+			stats->q_errors[idx]   += tx_queue->q.stats.errors;
+		}
+	}
+
+	for (i = 0; i < priv->nb_rx_queues; i++) {
+		rx_queue = mpipe_rx_queue(priv, i);
+
+		stats->ipackets  += rx_queue->q.stats.packets;
+		stats->ibytes    += rx_queue->q.stats.bytes;
+		stats->ierrors   += rx_queue->q.stats.errors;
+		stats->rx_nombuf += rx_queue->q.stats.nomem;
+
+		idx = rx_queue->q.stat_idx;
+		if (idx != (uint16_t)-1) {
+			stats->q_ipackets[idx] += rx_queue->q.stats.packets;
+			stats->q_ibytes[idx]   += rx_queue->q.stats.bytes;
+			stats->q_errors[idx]   += rx_queue->q.stats.errors;
+		}
+	}
+}
+
+static void
+mpipe_stats_reset(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	struct mpipe_tx_queue *tx_queue;
+	struct mpipe_rx_queue *rx_queue;
+	unsigned i;
+
+	for (i = 0; i < priv->nb_tx_queues; i++) {
+		tx_queue = mpipe_tx_queue(priv, i);
+		memset(&tx_queue->q.stats, 0, sizeof(tx_queue->q.stats));
+	}
+
+	for (i = 0; i < priv->nb_rx_queues; i++) {
+		rx_queue = mpipe_rx_queue(priv, i);
+		memset(&rx_queue->q.stats, 0, sizeof(rx_queue->q.stats));
+	}
+}
+
+static int
+mpipe_queue_stats_mapping_set(struct rte_eth_dev *dev, uint16_t queue_id,
+			      uint8_t stat_idx, uint8_t is_rx)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+
+	if (is_rx) {
+		priv->rx_stat_mapping[stat_idx] = queue_id;
+	} else {
+		priv->tx_stat_mapping[stat_idx] = queue_id;
+	}
+
+	return 0;
+}
+
+static int
+mpipe_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		     uint16_t nb_desc, unsigned int socket_id __rte_unused,
+		     const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct mpipe_tx_queue *tx_queue = dev->data->tx_queues[queue_idx];
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	uint16_t idx;
+
+	tx_queue = rte_realloc(tx_queue, sizeof(*tx_queue),
+			       RTE_CACHE_LINE_SIZE);
+	if (!tx_queue) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate TX queue.\n",
+			mpipe_name(priv));
+		return -ENOMEM;
+	}
+
+	memset(&tx_queue->q, 0, sizeof(tx_queue->q));
+	tx_queue->q.priv = priv;
+	tx_queue->q.queue_idx = queue_idx;
+	tx_queue->q.port_id = dev->data->port_id;
+	tx_queue->q.nb_desc = nb_desc;
+
+	tx_queue->q.stat_idx = -1;
+	for (idx = 0; idx < RTE_ETHDEV_QUEUE_STAT_CNTRS; idx++) {
+		if (priv->tx_stat_mapping[idx] == queue_idx)
+			tx_queue->q.stat_idx = idx;
+	}
+
+	dev->data->tx_queues[queue_idx] = tx_queue;
+
+	return 0;
+}
+
+static void
+mpipe_tx_queue_release(void *_txq)
+{
+	rte_free(_txq);
+}
+
+static int
+mpipe_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		     uint16_t nb_desc, unsigned int socket_id __rte_unused,
+		     const struct rte_eth_rxconf *rx_conf __rte_unused,
+		     struct rte_mempool *mp)
+{
+	struct mpipe_rx_queue *rx_queue = dev->data->rx_queues[queue_idx];
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	uint16_t idx;
+	int size, rc;
+
+	rc = mpipe_iqueue_size(nb_desc);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Cannot allocate %d iqueue descs.\n",
+			mpipe_name(priv), (int)nb_desc);
+		return -ENOMEM;
+	}
+
+	if (rc != nb_desc) {
+		RTE_LOG(WARNING, PMD, "%s: Extending RX descs from %d to %d.\n",
+			mpipe_name(priv), (int)nb_desc, rc);
+		nb_desc = rc;
+	}
+
+	size = sizeof(*rx_queue);
+	rx_queue = rte_realloc(rx_queue, size, RTE_CACHE_LINE_SIZE);
+	if (!rx_queue) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate RX queue.\n",
+			mpipe_name(priv));
+		return -ENOMEM;
+	}
+
+	memset(&rx_queue->q, 0, sizeof(rx_queue->q));
+	rx_queue->q.priv = priv;
+	rx_queue->q.nb_desc = nb_desc;
+	rx_queue->q.port_id = dev->data->port_id;
+	rx_queue->q.queue_idx = queue_idx;
+
+	if (!priv->rx_mpool) {
+		int size = (rte_pktmbuf_data_room_size(mp) -
+			    RTE_PKTMBUF_HEADROOM -
+			    MPIPE_RX_IP_ALIGN);
+
+		priv->rx_offset = (sizeof(struct rte_mbuf) +
+				   rte_pktmbuf_priv_size(mp) +
+				   RTE_PKTMBUF_HEADROOM +
+				   MPIPE_RX_IP_ALIGN);
+		if (size < 0) {
+			RTE_LOG(ERR, PMD, "%s: Bad buffer size %d.\n",
+				mpipe_name(priv),
+				rte_pktmbuf_data_room_size(mp));
+			return -ENOMEM;
+		}
+
+		priv->rx_size_code = mpipe_buffer_size_index(size);
+		priv->rx_mpool = mp;
+	}
+
+	if (priv->rx_mpool != mp) {
+		RTE_LOG(WARNING, PMD, "%s: Ignoring multiple buffer pools.\n",
+			mpipe_name(priv));
+	}
+
+	rx_queue->q.stat_idx = -1;
+	for (idx = 0; idx < RTE_ETHDEV_QUEUE_STAT_CNTRS; idx++) {
+		if (priv->rx_stat_mapping[idx] == queue_idx)
+			rx_queue->q.stat_idx = idx;
+	}
+
+	dev->data->rx_queues[queue_idx] = rx_queue;
+
+	return 0;
+}
+
+static void
+mpipe_rx_queue_release(void *_rxq)
+{
+	rte_free(_rxq);
+}
+
+#define MPIPE_XGBE_ENA_HASH_MULTI	\
+	(1UL << MPIPE_XAUI_RECEIVE_CONFIGURATION__ENA_HASH_MULTI_SHIFT)
+#define MPIPE_XGBE_ENA_HASH_UNI		\
+	(1UL << MPIPE_XAUI_RECEIVE_CONFIGURATION__ENA_HASH_UNI_SHIFT)
+#define MPIPE_XGBE_COPY_ALL		\
+	(1UL << MPIPE_XAUI_RECEIVE_CONFIGURATION__COPY_ALL_SHIFT)
+#define MPIPE_GBE_ENA_MULTI_HASH	\
+	(1UL << MPIPE_GBE_NETWORK_CONFIGURATION__MULTI_HASH_ENA_SHIFT)
+#define MPIPE_GBE_ENA_UNI_HASH		\
+	(1UL << MPIPE_GBE_NETWORK_CONFIGURATION__UNI_HASH_ENA_SHIFT)
+#define MPIPE_GBE_COPY_ALL		\
+	(1UL << MPIPE_GBE_NETWORK_CONFIGURATION__COPY_ALL_SHIFT)
+
+static void
+mpipe_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	int64_t reg;
+	int addr;
+
+	if (priv->is_xaui) {
+		addr = MPIPE_XAUI_RECEIVE_CONFIGURATION;
+		reg  = gxio_mpipe_link_mac_rd(&priv->link, addr);
+		reg &= ~MPIPE_XGBE_ENA_HASH_MULTI;
+		reg &= ~MPIPE_XGBE_ENA_HASH_UNI;
+		reg |=  MPIPE_XGBE_COPY_ALL;
+		gxio_mpipe_link_mac_wr(&priv->link, addr, reg);
+	} else {
+		addr = MPIPE_GBE_NETWORK_CONFIGURATION;
+		reg  = gxio_mpipe_link_mac_rd(&priv->link, addr);
+		reg &= ~MPIPE_GBE_ENA_MULTI_HASH;
+		reg &= ~MPIPE_GBE_ENA_UNI_HASH;
+		reg |=  MPIPE_GBE_COPY_ALL;
+		gxio_mpipe_link_mac_wr(&priv->link, addr, reg);
+	}
+}
+
+static void
+mpipe_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct mpipe_dev_priv *priv = mpipe_priv(dev);
+	int64_t reg;
+	int addr;
+
+	if (priv->is_xaui) {
+		addr = MPIPE_XAUI_RECEIVE_CONFIGURATION;
+		reg  = gxio_mpipe_link_mac_rd(&priv->link, addr);
+		reg |=  MPIPE_XGBE_ENA_HASH_MULTI;
+		reg |=  MPIPE_XGBE_ENA_HASH_UNI;
+		reg &= ~MPIPE_XGBE_COPY_ALL;
+		gxio_mpipe_link_mac_wr(&priv->link, addr, reg);
+	} else {
+		addr = MPIPE_GBE_NETWORK_CONFIGURATION;
+		reg  = gxio_mpipe_link_mac_rd(&priv->link, addr);
+		reg |=  MPIPE_GBE_ENA_MULTI_HASH;
+		reg |=  MPIPE_GBE_ENA_UNI_HASH;
+		reg &= ~MPIPE_GBE_COPY_ALL;
+		gxio_mpipe_link_mac_wr(&priv->link, addr, reg);
+	}
+}
+
+static struct eth_dev_ops mpipe_dev_ops = {
+	.dev_infos_get	         = mpipe_infos_get,
+	.dev_configure	         = mpipe_configure,
+	.dev_start	         = mpipe_start,
+	.dev_stop	         = mpipe_stop,
+	.dev_close	         = mpipe_close,
+	.stats_get	         = mpipe_stats_get,
+	.stats_reset	         = mpipe_stats_reset,
+	.queue_stats_mapping_set = mpipe_queue_stats_mapping_set,
+	.tx_queue_setup	         = mpipe_tx_queue_setup,
+	.rx_queue_setup	         = mpipe_rx_queue_setup,
+	.tx_queue_release	 = mpipe_tx_queue_release,
+	.rx_queue_release	 = mpipe_rx_queue_release,
+	.link_update	         = mpipe_link_update,
+	.dev_set_link_up         = mpipe_set_link_up,
+	.dev_set_link_down       = mpipe_set_link_down,
+	.promiscuous_enable      = mpipe_promiscuous_enable,
+	.promiscuous_disable     = mpipe_promiscuous_disable,
+};
+
+static inline void
+mpipe_xmit_null(struct mpipe_dev_priv *priv, int64_t start, int64_t end)
+{
+	gxio_mpipe_edesc_t null_desc = { { .bound = 1, .ns = 1 } };
+	gxio_mpipe_equeue_t *equeue = &priv->equeue;
+	int64_t slot;
+
+	for (slot = start; slot < end; slot++) {
+		gxio_mpipe_equeue_put_at(equeue, null_desc, slot);
+	}
+}
+
+static void
+mpipe_xmit_flush(struct mpipe_dev_priv *priv)
+{
+	gxio_mpipe_equeue_t *equeue = &priv->equeue;
+	int64_t slot;
+
+	/* Post a dummy descriptor and wait for its return. */
+	slot = gxio_mpipe_equeue_reserve(equeue, 1);
+	if (slot < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to reserve stop slot.\n",
+			mpipe_name(priv));
+		return;
+	}
+
+	mpipe_xmit_null(priv, slot, slot + 1);
+
+	while (!gxio_mpipe_equeue_is_complete(equeue, slot, 1)) {
+		rte_pause();
+	}
+
+	for (slot = 0; slot < priv->equeue_size; slot++) {
+		if (priv->tx_comps[slot]) {
+			rte_pktmbuf_free_seg(priv->tx_comps[slot]);
+			priv->tx_comps[slot] = NULL;
+		}
+	}
+}
+
+static void
+mpipe_recv_flush(struct mpipe_dev_priv *priv)
+{
+	uint8_t in_port = priv->port_id;
+	struct mpipe_rx_queue *rx_queue;
+	gxio_mpipe_iqueue_t *iqueue;
+	gxio_mpipe_idesc_t idesc;
+	struct rte_mbuf *mbuf;
+	int retries = 0;
+	unsigned queue;
+
+	do {
+		mpipe_recv_flush_stack(priv);
+
+		/* Flush packets sitting in recv queues. */
+		for (queue = 0; queue < priv->nb_rx_queues; queue++) {
+			rx_queue = mpipe_rx_queue(priv, queue);
+			iqueue = &rx_queue->iqueue;
+			while (gxio_mpipe_iqueue_try_get(iqueue, &idesc) >= 0) {
+				mbuf = mpipe_recv_mbuf(priv, &idesc, in_port);
+				rte_pktmbuf_free(mbuf);
+				priv->rx_buffers--;
+			}
+			rte_free(rx_queue->rx_ring_mem);
+			rx_queue->rx_ring_mem = NULL;
+		}
+	} while (retries++ < 10 && priv->rx_buffers);
+
+	if (priv->rx_buffers) {
+		RTE_LOG(ERR, PMD, "%s: Leaked %d receive buffers.\n",
+			mpipe_name(priv), priv->rx_buffers);
+	} else {
+		PMD_DEBUG_RX("%s: Returned all receive buffers.\n",
+			     mpipe_name(priv));
+	}
+}
+
+static inline uint16_t
+mpipe_do_xmit(struct mpipe_tx_queue *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct mpipe_dev_priv *priv = tx_queue->q.priv;
+	gxio_mpipe_equeue_t *equeue = &priv->equeue;
+	unsigned nb_bytes = 0;
+	unsigned nb_sent = 0;
+	int nb_slots, i;
+
+	PMD_DEBUG_TX("Trying to transmit %d packets on %s:%d.\n",
+		     nb_pkts, mpipe_name(tx_queue->q.priv),
+		     tx_queue->q.queue_idx);
+
+	/* Optimistic assumption that we need exactly one slot per packet. */
+	nb_slots = RTE_MIN(nb_pkts, MPIPE_TX_DESCS / 2);
+
+	do {
+		struct rte_mbuf *mbuf = NULL, *pkt = NULL;
+		int64_t slot;
+
+		/* Reserve eDMA ring slots. */
+		slot = gxio_mpipe_equeue_try_reserve_fast(equeue, nb_slots);
+		if (unlikely(slot < 0)) {
+			break;
+		}
+
+		for (i = 0; i < nb_slots; i++) {
+			unsigned idx = (slot + i) & (priv->equeue_size - 1);
+			rte_prefetch0(priv->tx_comps[idx]);
+		}
+
+		/* Fill up slots with descriptor and completion info. */
+		for (i = 0; i < nb_slots; i++) {
+			unsigned idx = (slot + i) & (priv->equeue_size - 1);
+			gxio_mpipe_edesc_t desc;
+			struct rte_mbuf *next;
+
+			/* Starting on a new packet? */
+			if (likely(!mbuf)) {
+				int room = nb_slots - i;
+
+				pkt = mbuf = tx_pkts[nb_sent];
+
+				/* Bail out if we run out of descs. */
+				if (unlikely(pkt->nb_segs > room))
+					break;
+
+				nb_sent++;
+			}
+
+			/* We have a segment to send. */
+			next = mbuf->next;
+
+			if (priv->tx_comps[idx])
+				rte_pktmbuf_free_seg(priv->tx_comps[idx]);
+
+			desc = (gxio_mpipe_edesc_t) { {
+				.va        = rte_pktmbuf_mtod(mbuf, uintptr_t),
+				.xfer_size = rte_pktmbuf_data_len(mbuf),
+				.bound     = next ? 0 : 1,
+			} };
+
+			nb_bytes += mbuf->data_len;
+			priv->tx_comps[idx] = mbuf;
+			gxio_mpipe_equeue_put_at(equeue, desc, slot + i);
+
+			PMD_DEBUG_TX("%s:%d: Sending packet %p, len %d\n",
+				     mpipe_name(priv),
+				     tx_queue->q.queue_idx,
+				     rte_pktmbuf_mtod(mbuf, void *),
+				     rte_pktmbuf_data_len(mbuf));
+
+			mbuf = next;
+		}
+
+		if (unlikely(nb_sent < nb_pkts)) {
+
+			/* Fill remaining slots with null descriptors. */
+			mpipe_xmit_null(priv, slot + i, slot + nb_slots);
+
+			/*
+			 * Calculate exact number of descriptors needed for
+			 * the next go around.
+			 */
+			nb_slots = 0;
+			for (i = nb_sent; i < nb_pkts; i++) {
+				nb_slots += tx_pkts[i]->nb_segs;
+			}
+
+			nb_slots = RTE_MIN(nb_slots, MPIPE_TX_DESCS / 2);
+		}
+	} while (nb_sent < nb_pkts);
+
+	tx_queue->q.stats.packets += nb_sent;
+	tx_queue->q.stats.bytes   += nb_bytes;
+
+	return nb_sent;
+}
+
+static inline uint16_t
+mpipe_do_recv(struct mpipe_rx_queue *rx_queue, struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct mpipe_dev_priv *priv = rx_queue->q.priv;
+	gxio_mpipe_iqueue_t *iqueue = &rx_queue->iqueue;
+	gxio_mpipe_idesc_t *first_idesc, *idesc, *last_idesc;
+	uint8_t in_port = rx_queue->q.port_id;
+	const unsigned look_ahead = 8;
+	int room = nb_pkts, rc = 0;
+	unsigned nb_packets = 0;
+	unsigned nb_dropped = 0;
+	unsigned nb_nomem = 0;
+	unsigned nb_bytes = 0;
+	unsigned nb_descs, i;
+
+	while (room && !rc) {
+		if (rx_queue->avail_descs < room) {
+			rc = gxio_mpipe_iqueue_try_peek(iqueue,
+							&rx_queue->next_desc);
+			rx_queue->avail_descs = rc < 0 ? 0 : rc;
+		}
+
+		if (unlikely(!rx_queue->avail_descs)) {
+			break;
+		}
+
+		nb_descs = RTE_MIN(room, rx_queue->avail_descs);
+
+		first_idesc = rx_queue->next_desc;
+		last_idesc  = first_idesc + nb_descs;
+
+		rx_queue->next_desc   += nb_descs;
+		rx_queue->avail_descs -= nb_descs;
+
+		for (i = 1; i < look_ahead; i++) {
+			rte_prefetch0(first_idesc + i);
+		}
+
+		PMD_DEBUG_RX("%s:%d: Trying to receive %d packets\n",
+			     mpipe_name(rx_queue->q.priv),
+			     rx_queue->q.queue_idx,
+			     nb_descs);
+
+		for (idesc = first_idesc; idesc < last_idesc; idesc++) {
+			struct rte_mbuf *mbuf;
+
+			PMD_DEBUG_RX("%s:%d: processing idesc %d/%d\n",
+				     mpipe_name(priv),
+				     rx_queue->q.queue_idx,
+				     nb_packets, nb_descs);
+
+			rte_prefetch0(idesc + look_ahead);
+
+			PMD_DEBUG_RX("%s:%d: idesc %p, %s%s%s%s%s%s%s%s%s%s"
+				     "size: %d, bkt: %d, chan: %d, ring: %d, sqn: %lu, va: %lu\n",
+				     mpipe_name(priv),
+				     rx_queue->q.queue_idx,
+				     idesc,
+				     idesc->me ? "me, " : "",
+				     idesc->tr ? "tr, " : "",
+				     idesc->ce ? "ce, " : "",
+				     idesc->ct ? "ct, " : "",
+				     idesc->cs ? "cs, " : "",
+				     idesc->nr ? "nr, " : "",
+				     idesc->sq ? "sq, " : "",
+				     idesc->ts ? "ts, " : "",
+				     idesc->ps ? "ps, " : "",
+				     idesc->be ? "be, " : "",
+				     idesc->l2_size,
+				     idesc->bucket_id,
+				     idesc->channel,
+				     idesc->notif_ring,
+				     (unsigned long)idesc->packet_sqn,
+				     (unsigned long)idesc->va);
+
+			if (unlikely(gxio_mpipe_idesc_has_error(idesc))) {
+				nb_dropped++;
+				gxio_mpipe_iqueue_drop(iqueue, idesc);
+				PMD_DEBUG_RX("%s:%d: Descriptor error\n",
+					     mpipe_name(rx_queue->q.priv),
+					     rx_queue->q.queue_idx);
+				continue;
+			}
+
+			mbuf = __rte_mbuf_raw_alloc(priv->rx_mpool);
+			if (unlikely(!mbuf)) {
+				nb_nomem++;
+				gxio_mpipe_iqueue_drop(iqueue, idesc);
+				PMD_DEBUG_RX("%s:%d: RX alloc failure\n",
+					     mpipe_name(rx_queue->q.priv),
+					     rx_queue->q.queue_idx);
+				continue;
+			}
+
+			mpipe_recv_push(priv, mbuf);
+
+			/* Get and setup the mbuf for the received packet. */
+			mbuf = mpipe_recv_mbuf(priv, idesc, in_port);
+
+			/* Update results and statistics counters. */
+			rx_pkts[nb_packets] = mbuf;
+			nb_bytes += mbuf->pkt_len;
+			nb_packets++;
+		}
+
+		/*
+		 * We release the ring in bursts, but do not track and release
+		 * buckets.  This therefore breaks dynamic flow affinity, but
+		 * we always operate in static affinity mode, and so we're OK
+		 * with this optimization.
+		 */
+		gxio_mpipe_iqueue_advance(iqueue, nb_descs);
+		gxio_mpipe_credit(iqueue->context, iqueue->ring, -1, nb_descs);
+
+		/*
+		 * Go around once more if we haven't yet peeked the queue, and
+		 * if we have more room to receive.
+		 */
+		room = nb_pkts - nb_packets;
+	}
+
+	rx_queue->q.stats.packets += nb_packets;
+	rx_queue->q.stats.bytes   += nb_bytes;
+	rx_queue->q.stats.errors  += nb_dropped;
+	rx_queue->q.stats.nomem   += nb_nomem;
+
+	PMD_DEBUG_RX("%s:%d: RX: %d/%d pkts/bytes, %d/%d drops/nomem\n",
+		     mpipe_name(rx_queue->q.priv), rx_queue->q.queue_idx,
+		     nb_packets, nb_bytes, nb_dropped, nb_nomem);
+
+	return nb_packets;
+}
+
+static uint16_t
+mpipe_recv_pkts(void *_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct mpipe_rx_queue *rx_queue = _rxq;
+	uint16_t result = 0;
+
+	if (rx_queue) {
+		mpipe_dp_enter(rx_queue->q.priv);
+		if (likely(rx_queue->q.link_status))
+			result = mpipe_do_recv(rx_queue, rx_pkts, nb_pkts);
+		mpipe_dp_exit(rx_queue->q.priv);
+	}
+
+	return result;
+}
+
+static uint16_t
+mpipe_xmit_pkts(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct mpipe_tx_queue *tx_queue = _txq;
+	uint16_t result = 0;
+
+	if (tx_queue) {
+		mpipe_dp_enter(tx_queue->q.priv);
+		if (likely(tx_queue->q.link_status))
+			result = mpipe_do_xmit(tx_queue, tx_pkts, nb_pkts);
+		mpipe_dp_exit(tx_queue->q.priv);
+	}
+
+	return result;
+}
+
+static int
+mpipe_link_mac(const char *ifname, uint8_t *mac)
+{
+	int rc, idx;
+	char name[GXIO_MPIPE_LINK_NAME_LEN];
+
+	for (idx = 0, rc = 0; !rc; idx++) {
+		rc = gxio_mpipe_link_enumerate_mac(idx, name, mac);
+		if (!rc && !strncmp(name, ifname, GXIO_MPIPE_LINK_NAME_LEN))
+			return 0;
+	}
+	return -ENODEV;
+}
+
+static int
+rte_pmd_mpipe_devinit(const char *ifname,
+		      const char *params __rte_unused)
+{
+	gxio_mpipe_context_t *context;
+	struct rte_eth_dev *eth_dev;
+	struct mpipe_dev_priv *priv;
+	int instance, rc;
+	uint8_t *mac;
+
+	/* Get the mPIPE instance that the device belongs to. */
+	instance = gxio_mpipe_link_instance(ifname);
+	context = mpipe_context(instance);
+	if (!context) {
+		RTE_LOG(ERR, PMD, "%s: No device for link.\n", ifname);
+		return -ENODEV;
+	}
+
+	priv = rte_zmalloc(NULL, sizeof(*priv), 0);
+	if (!priv) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate priv.\n", ifname);
+		return -ENOMEM;
+	}
+
+	memset(&priv->tx_stat_mapping, 0xff, sizeof(priv->tx_stat_mapping));
+	memset(&priv->rx_stat_mapping, 0xff, sizeof(priv->rx_stat_mapping));
+	priv->context = context;
+	priv->instance = instance;
+	priv->is_xaui = (strncmp(ifname, "xgbe", 4) == 0);
+	priv->pci_dev.numa_node = instance;
+	priv->channel = -1;
+
+	mac = priv->mac_addr.addr_bytes;
+	rc = mpipe_link_mac(ifname, mac);
+	if (rc < 0) {
+		RTE_LOG(ERR, PMD, "%s: Failed to enumerate link.\n", ifname);
+		rte_free(priv);
+		return -ENODEV;
+	}
+
+	eth_dev = rte_eth_dev_allocate(ifname, RTE_ETH_DEV_VIRTUAL);
+	if (!eth_dev) {
+		RTE_LOG(ERR, PMD, "%s: Failed to allocate device.\n", ifname);
+		rte_free(priv);
+		return -ENOMEM;
+	}
+
+	RTE_LOG(INFO, PMD, "%s: Initialized mpipe device "
+		"(mac %02x:%02x:%02x:%02x:%02x:%02x).\n",
+		ifname, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
+
+	priv->eth_dev = eth_dev;
+	priv->port_id = eth_dev->data->port_id;
+	eth_dev->data->dev_private = priv;
+	eth_dev->pci_dev = &priv->pci_dev;
+	eth_dev->data->mac_addrs = &priv->mac_addr;
+
+	eth_dev->dev_ops      = &mpipe_dev_ops;
+	eth_dev->rx_pkt_burst = &mpipe_recv_pkts;
+	eth_dev->tx_pkt_burst = &mpipe_xmit_pkts;
+
+	return 0;
+}
+
+static struct rte_driver pmd_mpipe_xgbe_drv = {
+	.name = "xgbe",
+	.type = PMD_VDEV,
+	.init = rte_pmd_mpipe_devinit,
+};
+
+static struct rte_driver pmd_mpipe_gbe_drv = {
+	.name = "gbe",
+	.type = PMD_VDEV,
+	.init = rte_pmd_mpipe_devinit,
+};
+
+PMD_REGISTER_DRIVER(pmd_mpipe_xgbe_drv);
+PMD_REGISTER_DRIVER(pmd_mpipe_gbe_drv);
+
+static void __attribute__((constructor, used))
+mpipe_init_contexts(void)
+{
+	struct mpipe_context *context;
+	int rc, instance;
+
+	for (instance = 0; instance < GXIO_MPIPE_INSTANCE_MAX; instance++) {
+		context = &mpipe_contexts[instance];
+
+		rte_spinlock_init(&context->lock);
+		rc = gxio_mpipe_init(&context->context, instance);
+		if (rc < 0)
+			break;
+	}
+
+	mpipe_instances = instance;
+}
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index ad6f633..afd939a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -137,6 +137,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING)       += -lrte_pmd_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lrte_pmd_mpipe -lgxio
 
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
-- 
2.1.2

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v5 11/11] maintainers: claim responsibility for TILE-Gx platform
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (9 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 10/11] tile: Add TILE-Gx mPIPE poll mode driver Zhigang Lu
@ 2015-07-09  8:25 ` Zhigang Lu
  2015-07-13 14:17 ` [dpdk-dev] [PATCH v5 00/11] Introducing the " Thomas Monjalon
  11 siblings, 0 replies; 13+ messages in thread
From: Zhigang Lu @ 2015-07-09  8:25 UTC (permalink / raw)
  To: dev; +Cc: Cyril Chemparathy

From: Cyril Chemparathy <cchemparathy@ezchip.com>

Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
---
 MAINTAINERS | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5476a73..6ffa01b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -114,6 +114,10 @@ M: Bruce Richardson <bruce.richardson@intel.com>
 M: Konstantin Ananyev <konstantin.ananyev@intel.com>
 F: lib/librte_eal/common/include/arch/x86/
 
+EZchip TILE-Gx
+M: Zhigang Lu <zlu@ezchip.com>
+F: lib/librte_eal/common/include/arch/tile/
+
 Linux EAL (with overlaps)
 M: David Marchand <david.marchand@6wind.com>
 F: lib/librte_eal/linuxapp/Makefile
-- 
2.1.2


* Re: [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform
  2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
                   ` (10 preceding siblings ...)
  2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 11/11] maintainers: claim responsibility for TILE-Gx platform Zhigang Lu
@ 2015-07-13 14:17 ` Thomas Monjalon
  11 siblings, 0 replies; 13+ messages in thread
From: Thomas Monjalon @ 2015-07-13 14:17 UTC (permalink / raw)
  To: Zhigang Lu; +Cc: dev

2015-07-09 16:25, Zhigang Lu:
> This series adds support for the EZchip TILE-Gx family of SoCs.  The
> architecture port in itself is fairly straightforward due to its
> reliance on generics for the most part.
> 
> In addition to adding TILE-Gx architecture specific code, this series
> includes a few cross-platform fixes for DPDK (cpuflags, SSE related,
> etc.), as well as minor extensions to accommodate a wider range of
> hugepage sizes and configurable mempool element alignment boundaries.

Applied, thanks and welcome in DPDK maintenance ;)



Thread overview: 13+ messages
2015-07-09  8:25 [dpdk-dev] [PATCH v5 00/11] Introducing the TILE-Gx platform Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 01/11] test: limit x86 cpuflags checks to x86 builds Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 02/11] hash: check SSE flags only on " Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 03/11] eal: allow empty compile time flags RTE_COMPILE_TIME_CPUFLAGS Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 04/11] config: remove RTE_LIBNAME definition Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 05/11] memzone: refactor rte_memzone_reserve() variants Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 06/11] memzone: allow multiple pagesizes to be requested Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 07/11] mempool: allow config override on element alignment Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 08/11] tile: add page sizes for TILE-Gx/Mx platforms Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 09/11] tile: initial TILE-Gx support Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 10/11] tile: Add TILE-Gx mPIPE poll mode driver Zhigang Lu
2015-07-09  8:25 ` [dpdk-dev] [PATCH v5 11/11] maintainers: claim responsibility for TILE-Gx platform Zhigang Lu
2015-07-13 14:17 ` [dpdk-dev] [PATCH v5 00/11] Introducing the " Thomas Monjalon
