DPDK patches and discussions
* [dpdk-dev] [v2 00/23] Packet Framework
@ 2014-06-04 18:08 Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 01/23] librte_lpm: rule_is_present Cristian Dumitrescu
                   ` (25 more replies)
  0 siblings, 26 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

(Version 2 changes are exclusively style changes (checkpatch.pl) and patch consolidation; there are no functional changes.)

Intel DPDK Packet Framework provides a standard methodology (logically similar to OpenFlow) for rapid development of complex packet processing pipelines out of ports, tables and actions.

A pipeline is constructed by connecting its input ports to its output ports through a chain of lookup tables. As a result of the lookup operation on the current table, one of the table entries (or the default table entry, in the case of a lookup miss) is identified, which provides the actions to be executed on the current packet and the associated action meta-data. The behavior of user actions is defined through the configurable table action handler, while the reserved actions define the next hop for the current packet (another table, an output port or packet drop) and are handled transparently by the framework.
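
To make the flow concrete, below is a minimal, hypothetical sketch of how an application could assemble a single-table pass-through pipeline with the new API (the function and structure names come from the rte_pipeline, rte_port and rte_table headers added by this series; the port/queue IDs, burst sizes and socket ID are illustrative values only):

#include <rte_pipeline.h>
#include <rte_port_ethdev.h>
#include <rte_table_stub.h>

static struct rte_pipeline *
app_pipeline_create(void)
{
	struct rte_pipeline_params pp = {
		.name = "PIPELINE0",
		.socket_id = 0, /* example socket */
	};
	struct rte_pipeline *p = rte_pipeline_create(&pp);
	uint32_t port_in_id, port_out_id, table_id;

	/* Input port on top of NIC RX queue 0 of port 0 (example IDs) */
	struct rte_port_ethdev_reader_params rx_conf = {
		.port_id = 0,
		.queue_id = 0,
	};
	struct rte_pipeline_port_in_params port_in = {
		.ops = &rte_port_ethdev_reader_ops,
		.arg_create = &rx_conf,
		.burst_size = 32,
	};
	rte_pipeline_port_in_create(p, &port_in, &port_in_id);

	/* Output port on top of NIC TX queue 0 of port 1 (example IDs) */
	struct rte_port_ethdev_writer_params tx_conf = {
		.port_id = 1,
		.queue_id = 0,
		.tx_burst_sz = 32,
	};
	struct rte_pipeline_port_out_params port_out = {
		.ops = &rte_port_ethdev_writer_ops,
		.arg_create = &tx_conf,
	};
	rte_pipeline_port_out_create(p, &port_out, &port_out_id);

	/* Stub table: every lookup misses, so the default entry applies */
	struct rte_pipeline_table_params table = {
		.ops = &rte_table_stub_ops,
	};
	rte_pipeline_table_create(p, &table, &table_id);

	/* Reserved action on the default entry: send to the output port */
	struct rte_pipeline_table_entry default_entry = {
		.action = RTE_PIPELINE_ACTION_PORT,
		.port_id = port_out_id,
	};
	struct rte_pipeline_table_entry *default_entry_ptr;
	rte_pipeline_table_default_entry_add(p, table_id, &default_entry,
		&default_entry_ptr);

	/* Wire up, enable and validate the pipeline */
	rte_pipeline_port_in_connect_to_table(p, port_in_id, table_id);
	rte_pipeline_port_in_enable(p, port_in_id);
	rte_pipeline_check(p);

	return p;
}

/* Run-time, on the assigned core: for ( ; ; ) rte_pipeline_run(p); */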

Three new Intel DPDK libraries are introduced for Packet Framework: librte_port, librte_table and librte_pipeline. Please check the Intel DPDK Programmer's Guide for a full description of the Packet Framework design.

Two sample applications are provided for Packet Framework: app/test-pipeline and examples/ip_pipeline. Please check the Intel DPDK Sample Applications Guide for a detailed description of these sample apps.

Cristian Dumitrescu (23):
  librte_lpm: rule_is_present
  mbuf: meta-data
  Packet Framework librte_port: Port API
  Packet Framework librte_port: ethdev ports
  Packet Framework librte_port: ring ports
  Packet Framework librte_port: IPv4 frag port
  Packet Framework librte_port: IPv4 reassembly
  Packet Framework librte_port: hierarchical scheduler port
  Packet Framework librte_port: Source/Sink ports
  Packet Framework librte_port: Build infrastructure
  Packet Framework librte_table: Table API
  Packet Framework librte_table: LPM IPv4 table
  Packet Framework librte_table: LPM IPv6 table
  Packet Framework librte_table: ACL table
  Packet Framework librte_table: Hash tables
  Packet Framework librte_table: array table
  Packet Framework librte_table: Stub table
  Packet Framework librte_table: Build infrastructure
  Packet Framework librte_pipeline: Pipeline
  librte_cfgfile: interpret config files
  Packet Framework performance application
  Packet Framework IPv4 pipeline sample app
  Packet Framework unit tests

 app/Makefile                                       |    1 +
 app/test-pipeline/Makefile                         |   66 +
 app/test-pipeline/config.c                         |  248 +++
 app/test-pipeline/init.c                           |  295 +++
 app/test-pipeline/main.c                           |  180 ++
 app/test-pipeline/main.h                           |  148 ++
 app/test-pipeline/pipeline_acl.c                   |  278 +++
 app/test-pipeline/pipeline_hash.c                  |  487 +++++
 app/test-pipeline/pipeline_lpm.c                   |  196 ++
 app/test-pipeline/pipeline_lpm_ipv6.c              |  200 ++
 app/test-pipeline/pipeline_stub.c                  |  165 ++
 app/test-pipeline/runtime.c                        |  185 ++
 app/test/Makefile                                  |    6 +
 app/test/commands.c                                |    4 +-
 app/test/test.h                                    |    1 +
 app/test/test_table.c                              |  220 +++
 app/test/test_table.h                              |  204 ++
 app/test/test_table_acl.c                          |  593 ++++++
 app/test/test_table_acl.h                          |   35 +
 app/test/test_table_combined.c                     |  784 ++++++++
 app/test/test_table_combined.h                     |   55 +
 app/test/test_table_pipeline.c                     |  603 ++++++
 app/test/test_table_pipeline.h                     |   35 +
 app/test/test_table_ports.c                        |  224 +++
 app/test/test_table_ports.h                        |   42 +
 app/test/test_table_tables.c                       |  907 +++++++++
 app/test/test_table_tables.h                       |   50 +
 config/common_bsdapp                               |   25 +
 config/common_linuxapp                             |   24 +
 doc/doxy-api-index.md                              |   17 +
 doc/doxy-api.conf                                  |    3 +
 examples/ip_pipeline/Makefile                      |   67 +
 examples/ip_pipeline/cmdline.c                     | 1976 ++++++++++++++++++++
 examples/ip_pipeline/config.c                      |  420 +++++
 examples/ip_pipeline/init.c                        |  614 ++++++
 examples/ip_pipeline/ip_pipeline.cfg               |   56 +
 examples/ip_pipeline/ip_pipeline.sh                |   18 +
 examples/ip_pipeline/main.c                        |  171 ++
 examples/ip_pipeline/main.h                        |  306 +++
 examples/ip_pipeline/pipeline_firewall.c           |  313 ++++
 .../ip_pipeline/pipeline_flow_classification.c     |  306 +++
 examples/ip_pipeline/pipeline_ipv4_frag.c          |  184 ++
 examples/ip_pipeline/pipeline_ipv4_ras.c           |  181 ++
 examples/ip_pipeline/pipeline_passthrough.c        |  213 +++
 examples/ip_pipeline/pipeline_routing.c            |  474 +++++
 examples/ip_pipeline/pipeline_rx.c                 |  385 ++++
 examples/ip_pipeline/pipeline_tx.c                 |  283 +++
 lib/Makefile                                       |    4 +
 lib/librte_cfgfile/Makefile                        |   53 +
 lib/librte_cfgfile/rte_cfgfile.c                   |  354 ++++
 lib/librte_cfgfile/rte_cfgfile.h                   |  195 ++
 lib/librte_eal/common/include/rte_hexdump.h        |    2 +
 lib/librte_eal/common/include/rte_log.h            |    3 +
 lib/librte_lpm/rte_lpm.c                           |   29 +
 lib/librte_lpm/rte_lpm.h                           |   19 +
 lib/librte_lpm/rte_lpm6.c                          |   31 +
 lib/librte_lpm/rte_lpm6.h                          |   19 +
 lib/librte_mbuf/rte_mbuf.h                         |   25 +
 lib/librte_pipeline/Makefile                       |   54 +
 lib/librte_pipeline/rte_pipeline.c                 | 1373 ++++++++++++++
 lib/librte_pipeline/rte_pipeline.h                 |  664 +++++++
 lib/librte_port/Makefile                           |   72 +
 lib/librte_port/ipv4_frag_tbl.h                    |  403 ++++
 lib/librte_port/ipv4_rsmbl.h                       |  429 +++++
 lib/librte_port/rte_ipv4_frag.h                    |  253 +++
 lib/librte_port/rte_port.h                         |  190 ++
 lib/librte_port/rte_port_ethdev.c                  |  305 +++
 lib/librte_port/rte_port_ethdev.h                  |   86 +
 lib/librte_port/rte_port_frag.c                    |  235 +++
 lib/librte_port/rte_port_frag.h                    |   94 +
 lib/librte_port/rte_port_ras.c                     |  256 +++
 lib/librte_port/rte_port_ras.h                     |   83 +
 lib/librte_port/rte_port_ring.c                    |  237 +++
 lib/librte_port/rte_port_ring.h                    |   82 +
 lib/librte_port/rte_port_sched.c                   |  239 +++
 lib/librte_port/rte_port_sched.h                   |   82 +
 lib/librte_port/rte_port_source_sink.c             |  158 ++
 lib/librte_port/rte_port_source_sink.h             |   70 +
 lib/librte_table/Makefile                          |   85 +
 lib/librte_table/rte_lru.h                         |  213 +++
 lib/librte_table/rte_table.h                       |  202 ++
 lib/librte_table/rte_table_acl.c                   |  490 +++++
 lib/librte_table/rte_table_acl.h                   |   95 +
 lib/librte_table/rte_table_array.c                 |  204 ++
 lib/librte_table/rte_table_array.h                 |   76 +
 lib/librte_table/rte_table_hash.h                  |  350 ++++
 lib/librte_table/rte_table_hash_ext.c              | 1122 +++++++++++
 lib/librte_table/rte_table_hash_key16.c            | 1100 +++++++++++
 lib/librte_table/rte_table_hash_key32.c            | 1120 +++++++++++
 lib/librte_table/rte_table_hash_key8.c             | 1398 ++++++++++++++
 lib/librte_table/rte_table_hash_lru.c              | 1065 +++++++++++
 lib/librte_table/rte_table_lpm.c                   |  347 ++++
 lib/librte_table/rte_table_lpm.h                   |  115 ++
 lib/librte_table/rte_table_lpm_ipv6.c              |  361 ++++
 lib/librte_table/rte_table_lpm_ipv6.h              |  119 ++
 lib/librte_table/rte_table_stub.c                  |   65 +
 lib/librte_table/rte_table_stub.h                  |   62 +
 mk/rte.app.mk                                      |   16 +
 98 files changed, 26951 insertions(+), 1 deletions(-)
 create mode 100644 app/test-pipeline/Makefile
 create mode 100644 app/test-pipeline/config.c
 create mode 100644 app/test-pipeline/init.c
 create mode 100644 app/test-pipeline/main.c
 create mode 100644 app/test-pipeline/main.h
 create mode 100644 app/test-pipeline/pipeline_acl.c
 create mode 100644 app/test-pipeline/pipeline_hash.c
 create mode 100644 app/test-pipeline/pipeline_lpm.c
 create mode 100644 app/test-pipeline/pipeline_lpm_ipv6.c
 create mode 100644 app/test-pipeline/pipeline_stub.c
 create mode 100644 app/test-pipeline/runtime.c
 create mode 100644 app/test/test_table.c
 create mode 100644 app/test/test_table.h
 create mode 100644 app/test/test_table_acl.c
 create mode 100644 app/test/test_table_acl.h
 create mode 100644 app/test/test_table_combined.c
 create mode 100644 app/test/test_table_combined.h
 create mode 100644 app/test/test_table_pipeline.c
 create mode 100644 app/test/test_table_pipeline.h
 create mode 100644 app/test/test_table_ports.c
 create mode 100644 app/test/test_table_ports.h
 create mode 100644 app/test/test_table_tables.c
 create mode 100644 app/test/test_table_tables.h
 create mode 100644 examples/ip_pipeline/Makefile
 create mode 100644 examples/ip_pipeline/cmdline.c
 create mode 100644 examples/ip_pipeline/config.c
 create mode 100644 examples/ip_pipeline/init.c
 create mode 100644 examples/ip_pipeline/ip_pipeline.cfg
 create mode 100644 examples/ip_pipeline/ip_pipeline.sh
 create mode 100644 examples/ip_pipeline/main.c
 create mode 100644 examples/ip_pipeline/main.h
 create mode 100644 examples/ip_pipeline/pipeline_firewall.c
 create mode 100644 examples/ip_pipeline/pipeline_flow_classification.c
 create mode 100644 examples/ip_pipeline/pipeline_ipv4_frag.c
 create mode 100644 examples/ip_pipeline/pipeline_ipv4_ras.c
 create mode 100644 examples/ip_pipeline/pipeline_passthrough.c
 create mode 100644 examples/ip_pipeline/pipeline_routing.c
 create mode 100644 examples/ip_pipeline/pipeline_rx.c
 create mode 100644 examples/ip_pipeline/pipeline_tx.c
 create mode 100644 lib/librte_cfgfile/Makefile
 create mode 100644 lib/librte_cfgfile/rte_cfgfile.c
 create mode 100644 lib/librte_cfgfile/rte_cfgfile.h
 create mode 100644 lib/librte_pipeline/Makefile
 create mode 100644 lib/librte_pipeline/rte_pipeline.c
 create mode 100644 lib/librte_pipeline/rte_pipeline.h
 create mode 100644 lib/librte_port/Makefile
 create mode 100644 lib/librte_port/ipv4_frag_tbl.h
 create mode 100644 lib/librte_port/ipv4_rsmbl.h
 create mode 100644 lib/librte_port/rte_ipv4_frag.h
 create mode 100644 lib/librte_port/rte_port.h
 create mode 100644 lib/librte_port/rte_port_ethdev.c
 create mode 100644 lib/librte_port/rte_port_ethdev.h
 create mode 100644 lib/librte_port/rte_port_frag.c
 create mode 100644 lib/librte_port/rte_port_frag.h
 create mode 100644 lib/librte_port/rte_port_ras.c
 create mode 100644 lib/librte_port/rte_port_ras.h
 create mode 100644 lib/librte_port/rte_port_ring.c
 create mode 100644 lib/librte_port/rte_port_ring.h
 create mode 100644 lib/librte_port/rte_port_sched.c
 create mode 100644 lib/librte_port/rte_port_sched.h
 create mode 100644 lib/librte_port/rte_port_source_sink.c
 create mode 100644 lib/librte_port/rte_port_source_sink.h
 create mode 100644 lib/librte_table/Makefile
 create mode 100644 lib/librte_table/rte_lru.h
 create mode 100644 lib/librte_table/rte_table.h
 create mode 100644 lib/librte_table/rte_table_acl.c
 create mode 100644 lib/librte_table/rte_table_acl.h
 create mode 100644 lib/librte_table/rte_table_array.c
 create mode 100644 lib/librte_table/rte_table_array.h
 create mode 100644 lib/librte_table/rte_table_hash.h
 create mode 100644 lib/librte_table/rte_table_hash_ext.c
 create mode 100644 lib/librte_table/rte_table_hash_key16.c
 create mode 100644 lib/librte_table/rte_table_hash_key32.c
 create mode 100644 lib/librte_table/rte_table_hash_key8.c
 create mode 100644 lib/librte_table/rte_table_hash_lru.c
 create mode 100644 lib/librte_table/rte_table_lpm.c
 create mode 100644 lib/librte_table/rte_table_lpm.h
 create mode 100644 lib/librte_table/rte_table_lpm_ipv6.c
 create mode 100644 lib/librte_table/rte_table_lpm_ipv6.h
 create mode 100644 lib/librte_table/rte_table_stub.c
 create mode 100644 lib/librte_table/rte_table_stub.h

-- 
1.7.7.6

* [dpdk-dev] [v2 01/23] librte_lpm: rule_is_present
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 02/23] mbuf: meta-data Cristian Dumitrescu
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Added API functions for LPM IPv4 and IPv6 to query for the existence of a rule/route and to return the next hop ID associated with the route if the route is present. This is used by the Packet Framework LPM table for implementing a routing table.
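
For example, a control path could use the new API as follows (a hypothetical sketch; lpm is a previously created LPM object, the route values are made up and the IPv4() address helper comes from rte_ip.h):

#include <stdio.h>
#include <rte_ip.h>
#include <rte_lpm.h>

static void
app_route_query(struct rte_lpm *lpm)
{
	uint8_t next_hop;
	int status;

	/* Is route 192.168.0.0/24 present, and if so, what is its next hop? */
	status = rte_lpm_is_rule_present(lpm, IPv4(192, 168, 0, 0), 24,
		&next_hop);

	if (status == 1)
		printf("Route present, next hop = %u\n", next_hop);
	else if (status == 0)
		printf("Route not present\n");
	else
		printf("Invalid arguments (%d)\n", status);
}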

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_lpm/rte_lpm.c  |   29 +++++++++++++++++++++++++++++
 lib/librte_lpm/rte_lpm.h  |   19 +++++++++++++++++++
 lib/librte_lpm/rte_lpm6.c |   31 +++++++++++++++++++++++++++++++
 lib/librte_lpm/rte_lpm6.h |   19 +++++++++++++++++++
 4 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index e915c24..09572c5 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -619,6 +619,35 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 	return 0;
 }
 
+/*
+ * Look for a rule in the high-level rules table
+ */
+int
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+uint8_t *next_hop)
+{
+	uint32_t ip_masked;
+	int32_t rule_index;
+
+	/* Check user arguments. */
+	if ((lpm == NULL) ||
+		(next_hop == NULL) ||
+		(depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+		return -EINVAL;
+
+	/* Look for the rule using rule_find. */
+	ip_masked = ip & depth_to_mask(depth);
+	rule_index = rule_find(lpm, ip_masked, depth);
+
+	if (rule_index >= 0) {
+		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+		return 1;
+	}
+
+	/* If rule is not found return 0. */
+	return 0;
+}
+
 static inline int32_t
 find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
 {
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 033f542..83c231b 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -214,6 +214,25 @@ int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
 
 /**
+ * Check if a rule is present in the LPM table,
+ * and provide its next hop if it is.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be searched
+ * @param depth
+ *   Depth of the rule to be searched
+ * @param next_hop
+ *   Next hop of the rule (valid only if it is found)
+ * @return
+ *   1 if the rule exists, 0 if it does not, a negative value on failure
+ */
+int
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+uint8_t *next_hop);
+
+/**
  * Delete a rule from the LPM table.
  *
  * @param lpm
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 99a4a58..027c237 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -664,6 +664,37 @@ rule_find(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth)
 }
 
 /*
+ * Look for a rule in the high-level rules table
+ */
+int
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+uint8_t *next_hop)
+{
+	uint8_t ip_masked[RTE_LPM6_IPV6_ADDR_SIZE];
+	int32_t rule_index;
+
+	/* Check user arguments. */
+	if ((lpm == NULL) || next_hop == NULL || ip == NULL ||
+			(depth < 1) || (depth > RTE_LPM6_MAX_DEPTH))
+		return -EINVAL;
+
+	/* Copy the IP and mask it to avoid modifying user's input data. */
+	memcpy(ip_masked, ip, RTE_LPM6_IPV6_ADDR_SIZE);
+	mask_ip(ip_masked, depth);
+
+	/* Look for the rule using rule_find. */
+	rule_index = rule_find(lpm, ip_masked, depth);
+
+	if (rule_index >= 0) {
+		*next_hop = lpm->rules_tbl[rule_index].next_hop;
+		return 1;
+	}
+
+	/* If rule is not found return 0. */
+	return 0;
+}
+
+/*
  * Delete a rule from the rule table.
  * NOTE: Valid range for depth parameter is 1 .. 128 inclusive.
  */
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index 8c1a293..24bcaba 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -125,6 +125,25 @@ rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
 		uint8_t next_hop);
 
 /**
+ * Check if a rule is present in the LPM table,
+ * and provide its next hop if it is.
+ *
+ * @param lpm
+ *   LPM object handle
+ * @param ip
+ *   IP of the rule to be searched
+ * @param depth
+ *   Depth of the rule to be searched
+ * @param next_hop
+ *   Next hop of the rule (valid only if it is found)
+ * @return
+ *   1 if the rule exists, 0 if it does not, a negative value on failure
+ */
+int
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+uint8_t *next_hop);
+
+/**
  * Delete a rule from the LPM table.
  *
  * @param lpm
-- 
1.7.7.6

* [dpdk-dev] [v2 02/23] mbuf: meta-data
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 01/23] librte_lpm: rule_is_present Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 03/23] Packet Framework librte_port: Port API Cristian Dumitrescu
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Added a zero-size field (an offset marker within the data structure) to specify the beginning of the packet meta-data in the packet buffer, immediately after the mbuf.

The size of the packet meta-data is application specific and the packet meta-data is managed by the application.

The packet meta-data should always be accessed through the provided macros.

This is used by the Packet Framework libraries (port, table, pipeline).

There is absolutely no performance impact due to this mbuf field, as it does not take any space in the mbuf structure (zero-size field).
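
For illustration, a hypothetical application could lay out and access its meta-data as follows (the offsets and field layout are application-defined examples; the mbuf mempool element size must leave room for the meta-data after the mbuf structure):

#include <stdint.h>
#include <rte_mbuf.h>

/* Application-defined meta-data layout (example offsets) */
#define APP_METADATA_OFFSET_FLOW_ID    0
#define APP_METADATA_OFFSET_COLOR      4

static inline void
app_pkt_metadata_write(struct rte_mbuf *pkt, uint32_t flow_id, uint8_t color)
{
	RTE_MBUF_METADATA_UINT32(pkt, APP_METADATA_OFFSET_FLOW_ID) = flow_id;
	RTE_MBUF_METADATA_UINT8(pkt, APP_METADATA_OFFSET_COLOR) = color;
}

static inline uint32_t
app_pkt_flow_id_read(struct rte_mbuf *pkt)
{
	return RTE_MBUF_METADATA_UINT32(pkt, APP_METADATA_OFFSET_FLOW_ID);
}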

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h |   25 +++++++++++++++++++++++++
 1 files changed, 25 insertions(+), 0 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a9ab41..a3f9503 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -201,8 +201,33 @@ struct rte_mbuf {
 		struct rte_ctrlmbuf ctrl;
 		struct rte_pktmbuf pkt;
 	};
+
+	union {
+		uint8_t metadata[0];
+		uint16_t metadata16[0];
+		uint32_t metadata32[0];
+		uint64_t metadata64[0];
+	};
 } __rte_cache_aligned;
 
+#define RTE_MBUF_METADATA_UINT8(mbuf, offset)              \
+	(mbuf->metadata[offset])
+#define RTE_MBUF_METADATA_UINT16(mbuf, offset)             \
+	(mbuf->metadata16[offset/sizeof(uint16_t)])
+#define RTE_MBUF_METADATA_UINT32(mbuf, offset)             \
+	(mbuf->metadata32[offset/sizeof(uint32_t)])
+#define RTE_MBUF_METADATA_UINT64(mbuf, offset)             \
+	(mbuf->metadata64[offset/sizeof(uint64_t)])
+
+#define RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset)          \
+	(&mbuf->metadata[offset])
+#define RTE_MBUF_METADATA_UINT16_PTR(mbuf, offset)         \
+	(&mbuf->metadata16[offset/sizeof(uint16_t)])
+#define RTE_MBUF_METADATA_UINT32_PTR(mbuf, offset)         \
+	(&mbuf->metadata32[offset/sizeof(uint32_t)])
+#define RTE_MBUF_METADATA_UINT64_PTR(mbuf, offset)         \
+	(&mbuf->metadata64[offset/sizeof(uint64_t)])
+
 /**
  * Given the buf_addr returns the pointer to corresponding mbuf.
  */
-- 
1.7.7.6

* [dpdk-dev] [v2 03/23] Packet Framework librte_port: Port API
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 01/23] librte_lpm: rule_is_present Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 02/23] mbuf: meta-data Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 04/23] Packet Framework librte_port: ethdev ports Cristian Dumitrescu
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This file defines the port operations that have to be implemented by Packet Framework ports.
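
As a hypothetical illustration of this contract, a do-nothing input port would implement the three operations and export them through struct rte_port_in_ops as shown below (error handling trimmed; a real implementation fills pkts[] in f_rx and returns the number of packets read):

#include <rte_malloc.h>
#include <rte_mbuf.h>
#include "rte_port.h"

static void *
dummy_reader_create(void *params, int socket_id)
{
	/* Port-specific state would be allocated and initialized here */
	(void) params;
	return rte_zmalloc_socket("PORT", 64, CACHE_LINE_SIZE, socket_id);
}

static int
dummy_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
{
	/* This port never produces packets */
	(void) port;
	(void) pkts;
	(void) n_pkts;
	return 0;
}

static int
dummy_reader_free(void *port)
{
	rte_free(port);
	return 0;
}

struct rte_port_in_ops dummy_reader_ops = {
	.f_create = dummy_reader_create,
	.f_free = dummy_reader_free,
	.f_rx = dummy_reader_rx,
};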

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_eal/common/include/rte_log.h |    1 +
 lib/librte_port/rte_port.h              |  190 +++++++++++++++++++++++++++++++
 2 files changed, 191 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_port.h

diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 3d44ded..490dbc9 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -74,6 +74,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_POWER   0x00000400 /**< Log related to power. */
 #define RTE_LOGTYPE_METER   0x00000800 /**< Log related to QoS meter. */
 #define RTE_LOGTYPE_SCHED   0x00001000 /**< Log related to QoS port scheduler. */
+#define RTE_LOGTYPE_PORT    0x00002000 /**< Log related to port. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_port/rte_port.h b/lib/librte_port/rte_port.h
new file mode 100644
index 0000000..0934b00
--- /dev/null
+++ b/lib/librte_port/rte_port.h
@@ -0,0 +1,190 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_H__
+#define __INCLUDE_RTE_PORT_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port
+ *
+ * This tool is part of the Intel DPDK Packet Framework tool suite and provides
+ * a standard interface to implement different types of packet ports.
+ *
+ ***/
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/*
+ * Port IN
+ *
+ */
+/** Maximum number of packets read from any input port in a single burst.
+Cannot be changed. */
+#define RTE_PORT_IN_BURST_SIZE_MAX                         64
+
+/**
+ * Input port create
+ *
+ * @param params
+ *   Parameters for input port creation
+ * @param socket_id
+ *   CPU socket ID (e.g. for memory allocation purpose)
+ * @return
+ *   Handle to input port instance
+ */
+typedef void* (*rte_port_in_op_create)(void *params, int socket_id);
+
+/**
+ * Input port free
+ *
+ * @param port
+ *   Handle to input port instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_port_in_op_free)(void *port);
+
+/**
+ * Input port packet burst RX
+ *
+ * @param port
+ *   Handle to input port instance
+ * @param pkts
+ *   Array of mbuf pointers to be filled in with the packets read
+ * @param n_pkts
+ *   Maximum number of packets to read
+ * @return
+ *   Actual number of packets read
+ */
+typedef int (*rte_port_in_op_rx)(
+	void *port,
+	struct rte_mbuf **pkts,
+	uint32_t n_pkts);
+
+/** Input port interface defining the input port operation */
+struct rte_port_in_ops {
+	rte_port_in_op_create f_create; /**< Create */
+	rte_port_in_op_free f_free;     /**< Free */
+	rte_port_in_op_rx f_rx;         /**< Packet RX (packet burst) */
+};
+
+/*
+ * Port OUT
+ *
+ */
+/**
+ * Output port create
+ *
+ * @param params
+ *   Parameters for output port creation
+ * @param socket_id
+ *   CPU socket ID (e.g. for memory allocation purpose)
+ * @return
+ *   Handle to output port instance
+ */
+typedef void* (*rte_port_out_op_create)(void *params, int socket_id);
+
+/**
+ * Output port free
+ *
+ * @param port
+ *   Handle to output port instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_port_out_op_free)(void *port);
+
+/**
+ * Output port single packet TX
+ *
+ * @param port
+ *   Handle to output port instance
+ * @param pkt
+ *   Input packet
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_port_out_op_tx)(
+	void *port,
+	struct rte_mbuf *pkt);
+
+/**
+ * Output port packet burst TX
+ *
+ * @param port
+ *   Handle to output port instance
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are valid. When
+ *   pkts_mask bit n is set, then element n of pkts array is pointing to a
+ *   valid packet. Otherwise, element n of pkts array will not be accessed.
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_port_out_op_tx_bulk)(
+	void *port,
+	struct rte_mbuf **pkt,
+	uint64_t pkts_mask);
+
+/**
+ * Output port flush
+ *
+ * @param port
+ *   Handle to output port instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_port_out_op_flush)(void *port);
+
+/** Output port interface defining the output port operation */
+struct rte_port_out_ops {
+	rte_port_out_op_create f_create;   /**< Create */
+	rte_port_out_op_free f_free;       /**< Free */
+	rte_port_out_op_tx f_tx;           /**< Packet TX (single packet) */
+	rte_port_out_op_tx_bulk f_tx_bulk; /**< Packet TX (packet burst) */
+	rte_port_out_op_flush f_flush;     /**< Flush */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

* [dpdk-dev] [v2 04/23] Packet Framework librte_port: ethdev ports
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (2 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 03/23] Packet Framework librte_port: Port API Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 05/23] Packet Framework librte_port: ring ports Cristian Dumitrescu
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The input port ethdev_reader implements the Packet Framework port API on top of the Intel DPDK poll mode driver for a NIC RX queue.

The output port ethdev_writer implements the Packet Framework port API on top of the Intel DPDK poll mode driver for a NIC TX queue.
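
A hypothetical standalone usage sketch of the writer port (the NIC port/queue IDs and burst size are example values; device and queue setup are assumed to be done elsewhere):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include "rte_port_ethdev.h"

static void *tx_port; /* created once at initialization time */

static void
app_tx_port_init(void)
{
	struct rte_port_ethdev_writer_params conf = {
		.port_id = 0,      /* NIC TX port (example) */
		.queue_id = 0,     /* NIC TX queue (example) */
		.tx_burst_sz = 32, /* power of 2, <= RTE_PORT_IN_BURST_SIZE_MAX */
	};

	tx_port = rte_port_ethdev_writer_ops.f_create(&conf, rte_socket_id());
}

static void
app_tx(struct rte_mbuf *pkt)
{
	/* Buffered internally; a full burst triggers rte_eth_tx_burst() */
	rte_port_ethdev_writer_ops.f_tx(tx_port, pkt);
}

static void
app_tx_drain(void)
{
	/* Push out any partially filled burst */
	rte_port_ethdev_writer_ops.f_flush(tx_port);
}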

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/rte_port_ethdev.c |  305 +++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_ethdev.h |   86 +++++++++++
 2 files changed, 391 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_port_ethdev.c
 create mode 100644 lib/librte_port/rte_port_ethdev.h

diff --git a/lib/librte_port/rte_port_ethdev.c b/lib/librte_port/rte_port_ethdev.c
new file mode 100644
index 0000000..2d6f279
--- /dev/null
+++ b/lib/librte_port/rte_port_ethdev.c
@@ -0,0 +1,305 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "rte_port_ethdev.h"
+
+/*
+ * Port ETHDEV Reader
+ */
+struct rte_port_ethdev_reader {
+	uint16_t queue_id;
+	uint8_t port_id;
+};
+
+static void *
+rte_port_ethdev_reader_create(void *params, int socket_id)
+{
+	struct rte_port_ethdev_reader_params *conf =
+			(struct rte_port_ethdev_reader_params *) params;
+	struct rte_port_ethdev_reader *port;
+
+	/* Check input parameters */
+	if (conf == NULL) {
+		RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->port_id = conf->port_id;
+	port->queue_id = conf->queue_id;
+
+	return port;
+}
+
+static int
+rte_port_ethdev_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
+{
+	struct rte_port_ethdev_reader *p =
+		(struct rte_port_ethdev_reader *) port;
+
+	return rte_eth_rx_burst(p->port_id, p->queue_id, pkts, n_pkts);
+}
+
+static int
+rte_port_ethdev_reader_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Port ETHDEV Writer
+ */
+#define RTE_PORT_ETHDEV_WRITER_APPROACH                  1
+
+struct rte_port_ethdev_writer {
+	struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
+	uint32_t tx_burst_sz;
+	uint16_t tx_buf_count;
+	uint64_t bsz_mask;
+	uint16_t queue_id;
+	uint8_t port_id;
+};
+
+static void *
+rte_port_ethdev_writer_create(void *params, int socket_id)
+{
+	struct rte_port_ethdev_writer_params *conf =
+			(struct rte_port_ethdev_writer_params *) params;
+	struct rte_port_ethdev_writer *port;
+
+	/* Check input parameters */
+	if ((conf == NULL) ||
+		(conf->tx_burst_sz == 0) ||
+		(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
+		(!rte_is_power_of_2(conf->tx_burst_sz))) {
+		RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->port_id = conf->port_id;
+	port->queue_id = conf->queue_id;
+	port->tx_burst_sz = conf->tx_burst_sz;
+	port->tx_buf_count = 0;
+	port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
+
+	return port;
+}
+
+static inline void
+send_burst(struct rte_port_ethdev_writer *p)
+{
+	uint32_t nb_tx;
+
+	nb_tx = rte_eth_tx_burst(p->port_id, p->queue_id,
+			 p->tx_buf, p->tx_buf_count);
+
+	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
+		rte_pktmbuf_free(p->tx_buf[nb_tx]);
+
+	p->tx_buf_count = 0;
+}
+
+static int
+rte_port_ethdev_writer_tx(void *port, struct rte_mbuf *pkt)
+{
+	struct rte_port_ethdev_writer *p =
+		(struct rte_port_ethdev_writer *) port;
+
+	p->tx_buf[p->tx_buf_count++] = pkt;
+	if (p->tx_buf_count >= p->tx_burst_sz)
+		send_burst(p);
+
+	return 0;
+}
+
+#if RTE_PORT_ETHDEV_WRITER_APPROACH == 0
+
+static int
+rte_port_ethdev_writer_tx_bulk(void *port,
+		struct rte_mbuf **pkts,
+		uint64_t pkts_mask)
+{
+	struct rte_port_ethdev_writer *p =
+		(struct rte_port_ethdev_writer *) port;
+
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = pkts[i];
+
+			p->tx_buf[p->tx_buf_count++] = pkt;
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			p->tx_buf[p->tx_buf_count++] = pkt;
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+			pkts_mask &= ~pkt_mask;
+		}
+	}
+
+	return 0;
+}
+
+#elif RTE_PORT_ETHDEV_WRITER_APPROACH == 1
+
+static int
+rte_port_ethdev_writer_tx_bulk(void *port,
+		struct rte_mbuf **pkts,
+		uint64_t pkts_mask)
+{
+	struct rte_port_ethdev_writer *p =
+		(struct rte_port_ethdev_writer *) port;
+	uint32_t bsz_mask = p->bsz_mask;
+	uint32_t tx_buf_count = p->tx_buf_count;
+	uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
+			((pkts_mask & bsz_mask) ^ bsz_mask);
+
+	if (expr == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t n_pkts_ok;
+
+		if (tx_buf_count)
+			send_burst(p);
+
+		n_pkts_ok = rte_eth_tx_burst(p->port_id, p->queue_id, pkts,
+			n_pkts);
+
+		for ( ; n_pkts_ok < n_pkts; n_pkts_ok++) {
+			struct rte_mbuf *pkt = pkts[n_pkts_ok];
+
+			rte_pktmbuf_free(pkt);
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			p->tx_buf[tx_buf_count++] = pkt;
+			pkts_mask &= ~pkt_mask;
+		}
+
+		p->tx_buf_count = tx_buf_count;
+		if (tx_buf_count >= p->tx_burst_sz)
+			send_burst(p);
+	}
+
+	return 0;
+}
+
+#else
+
+#error Invalid value for RTE_PORT_ETHDEV_WRITER_APPROACH
+
+#endif
+
+static int
+rte_port_ethdev_writer_flush(void *port)
+{
+	struct rte_port_ethdev_writer *p =
+		(struct rte_port_ethdev_writer *) port;
+
+	if (p->tx_buf_count > 0)
+		send_burst(p);
+
+	return 0;
+}
+
+static int
+rte_port_ethdev_writer_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_port_ethdev_writer_flush(port);
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_in_ops rte_port_ethdev_reader_ops = {
+	.f_create = rte_port_ethdev_reader_create,
+	.f_free = rte_port_ethdev_reader_free,
+	.f_rx = rte_port_ethdev_reader_rx,
+};
+
+struct rte_port_out_ops rte_port_ethdev_writer_ops = {
+	.f_create = rte_port_ethdev_writer_create,
+	.f_free = rte_port_ethdev_writer_free,
+	.f_tx = rte_port_ethdev_writer_tx,
+	.f_tx_bulk = rte_port_ethdev_writer_tx_bulk,
+	.f_flush = rte_port_ethdev_writer_flush,
+};
diff --git a/lib/librte_port/rte_port_ethdev.h b/lib/librte_port/rte_port_ethdev.h
new file mode 100644
index 0000000..af67a12
--- /dev/null
+++ b/lib/librte_port/rte_port_ethdev.h
@@ -0,0 +1,86 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_ETHDEV_H__
+#define __INCLUDE_RTE_PORT_ETHDEV_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port Ethernet Device
+ *
+ * ethdev_reader: input port built on top of pre-initialized NIC RX queue
+ * ethdev_writer: output port built on top of pre-initialized NIC TX queue
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_port.h"
+
+/** ethdev_reader port parameters */
+struct rte_port_ethdev_reader_params {
+	/** NIC RX port ID */
+	uint8_t port_id;
+
+	/** NIC RX queue ID */
+	uint16_t queue_id;
+};
+
+/** ethdev_reader port operations */
+extern struct rte_port_in_ops rte_port_ethdev_reader_ops;
+
+/** ethdev_writer port parameters */
+struct rte_port_ethdev_writer_params {
+	/** NIC TX port ID */
+	uint8_t port_id;
+
+	/** NIC TX queue ID */
+	uint16_t queue_id;
+
+	/** Recommended burst size to NIC TX queue. The actual burst size can be
+	bigger or smaller than this value. */
+	uint32_t tx_burst_sz;
+};
+
+/** ethdev_writer port operations */
+extern struct rte_port_out_ops rte_port_ethdev_writer_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

* [dpdk-dev] [v2 05/23] Packet Framework librte_port: ring ports
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (3 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 04/23] Packet Framework librte_port: ethdev ports Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 06/23] Packet Framework librte_port: IPv4 frag port Cristian Dumitrescu
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

ring_reader input port (on top of a pre-initialized single consumer rte_ring)
ring_writer output port (on top of a pre-initialized single producer rte_ring)
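
A hypothetical sketch of wiring two pipeline stages running on different cores through a ring (the ring name, size and burst size are example values):

#include <rte_lcore.h>
#include <rte_ring.h>
#include "rte_port_ring.h"

static void
app_ring_ports_init(void)
{
	/* SP/SC ring shared by one producer core and one consumer core */
	struct rte_ring *r = rte_ring_create("RING01", 1024, rte_socket_id(),
		RING_F_SP_ENQ | RING_F_SC_DEQ);

	struct rte_port_ring_writer_params wconf = {
		.ring = r,
		.tx_burst_sz = 32,
	};
	struct rte_port_ring_reader_params rconf = {
		.ring = r,
	};

	/* Producer-side output port and consumer-side input port */
	void *w = rte_port_ring_writer_ops.f_create(&wconf, rte_socket_id());
	void *rd = rte_port_ring_reader_ops.f_create(&rconf, rte_socket_id());

	(void) w;  /* handed off to the producer core's run loop */
	(void) rd; /* handed off to the consumer core's run loop */
}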

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/rte_port_ring.c |  237 +++++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_ring.h |   82 ++++++++++++++
 2 files changed, 319 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_port_ring.c
 create mode 100644 lib/librte_port/rte_port_ring.h

diff --git a/lib/librte_port/rte_port_ring.c b/lib/librte_port/rte_port_ring.c
new file mode 100644
index 0000000..85bab63
--- /dev/null
+++ b/lib/librte_port/rte_port_ring.c
@@ -0,0 +1,237 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_ring.h>
+#include <rte_malloc.h>
+
+#include "rte_port_ring.h"
+
+/*
+ * Port RING Reader
+ */
+struct rte_port_ring_reader {
+	struct rte_ring *ring;
+};
+
+static void *
+rte_port_ring_reader_create(void *params, int socket_id)
+{
+	struct rte_port_ring_reader_params *conf =
+			(struct rte_port_ring_reader_params *) params;
+	struct rte_port_ring_reader *port;
+
+	/* Check input parameters */
+	if (conf == NULL) {
+		RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->ring = conf->ring;
+
+	return port;
+}
+
+static int
+rte_port_ring_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
+{
+	struct rte_port_ring_reader *p = (struct rte_port_ring_reader *) port;
+
+	return rte_ring_sc_dequeue_burst(p->ring, (void **) pkts, n_pkts);
+}
+
+static int
+rte_port_ring_reader_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Port RING Writer
+ */
+struct rte_port_ring_writer {
+	struct rte_mbuf *tx_buf[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_ring *ring;
+	uint32_t tx_burst_sz;
+	uint32_t tx_buf_count;
+};
+
+static void *
+rte_port_ring_writer_create(void *params, int socket_id)
+{
+	struct rte_port_ring_writer_params *conf =
+			(struct rte_port_ring_writer_params *) params;
+	struct rte_port_ring_writer *port;
+
+	/* Check input parameters */
+	if ((conf == NULL) ||
+	    (conf->ring == NULL) ||
+		(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) {
+		RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->ring = conf->ring;
+	port->tx_burst_sz = conf->tx_burst_sz;
+	port->tx_buf_count = 0;
+
+	return port;
+}
+
+static inline void
+send_burst(struct rte_port_ring_writer *p)
+{
+	uint32_t nb_tx;
+
+	nb_tx = rte_ring_sp_enqueue_burst(p->ring, (void **)p->tx_buf,
+			p->tx_buf_count);
+
+	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
+		rte_pktmbuf_free(p->tx_buf[nb_tx]);
+
+	p->tx_buf_count = 0;
+}
+
+static int
+rte_port_ring_writer_tx(void *port, struct rte_mbuf *pkt)
+{
+	struct rte_port_ring_writer *p = (struct rte_port_ring_writer *) port;
+
+	p->tx_buf[p->tx_buf_count++] = pkt;
+	if (p->tx_buf_count >= p->tx_burst_sz)
+		send_burst(p);
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_tx_bulk(void *port,
+		struct rte_mbuf **pkts,
+		uint64_t pkts_mask)
+{
+	struct rte_port_ring_writer *p = (struct rte_port_ring_writer *) port;
+
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = pkts[i];
+
+			p->tx_buf[p->tx_buf_count++] = pkt;
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			p->tx_buf[p->tx_buf_count++] = pkt;
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+			pkts_mask &= ~pkt_mask;
+		}
+	}
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_flush(void *port)
+{
+	struct rte_port_ring_writer *p = (struct rte_port_ring_writer *) port;
+
+	if (p->tx_buf_count > 0)
+		send_burst(p);
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_port_ring_writer_flush(port);
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_in_ops rte_port_ring_reader_ops = {
+	.f_create = rte_port_ring_reader_create,
+	.f_free = rte_port_ring_reader_free,
+	.f_rx = rte_port_ring_reader_rx,
+};
+
+struct rte_port_out_ops rte_port_ring_writer_ops = {
+	.f_create = rte_port_ring_writer_create,
+	.f_free = rte_port_ring_writer_free,
+	.f_tx = rte_port_ring_writer_tx,
+	.f_tx_bulk = rte_port_ring_writer_tx_bulk,
+	.f_flush = rte_port_ring_writer_flush,
+};
diff --git a/lib/librte_port/rte_port_ring.h b/lib/librte_port/rte_port_ring.h
new file mode 100644
index 0000000..009dcf8
--- /dev/null
+++ b/lib/librte_port/rte_port_ring.h
@@ -0,0 +1,82 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_RING_H__
+#define __INCLUDE_RTE_PORT_RING_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port Ring
+ *
+ * ring_reader: input port built on top of pre-initialized single consumer ring
+ * ring_writer: output port built on top of pre-initialized single producer ring
+ *
+ ***/
+
+#include <stdint.h>
+
+#include <rte_ring.h>
+
+#include "rte_port.h"
+
+/** ring_reader port parameters */
+struct rte_port_ring_reader_params {
+	/** Underlying single consumer ring that has to be pre-initialized */
+	struct rte_ring *ring;
+};
+
+/** ring_reader port operations */
+extern struct rte_port_in_ops rte_port_ring_reader_ops;
+
+/** ring_writer port parameters */
+struct rte_port_ring_writer_params {
+	/** Underlying single producer ring that has to be pre-initialized */
+	struct rte_ring *ring;
+
+	/** Recommended burst size to ring. The actual burst size can be
+		bigger or smaller than this value. */
+	uint32_t tx_burst_sz;
+};
+
+/** ring_writer port operations */
+extern struct rte_port_out_ops rte_port_ring_writer_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

* [dpdk-dev] [v2 06/23] Packet Framework librte_port: IPv4 frag port
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (4 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 05/23] Packet Framework librte_port: ring ports Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 07/23] Packet Framework librte_port: IPv4 reassembly Cristian Dumitrescu
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This port presents the IPv4 fragmentation operation as a Packet Framework port.

The code duplication with the examples/ipv4_frag sample app is to be resolved soon by linking against the relevant library once it is upstreamed.
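
For reference, the underlying routine (rte_ipv4_fragmentation(), in rte_ipv4_frag.h below) can also be called directly; a hypothetical sketch, assuming pkt points at an mbuf whose data starts at the IPv4 header and that both mempools were created beforehand:

#include <rte_mbuf.h>
#include "rte_ipv4_frag.h"

static void
app_fragment_pkt(struct rte_mbuf *pkt, struct rte_mempool *pool_direct,
	struct rte_mempool *pool_indirect)
{
	struct rte_mbuf *frags[IPV4_MAX_FRAGS_PER_PACKET];
	int32_t n;

	n = rte_ipv4_fragmentation(pkt, frags, IPV4_MAX_FRAGS_PER_PACKET,
		IPV4_MTU_DEFAULT, pool_direct, pool_indirect);
	if (n < 0) {
		/* -ENOTSUP (DF bit set), -EINVAL or -ENOMEM */
		rte_pktmbuf_free(pkt);
		return;
	}

	/* frags[0 .. n-1] now hold the output fragments, ready for TX */
}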

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/rte_ipv4_frag.h |  253 +++++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_frag.c |  235 ++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_frag.h |   94 +++++++++++++++
 3 files changed, 582 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_ipv4_frag.h
 create mode 100644 lib/librte_port/rte_port_frag.c
 create mode 100644 lib/librte_port/rte_port_frag.h

diff --git a/lib/librte_port/rte_ipv4_frag.h b/lib/librte_port/rte_ipv4_frag.h
new file mode 100644
index 0000000..0e0a033
--- /dev/null
+++ b/lib/librte_port/rte_ipv4_frag.h
@@ -0,0 +1,253 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_IPV4_FRAG_H__
+#define __INCLUDE_RTE_IPV4_FRAG_H__
+#include <rte_ip.h>
+#include <rte_memcpy.h>
+#include <rte_byteorder.h>
+#include <rte_ether.h>
+
+/**
+ * @file
+ * RTE IPv4 Fragmentation
+ *
+ * Implementation of IPv4 fragmentation.
+ *
+ */
+
+/*
+ * Default byte size for the IPv4 Maximum Transfer Unit (MTU).
+ * This value includes the size of IPv4 header.
+ */
+#define	IPV4_MTU_DEFAULT	ETHER_MTU
+
+/*
+ * Default payload in bytes for the IPv4 packet.
+ */
+#define	IPV4_DEFAULT_PAYLOAD	(IPV4_MTU_DEFAULT - sizeof(struct ipv4_hdr))
+
+/*
+ * MAX number of fragments per packet allowed.
+ */
+#define	IPV4_MAX_FRAGS_PER_PACKET	0x80
+
+
+/* Debug on/off */
+#ifdef RTE_IPV4_FRAG_DEBUG
+
+#define	RTE_IPV4_FRAG_ASSERT(exp)					\
+if (!(exp))	{							\
+	rte_panic("function %s, line %d: assert failed\n",		\
+		__func__, __LINE__);					\
+}
+
+#else /*RTE_IPV4_FRAG_DEBUG*/
+
+#define RTE_IPV4_FRAG_ASSERT(exp)	do { } while (0)
+
+#endif /*RTE_IPV4_FRAG_DEBUG*/
+
+/* Fragment Offset */
+#define	IPV4_HDR_DF_SHIFT			14
+#define	IPV4_HDR_MF_SHIFT			13
+#define	IPV4_HDR_FO_SHIFT			3
+
+#define	IPV4_HDR_DF_MASK			(1 << IPV4_HDR_DF_SHIFT)
+#define	IPV4_HDR_MF_MASK			(1 << IPV4_HDR_MF_SHIFT)
+
+#define	IPV4_HDR_FO_MASK			((1 << IPV4_HDR_FO_SHIFT) - 1)
+
+static inline void __fill_ipv4hdr_frag(struct ipv4_hdr *dst,
+		const struct ipv4_hdr *src, uint16_t len, uint16_t fofs,
+		uint16_t dofs, uint32_t mf)
+{
+	rte_memcpy(dst, src, sizeof(*dst));
+	fofs = (uint16_t)(fofs + (dofs >> IPV4_HDR_FO_SHIFT));
+	fofs = (uint16_t)(fofs | mf << IPV4_HDR_MF_SHIFT);
+	dst->fragment_offset = rte_cpu_to_be_16(fofs);
+	dst->total_length = rte_cpu_to_be_16(len);
+	dst->hdr_checksum = 0;
+}
+
+static inline void __free_fragments(struct rte_mbuf *mb[], uint32_t num)
+{
+	uint32_t i;
+	for (i = 0; i != num; i++)
+		rte_pktmbuf_free(mb[i]);
+}
+
+/**
+ * IPv4 fragmentation.
+ *
+ * This function implements the fragmentation of IPv4 packets.
+ *
+ * @param pkt_in
+ *   The input packet.
+ * @param pkts_out
+ *   Array storing the output fragments.
+ * @param mtu_size
+ *   Size in bytes of the Maximum Transfer Unit (MTU) for the outgoing IPv4
+ *   datagrams. This value includes the size of the IPv4 header.
+ * @param pool_direct
+ *   MBUF pool used for allocating direct buffers for the output fragments.
+ * @param pool_indirect
+ *   MBUF pool used for allocating indirect buffers for the output fragments.
+ * @return
+ *   Upon successful completion - number of output fragments placed
+ *   in the pkts_out array.
+ *   Otherwise - (-1) * <errno>.
+ */
+static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
+	struct rte_mbuf **pkts_out,
+	uint16_t nb_pkts_out,
+	uint16_t mtu_size,
+	struct rte_mempool *pool_direct,
+	struct rte_mempool *pool_indirect)
+{
+	struct rte_mbuf *in_seg = NULL;
+	struct ipv4_hdr *in_hdr;
+	uint32_t out_pkt_pos, in_seg_data_pos;
+	uint32_t more_in_segs;
+	uint16_t fragment_offset, flag_offset, frag_size;
+
+	frag_size = (uint16_t)(mtu_size - sizeof(struct ipv4_hdr));
+
+	/* Fragment size should be a multiple of 8. */
+	RTE_IPV4_FRAG_ASSERT((frag_size & IPV4_HDR_FO_MASK) == 0);
+
+	/* Packet must fit within the maximum allowed number of fragments. */
+	RTE_IPV4_FRAG_ASSERT(IPV4_MAX_FRAGS_PER_PACKET * frag_size >=
+	    (uint16_t)(pkt_in->pkt.pkt_len - sizeof(struct ipv4_hdr)));
+
+	in_hdr = (struct ipv4_hdr *) pkt_in->pkt.data;
+	flag_offset = rte_cpu_to_be_16(in_hdr->fragment_offset);
+
+	/* If Don't Fragment flag is set */
+	if (unlikely((flag_offset & IPV4_HDR_DF_MASK) != 0))
+		return (-ENOTSUP);
+
+	/* Check that pkts_out is big enough to hold all fragments */
+	if (unlikely(frag_size * nb_pkts_out <
+	    ((uint16_t)(pkt_in->pkt.pkt_len - sizeof(struct ipv4_hdr)))))
+		return (-EINVAL);
+
+	in_seg = pkt_in;
+	in_seg_data_pos = sizeof(struct ipv4_hdr);
+	out_pkt_pos = 0;
+	fragment_offset = 0;
+
+	more_in_segs = 1;
+	while (likely(more_in_segs)) {
+		struct rte_mbuf *out_pkt = NULL, *out_seg_prev = NULL;
+		uint32_t more_out_segs;
+		struct ipv4_hdr *out_hdr;
+
+		/* Allocate direct buffer */
+		out_pkt = rte_pktmbuf_alloc(pool_direct);
+		if (unlikely(out_pkt == NULL)) {
+			__free_fragments(pkts_out, out_pkt_pos);
+			return (-ENOMEM);
+		}
+
+		/* Reserve space for the IP header that will be built later */
+		out_pkt->pkt.data_len = sizeof(struct ipv4_hdr);
+		out_pkt->pkt.pkt_len = sizeof(struct ipv4_hdr);
+
+		out_seg_prev = out_pkt;
+		more_out_segs = 1;
+		while (likely(more_out_segs && more_in_segs)) {
+			struct rte_mbuf *out_seg = NULL;
+			uint32_t len;
+
+			/* Allocate indirect buffer */
+			out_seg = rte_pktmbuf_alloc(pool_indirect);
+			if (unlikely(out_seg == NULL)) {
+				rte_pktmbuf_free(out_pkt);
+				__free_fragments(pkts_out, out_pkt_pos);
+				return (-ENOMEM);
+			}
+			out_seg_prev->pkt.next = out_seg;
+			out_seg_prev = out_seg;
+
+			/* Prepare indirect buffer */
+			rte_pktmbuf_attach(out_seg, in_seg);
+			len = mtu_size - out_pkt->pkt.pkt_len;
+			if (len > (in_seg->pkt.data_len - in_seg_data_pos))
+				len = in_seg->pkt.data_len - in_seg_data_pos;
+
+			out_seg->pkt.data = (char *) in_seg->pkt.data +
+				(uint16_t)in_seg_data_pos;
+			out_seg->pkt.data_len = (uint16_t)len;
+			out_pkt->pkt.pkt_len = (uint16_t)(len +
+			    out_pkt->pkt.pkt_len);
+			out_pkt->pkt.nb_segs += 1;
+			in_seg_data_pos += len;
+
+			/* Current output packet (i.e. fragment) done? */
+			if (unlikely(out_pkt->pkt.pkt_len >= mtu_size))
+				more_out_segs = 0;
+
+			/* Current input segment done? */
+			if (unlikely(in_seg_data_pos == in_seg->pkt.data_len)) {
+				in_seg = in_seg->pkt.next;
+				in_seg_data_pos = 0;
+
+				if (unlikely(in_seg == NULL))
+					more_in_segs = 0;
+			}
+		}
+
+		/* Build the IP header */
+
+		out_hdr = (struct ipv4_hdr *) out_pkt->pkt.data;
+
+		__fill_ipv4hdr_frag(out_hdr, in_hdr,
+		    (uint16_t)out_pkt->pkt.pkt_len,
+		    flag_offset, fragment_offset, more_in_segs);
+
+		fragment_offset = (uint16_t)(fragment_offset +
+		    out_pkt->pkt.pkt_len - sizeof(struct ipv4_hdr));
+
+		out_pkt->ol_flags |= PKT_TX_IP_CKSUM;
+		out_pkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+
+		/* Write the fragment to the output list */
+		pkts_out[out_pkt_pos] = out_pkt;
+		out_pkt_pos++;
+	}
+
+	return out_pkt_pos;
+}
+
+#endif
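
For reference, a minimal usage sketch of rte_ipv4_fragmentation() follows; the jumbo_pkt mbuf, the direct_pool/indirect_pool mempools and the send_fragment() helper are hypothetical stand-ins for application setup:

	struct rte_mbuf *frags[IPV4_MAX_FRAGS_PER_PACKET];
	int32_t n, i;

	/* Split one jumbo IPv4 packet into MTU-sized fragments. */
	n = rte_ipv4_fragmentation(jumbo_pkt, frags,
		IPV4_MAX_FRAGS_PER_PACKET, IPV4_MTU_DEFAULT,
		direct_pool, indirect_pool);
	if (n < 0) {
		/* -ENOTSUP: DF bit set; -EINVAL: output array too small;
		 * -ENOMEM: mbuf allocation failed. */
		rte_pktmbuf_free(jumbo_pkt);
	} else {
		for (i = 0; i < n; i++)
			send_fragment(frags[i]);  /* hypothetical TX path */
		/* Fragments reference the input data through indirect mbufs,
		 * so the input packet can now be freed. */
		rte_pktmbuf_free(jumbo_pkt);
	}
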
diff --git a/lib/librte_port/rte_port_frag.c b/lib/librte_port/rte_port_frag.c
new file mode 100644
index 0000000..d88c3ab
--- /dev/null
+++ b/lib/librte_port/rte_port_frag.c
@@ -0,0 +1,235 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_ring.h>
+#include <rte_malloc.h>
+
+#include "rte_port_frag.h"
+#include "rte_ipv4_frag.h"
+
+struct rte_port_ring_reader_ipv4_frag {
+	/* Input parameters */
+	struct rte_ring *ring;
+	uint32_t mtu;
+	uint32_t metadata_size;
+	struct rte_mempool *pool_direct;
+	struct rte_mempool *pool_indirect;
+
+	/* Internal buffers */
+	struct rte_mbuf *pkts[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_mbuf *frags[IPV4_MAX_FRAGS_PER_PACKET];
+	uint32_t n_pkts;
+	uint32_t pos_pkts;
+	uint32_t n_frags;
+	uint32_t pos_frags;
+} __rte_cache_aligned;
+
+static void *
+rte_port_ring_reader_ipv4_frag_create(void *params, int socket_id)
+{
+	struct rte_port_ring_reader_ipv4_frag_params *conf =
+			(struct rte_port_ring_reader_ipv4_frag_params *) params;
+	struct rte_port_ring_reader_ipv4_frag *port;
+
+	/* Check input parameters */
+	if (conf == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__);
+		return NULL;
+	}
+	if (conf->ring == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__);
+		return NULL;
+	}
+	if (conf->mtu == 0) {
+		RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__);
+		return NULL;
+	}
+	if (conf->pool_direct == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n",
+			__func__);
+		return NULL;
+	}
+	if (conf->pool_indirect == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n",
+			__func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port), CACHE_LINE_SIZE,
+		socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->ring = conf->ring;
+	port->mtu = conf->mtu;
+	port->metadata_size = conf->metadata_size;
+	port->pool_direct = conf->pool_direct;
+	port->pool_indirect = conf->pool_indirect;
+
+	port->n_pkts = 0;
+	port->pos_pkts = 0;
+	port->n_frags = 0;
+	port->pos_frags = 0;
+
+	return port;
+}
+
+static int
+rte_port_ring_reader_ipv4_frag_rx(void *port,
+		struct rte_mbuf **pkts,
+		uint32_t n_pkts)
+{
+	struct rte_port_ring_reader_ipv4_frag *p =
+			(struct rte_port_ring_reader_ipv4_frag *) port;
+	uint32_t n_pkts_out;
+
+	n_pkts_out = 0;
+
+	/* Get packets from the "frag" buffer */
+	if (p->n_frags >= n_pkts) {
+		memcpy(pkts, &p->frags[p->pos_frags], n_pkts * sizeof(void *));
+		p->pos_frags += n_pkts;
+		p->n_frags -= n_pkts;
+
+		return n_pkts;
+	}
+
+	memcpy(pkts, &p->frags[p->pos_frags], p->n_frags * sizeof(void *));
+	n_pkts_out = p->n_frags;
+	p->n_frags = 0;
+
+	/* Look into the "pkts" buffer to get more packets */
+	for ( ; ; ) {
+		struct rte_mbuf *pkt;
+		uint32_t n_pkts_to_provide, i;
+		int status;
+
+		/* If "pkts" buffer is empty, read packet burst from ring */
+		if (p->n_pkts == 0) {
+			p->n_pkts = rte_ring_sc_dequeue_burst(p->ring,
+				(void **) p->pkts, RTE_PORT_IN_BURST_SIZE_MAX);
+			if (p->n_pkts == 0)
+				return n_pkts_out;
+			p->pos_pkts = 0;
+		}
+
+		/* Read next packet from "pkts" buffer */
+		pkt = p->pkts[p->pos_pkts++];
+		p->n_pkts--;
+
+		/* If not jumbo, pass current packet straight to output */
+		if (pkt->pkt.pkt_len <= p->mtu) {
+			pkts[n_pkts_out++] = pkt;
+
+			n_pkts_to_provide = n_pkts - n_pkts_out;
+			if (n_pkts_to_provide == 0)
+				return n_pkts;
+
+			continue;
+		}
+
+		/* Fragment current packet into the "frags" buffer */
+		status = rte_ipv4_fragmentation(
+			pkt,
+			p->frags,
+			IPV4_MAX_FRAGS_PER_PACKET,
+			p->mtu,
+			p->pool_direct,
+			p->pool_indirect
+		);
+
+		if (status < 0) {
+			rte_pktmbuf_free(pkt);
+			continue;
+		}
+
+		p->n_frags = (uint32_t) status;
+		p->pos_frags = 0;
+
+		/* Copy meta-data from input jumbo packet to its fragments */
+		for (i = 0; i < p->n_frags; i++) {
+			uint8_t *src = RTE_MBUF_METADATA_UINT8_PTR(pkt, 0);
+			uint8_t *dst =
+				RTE_MBUF_METADATA_UINT8_PTR(p->frags[i], 0);
+
+			memcpy(dst, src, p->metadata_size);
+		}
+
+		/* Free input jumbo packet */
+		rte_pktmbuf_free(pkt);
+
+		/* Get packets from "frag" buffer */
+		n_pkts_to_provide = n_pkts - n_pkts_out;
+		if (p->n_frags >= n_pkts_to_provide) {
+			memcpy(&pkts[n_pkts_out], p->frags,
+				n_pkts_to_provide * sizeof(void *));
+			p->n_frags -= n_pkts_to_provide;
+			p->pos_frags += n_pkts_to_provide;
+
+			return n_pkts;
+		}
+
+		memcpy(&pkts[n_pkts_out], p->frags,
+			p->n_frags * sizeof(void *));
+		n_pkts_out += p->n_frags;
+		p->n_frags = 0;
+	}
+}
+
+static int
+rte_port_ring_reader_ipv4_frag_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__);
+		return -1;
+	}
+
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops = {
+	.f_create = rte_port_ring_reader_ipv4_frag_create,
+	.f_free = rte_port_ring_reader_ipv4_frag_free,
+	.f_rx = rte_port_ring_reader_ipv4_frag_rx,
+};
diff --git a/lib/librte_port/rte_port_frag.h b/lib/librte_port/rte_port_frag.h
new file mode 100644
index 0000000..dfd70c0
--- /dev/null
+++ b/lib/librte_port/rte_port_frag.h
@@ -0,0 +1,94 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_IP_FRAG_H__
+#define __INCLUDE_RTE_PORT_IP_FRAG_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port for IPv4 Fragmentation
+ *
+ * This port is built on top of pre-initialized single consumer rte_ring. In
+ * order to minimize the amount of packets stored in the ring at any given
+ * time, the IP fragmentation functionality is executed on ring read operation,
+ * hence this port is implemented as an input port. A regular ring_writer port
+ * can be created to write to the same ring.
+ *
+ * The packets written to the ring are either complete IP datagrams or jumbo
+ * frames (i.e. IP packets with length greater than the provided MTU value).
+ * The packets read from the ring are all non-jumbo frames. The complete IP
+ * datagrams written to the ring are not changed. The jumbo frames are
+ * fragmented into several IP packets with length less than or equal to the
+ * MTU.
+ *
+ */
+
+#include <stdint.h>
+
+#include <rte_ring.h>
+
+#include "rte_port.h"
+
+/** ring_reader_ipv4_frag port parameters */
+struct rte_port_ring_reader_ipv4_frag_params {
+	/** Underlying single consumer ring that has to be pre-initialized. */
+	struct rte_ring *ring;
+
+	/** Maximum Transmission Unit (MTU). Maximum IP packet size (in bytes). */
+	uint32_t mtu;
+
+	/** Size of the application-dependent meta-data stored with each input
+	    packet, which has to be copied to each of the fragments originating
+	    from the same input IP datagram. */
+	uint32_t metadata_size;
+
+	/** Pre-initialized buffer pool used for allocating direct buffers for
+	    the output fragments. */
+	struct rte_mempool *pool_direct;
+
+	/** Pre-initialized buffer pool used for allocating indirect buffers for
+	    the output fragments. */
+	struct rte_mempool *pool_indirect;
+};
+
+/** ring_reader_ipv4_frag port operations */
+extern struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
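
As a usage sketch (the app_ring, the mempools and the metadata size below are assumptions standing in for real application configuration):

	struct rte_port_ring_reader_ipv4_frag_params params = {
		.ring = app_ring,             /* pre-initialized SC ring */
		.mtu = 1500,
		.metadata_size = 32,
		.pool_direct = direct_pool,
		.pool_indirect = indirect_pool,
	};
	struct rte_mbuf *pkts[RTE_PORT_IN_BURST_SIZE_MAX];
	void *port;
	int n_rx;

	port = rte_port_ring_reader_ipv4_frag_ops.f_create(&params, 0);
	if (port == NULL)
		rte_panic("cannot create IPv4 frag port\n");

	/* Every mbuf returned here carries at most MTU bytes of IP packet. */
	n_rx = rte_port_ring_reader_ipv4_frag_ops.f_rx(port, pkts,
		RTE_PORT_IN_BURST_SIZE_MAX);
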
-- 
1.7.7.6

* [dpdk-dev] [v2 07/23] Packet Framework librte_port: IPv4 reassembly
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (5 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 06/23] Packet Framework librte_port: IPv4 frag port Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 08/23] Packet Framework librte_port: hierarchical scheduler port Cristian Dumitrescu
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The IPv4 reassembly operation is presented as a Packet Framework port.

The code duplication with the examples/ip_reassembly sample application is to be addressed soon by linking against the relevant library, once it is upstreamed.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/ipv4_frag_tbl.h |  403 ++++++++++++++++++++++++++++++++++++
 lib/librte_port/ipv4_rsmbl.h    |  429 +++++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_ras.c  |  256 +++++++++++++++++++++++
 lib/librte_port/rte_port_ras.h  |   83 ++++++++
 4 files changed, 1171 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/ipv4_frag_tbl.h
 create mode 100644 lib/librte_port/ipv4_rsmbl.h
 create mode 100644 lib/librte_port/rte_port_ras.c
 create mode 100644 lib/librte_port/rte_port_ras.h

diff --git a/lib/librte_port/ipv4_frag_tbl.h b/lib/librte_port/ipv4_frag_tbl.h
new file mode 100644
index 0000000..c44863b
--- /dev/null
+++ b/lib/librte_port/ipv4_frag_tbl.h
@@ -0,0 +1,403 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _IPV4_FRAG_TBL_H_
+#define _IPV4_FRAG_TBL_H_
+
+/**
+ * @file
+ * IPv4 fragments table.
+ *
+ * Implementation of IPv4 fragment table create/destroy/find/update.
+ *
+ */
+
+/*
+ * The ipv4_frag_tbl is a simple hash table:
+ * The basic idea is to use two hash functions and <bucket_entries>
+ * associativity. This provides 2 * <bucket_entries> possible locations in
+ * the hash table for each key. Sort of simplified Cuckoo hashing:
+ * when a collision occurs and all 2 * <bucket_entries> locations are
+ * occupied, instead of re-inserting existing keys into alternative
+ * locations, we just return a failure.
+ * Another thing is timing: entries that reside in the table longer than
+ * <max_cycles> are considered invalid, and can be removed/replaced by
+ * new ones.
+ * Each <key, data> pair is stored together; all add/update/lookup operations
+ * are not MT safe.
+ */
+
+#include <rte_jhash.h>
+#ifdef RTE_MACHINE_CPUFLAG_SSE4_2
+#include <rte_hash_crc.h>
+#endif /* RTE_MACHINE_CPUFLAG_SSE4_2 */
+
+#define	PRIME_VALUE	0xeaad8405
+
+TAILQ_HEAD(ipv4_pkt_list, ipv4_frag_pkt);
+
+struct ipv4_frag_tbl_stat {
+	uint64_t find_num;      /* total # of find/insert attempts. */
+	uint64_t add_num;       /* # of add ops. */
+	uint64_t del_num;       /* # of del ops. */
+	uint64_t reuse_num;     /* # of reuse (del/add) ops. */
+	uint64_t fail_total;    /* total # of add failures. */
+	uint64_t fail_nospace;  /* # of 'no space' add failures. */
+} __rte_cache_aligned;
+
+struct ipv4_frag_tbl {
+	uint64_t             max_cycles;      /* ttl for table entries. */
+	uint32_t             entry_mask;      /* hash value mask. */
+	uint32_t             max_entries;     /* max entries allowed. */
+	uint32_t             use_entries;     /* entries in use. */
+	uint32_t             bucket_entries;  /* hash associativity. */
+	uint32_t             nb_entries;      /* total size of the table. */
+	uint32_t             nb_buckets;      /* num of associativity lines. */
+	struct ipv4_frag_pkt *last;           /* last used entry. */
+	struct ipv4_pkt_list lru;             /* LRU list for table entries. */
+	struct ipv4_frag_tbl_stat stat;       /* statistics counters. */
+	struct ipv4_frag_pkt pkt[0];          /* hash table. */
+};
+
+#define	IPV4_FRAG_TBL_POS(tbl, sig)	\
+	((tbl)->pkt + ((sig) & (tbl)->entry_mask))
+
+#define	IPV4_FRAG_HASH_FNUM	2
+
+#ifdef IPV4_FRAG_TBL_STAT
+#define	IPV4_FRAG_TBL_STAT_UPDATE(s, f, v)	((s)->f += (v))
+#else
+#define	IPV4_FRAG_TBL_STAT_UPDATE(s, f, v)	do {} while (0)
+#endif /* IPV4_FRAG_TBL_STAT */
+
+static inline void
+ipv4_frag_hash(const struct ipv4_frag_key *key, uint32_t *v1, uint32_t *v2)
+{
+	uint32_t v;
+	const uint32_t *p;
+
+	p = (const uint32_t *)&key->src_dst;
+
+#ifdef RTE_MACHINE_CPUFLAG_SSE4_2
+	v = rte_hash_crc_4byte(p[0], PRIME_VALUE);
+	v = rte_hash_crc_4byte(p[1], v);
+	v = rte_hash_crc_4byte(key->id, v);
+#else
+
+	v = rte_jhash_3words(p[0], p[1], key->id, PRIME_VALUE);
+#endif /* RTE_MACHINE_CPUFLAG_SSE4_2 */
+
+	*v1 =  v;
+	*v2 = (v << 7) + (v >> 14);
+}
+
+/*
+ * Update the table after we finish processing its entry.
+ */
+static inline void
+ipv4_frag_inuse(struct ipv4_frag_tbl *tbl, const struct  ipv4_frag_pkt *fp)
+{
+	if (IPV4_FRAG_KEY_EMPTY(&fp->key)) {
+		TAILQ_REMOVE(&tbl->lru, fp, lru);
+		tbl->use_entries--;
+	}
+}
+
+/*
+ * For the given key, try to find an existing entry.
+ * If no such entry exists, return a free and/or timed-out entry
+ * that can be used for that key.
+ */
+static inline struct  ipv4_frag_pkt *
+ipv4_frag_lookup(struct ipv4_frag_tbl *tbl,
+	const struct ipv4_frag_key *key, uint64_t tms,
+	struct ipv4_frag_pkt **free, struct ipv4_frag_pkt **stale)
+{
+	struct ipv4_frag_pkt *p1, *p2;
+	struct ipv4_frag_pkt *empty, *old;
+	uint64_t max_cycles;
+	uint32_t i, assoc, sig1, sig2;
+
+	empty = NULL;
+	old = NULL;
+
+	max_cycles = tbl->max_cycles;
+	assoc = tbl->bucket_entries;
+
+	if (tbl->last != NULL && IPV4_FRAG_KEY_CMP(&tbl->last->key, key) == 0)
+		return tbl->last;
+
+	ipv4_frag_hash(key, &sig1, &sig2);
+	p1 = IPV4_FRAG_TBL_POS(tbl, sig1);
+	p2 = IPV4_FRAG_TBL_POS(tbl, sig2);
+
+	for (i = 0; i != assoc; i++) {
+		IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
+			"tbl: %p, max_entries: %u, use_entries: %u\n"
+			"ipv4_frag_pkt line0: %p, index: %u from %u\n"
+			"key: <%" PRIx64 ", %#x>, start: %" PRIu64 "\n",
+			__func__, __LINE__,
+			tbl, tbl->max_entries, tbl->use_entries,
+			p1, i, assoc,
+			p1[i].key.src_dst, p1[i].key.id, p1[i].start);
+
+		if (IPV4_FRAG_KEY_CMP(&p1[i].key, key) == 0)
+			return (p1 + i);
+		else if (IPV4_FRAG_KEY_EMPTY(&p1[i].key))
+			empty = (empty == NULL) ? (p1 + i) : empty;
+		else if (max_cycles + p1[i].start < tms)
+			old = (old == NULL) ? (p1 + i) : old;
+
+		IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
+			"tbl: %p, max_entries: %u, use_entries: %u\n"
+			"ipv4_frag_pkt line1: %p, index: %u from %u\n"
+			"key: <%" PRIx64 ", %#x>, start: %" PRIu64 "\n",
+			__func__, __LINE__,
+			tbl, tbl->max_entries, tbl->use_entries,
+			p2, i, assoc,
+			p2[i].key.src_dst, p2[i].key.id, p2[i].start);
+
+		if (IPV4_FRAG_KEY_CMP(&p2[i].key, key) == 0)
+			return (p2 + i);
+		else if (IPV4_FRAG_KEY_EMPTY(&p2[i].key))
+			empty = (empty == NULL) ? (p2 + i) : empty;
+		else if (max_cycles + p2[i].start < tms)
+			old = (old == NULL) ? (p2 + i) : old;
+	}
+
+	*free = empty;
+	*stale = old;
+	return NULL;
+}
+
+static inline void
+ipv4_frag_tbl_del(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
+	struct ipv4_frag_pkt *fp)
+{
+	ipv4_frag_free(fp, dr);
+	IPV4_FRAG_KEY_INVALIDATE(&fp->key);
+	TAILQ_REMOVE(&tbl->lru, fp, lru);
+	tbl->use_entries--;
+	IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat, del_num, 1);
+}
+
+static inline void
+ipv4_frag_tbl_add(struct ipv4_frag_tbl *tbl,  struct ipv4_frag_pkt *fp,
+	const struct ipv4_frag_key *key, uint64_t tms)
+{
+	fp->key = key[0];
+	ipv4_frag_reset(fp, tms);
+	TAILQ_INSERT_TAIL(&tbl->lru, fp, lru);
+	tbl->use_entries++;
+	IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat, add_num, 1);
+}
+
+static inline void
+ipv4_frag_tbl_reuse(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
+	struct ipv4_frag_pkt *fp, uint64_t tms)
+{
+	ipv4_frag_free(fp, dr);
+	ipv4_frag_reset(fp, tms);
+	TAILQ_REMOVE(&tbl->lru, fp, lru);
+	TAILQ_INSERT_TAIL(&tbl->lru, fp, lru);
+	IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat, reuse_num, 1);
+}
+
+/*
+ * Find an entry in the table for the corresponding fragment.
+ * If such entry is not present, then allocate a new one.
+ * If the entry is stale, then free and reuse it.
+ */
+static inline struct ipv4_frag_pkt *
+ipv4_frag_find(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
+	const struct ipv4_frag_key *key, uint64_t tms)
+{
+	struct ipv4_frag_pkt *pkt, *free, *stale, *lru;
+	uint64_t max_cycles;
+
+	/*
+	 * Actually the two lines below are redundant;
+	 * they are here just to make gcc 4.6 happy.
+	 */
+	free = NULL;
+	stale = NULL;
+	max_cycles = tbl->max_cycles;
+
+	IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat, find_num, 1);
+
+	pkt = ipv4_frag_lookup(tbl, key, tms, &free, &stale);
+	if (pkt == NULL) {
+
+		/* timed-out entry, free and invalidate it */
+		if (stale != NULL) {
+			ipv4_frag_tbl_del(tbl, dr, stale);
+			free = stale;
+
+		/*
+		 * We found a free entry, but the table is already at its
+		 * maximum occupancy: delete the oldest (LRU) entry if it
+		 * has timed out, otherwise fail for lack of space.
+		 */
+		} else if (free != NULL &&
+				tbl->max_entries <= tbl->use_entries) {
+			lru = TAILQ_FIRST(&tbl->lru);
+			if (max_cycles + lru->start < tms) {
+				ipv4_frag_tbl_del(tbl, dr, lru);
+			} else {
+				free = NULL;
+				IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat,
+					fail_nospace, 1);
+			}
+		}
+
+		/* found a free entry to reuse. */
+		if (free != NULL) {
+			ipv4_frag_tbl_add(tbl,  free, key, tms);
+			pkt = free;
+		}
+
+	/*
+	 * we found the flow, but it is already timed out,
+	 * so free associated resources, reposition it in the LRU list,
+	 * and reuse it.
+	 */
+	} else if (max_cycles + pkt->start < tms) {
+		ipv4_frag_tbl_reuse(tbl, dr, pkt, tms);
+	}
+
+	IPV4_FRAG_TBL_STAT_UPDATE(&tbl->stat, fail_total, (pkt == NULL));
+
+	tbl->last = pkt;
+	return pkt;
+}
+
+/*
+ * Create a new IPv4 fragment table.
+ * @param bucket_num
+ *  Number of buckets in the hash table.
+ * @param bucket_entries
+ *  Number of entries per bucket (i.e. hash associativity).
+ *  Should be a power of two.
+ * @param max_entries
+ *   Maximum number of entries that can be stored in the table.
+ *   The value should be less than or equal to bucket_num * bucket_entries.
+ * @param max_cycles
+ *   Maximum TTL in cycles for each fragmented packet.
+ * @param socket_id
+ *  The *socket_id* argument is the socket identifier in the case of
+ *  NUMA. The value can be *SOCKET_ID_ANY* if there are no NUMA constraints.
+ * @return
+ *   A pointer to the newly allocated fragment table on success, NULL on
+ *   error.
+ */
+static struct ipv4_frag_tbl *
+ipv4_frag_tbl_create(uint32_t bucket_num, uint32_t bucket_entries,
+	uint32_t max_entries, uint64_t max_cycles, int socket_id)
+{
+	struct ipv4_frag_tbl *tbl;
+	size_t sz;
+	uint64_t nb_entries;
+
+	nb_entries = rte_align32pow2(bucket_num);
+	nb_entries *= bucket_entries;
+	nb_entries *= IPV4_FRAG_HASH_FNUM;
+
+	/* check input parameters. */
+	if (rte_is_power_of_2(bucket_entries) == 0 ||
+			nb_entries > UINT32_MAX || nb_entries == 0 ||
+			nb_entries < max_entries) {
+		RTE_LOG(ERR, USER1, "%s: invalid input parameter\n", __func__);
+		return NULL;
+	}
+
+	sz = sizeof(*tbl) + nb_entries * sizeof(tbl->pkt[0]);
+	tbl = rte_zmalloc_socket(__func__, sz, CACHE_LINE_SIZE, socket_id);
+	if (tbl == NULL) {
+		RTE_LOG(ERR, USER1,
+			"%s: allocation of %zu bytes at socket %d failed do\n",
+			__func__, sz, socket_id);
+		return NULL;
+	}
+
+	RTE_LOG(INFO, USER1, "%s: allocated %zu bytes at socket %d\n",
+		__func__, sz, socket_id);
+
+	tbl->max_cycles = max_cycles;
+	tbl->max_entries = max_entries;
+	tbl->nb_entries = (uint32_t)nb_entries;
+	tbl->nb_buckets = bucket_num;
+	tbl->bucket_entries = bucket_entries;
+	tbl->entry_mask = (tbl->nb_entries - 1) & ~(tbl->bucket_entries - 1);
+
+	TAILQ_INIT(&(tbl->lru));
+	return tbl;
+}
+
+static inline void
+ipv4_frag_tbl_destroy(struct ipv4_frag_tbl *tbl)
+{
+	rte_free(tbl);
+}
+
+#if 0
+
+static void
+ipv4_frag_tbl_dump_stat(FILE *f, const struct ipv4_frag_tbl *tbl)
+{
+	uint64_t fail_total, fail_nospace;
+
+	fail_total = tbl->stat.fail_total;
+	fail_nospace = tbl->stat.fail_nospace;
+
+	fprintf(f, "max entries:\t%u;\n"
+		"entries in use:\t%u;\n"
+		"finds/inserts:\t%" PRIu64 ";\n"
+		"entries added:\t%" PRIu64 ";\n"
+		"entries deleted by timeout:\t%" PRIu64 ";\n"
+		"entries reused by timeout:\t%" PRIu64 ";\n"
+		"total add failures:\t%" PRIu64 ";\n"
+		"add no-space failures:\t%" PRIu64 ";\n"
+		"add hash-collisions failures:\t%" PRIu64 ";\n",
+		tbl->max_entries,
+		tbl->use_entries,
+		tbl->stat.find_num,
+		tbl->stat.add_num,
+		tbl->stat.del_num,
+		tbl->stat.reuse_num,
+		fail_total,
+		fail_nospace,
+		fail_total - fail_nospace);
+}
+
+#endif
+
+#endif /* _IPV4_FRAG_TBL_H_ */
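
For illustration, a hedged sketch of table creation with 4K buckets, 8-way associativity and a roughly 10-second entry TTL (the numbers are arbitrary example values):

	struct ipv4_frag_tbl *tbl;
	uint64_t ttl_cycles = rte_get_tsc_hz() * 10;  /* ~10 s in TSC cycles */

	/* max_entries = bucket_num * bucket_entries, the upper bound allowed;
	 * entries older than ttl_cycles become eligible for reuse. */
	tbl = ipv4_frag_tbl_create(4096, 8, 4096 * 8, ttl_cycles,
		SOCKET_ID_ANY);
	if (tbl == NULL)
		rte_panic("cannot create IPv4 fragment table\n");
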
diff --git a/lib/librte_port/ipv4_rsmbl.h b/lib/librte_port/ipv4_rsmbl.h
new file mode 100644
index 0000000..f6cf963
--- /dev/null
+++ b/lib/librte_port/ipv4_rsmbl.h
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _IPV4_RSMBL_H_
+#define _IPV4_RSMBL_H_
+
+#include <rte_byteorder.h>
+
+/**
+ * @file
+ * IPv4 reassembly
+ *
+ * Implementation of IPv4 reassembly.
+ *
+ */
+
+#define MAX_PKT_BURST                   64
+
+enum {
+	LAST_FRAG_IDX,
+	FIRST_FRAG_IDX,
+	MIN_FRAG_NUM,
+	MAX_FRAG_NUM = 4,
+};
+
+struct ipv4_frag {
+	uint16_t ofs;
+	uint16_t len;
+	struct rte_mbuf *mb;
+};
+
+/*
+ * Use <src addr, dst_addr, id> to uniquely identify a fragmented datagram.
+ */
+struct ipv4_frag_key {
+	uint64_t  src_dst;
+	uint32_t  id;
+};
+
+#define	IPV4_FRAG_KEY_INVALIDATE(k)	((k)->src_dst = 0)
+#define	IPV4_FRAG_KEY_EMPTY(k)		((k)->src_dst == 0)
+
+#define	IPV4_FRAG_KEY_CMP(k1, k2)	\
+	(((k1)->src_dst ^ (k2)->src_dst) | ((k1)->id ^ (k2)->id))
+
+
+/*
+ * Fragmented packet to reassemble.
+ * First two entries in the frags[] array are for the last and first fragments.
+ */
+struct ipv4_frag_pkt {
+	TAILQ_ENTRY(ipv4_frag_pkt) lru;   /* LRU list */
+	struct ipv4_frag_key key;
+	uint64_t             start;       /* creation timestamp */
+	uint32_t             total_size;  /* expected reassembled size */
+	uint32_t             frag_size;   /* size of fragments received */
+	uint32_t             last_idx;    /* index of next entry to fill */
+	struct ipv4_frag     frags[MAX_FRAG_NUM];
+} __rte_cache_aligned;
+
+
+struct ipv4_frag_death_row {
+	uint32_t cnt;
+	struct rte_mbuf *row[MAX_PKT_BURST * (MAX_FRAG_NUM + 1)];
+};
+
+#define	IPV4_FRAG_MBUF2DR(dr, mb)	((dr)->row[(dr)->cnt++] = (mb))
+
+/* logging macros. */
+
+#ifdef IPV4_FRAG_DEBUG
+#define	IPV4_FRAG_LOG(lvl, fmt, args...)	RTE_LOG(lvl, USER1, fmt, ##args)
+#else
+#define	IPV4_FRAG_LOG(lvl, fmt, args...)	do {} while (0)
+#endif /* IPV4_FRAG_DEBUG */
+
+
+static inline void
+ipv4_frag_reset(struct ipv4_frag_pkt *fp, uint64_t tms)
+{
+	static const struct ipv4_frag zero_frag = {
+		.ofs = 0,
+		.len = 0,
+		.mb = NULL,
+	};
+
+	fp->start = tms;
+	fp->total_size = UINT32_MAX;
+	fp->frag_size = 0;
+	fp->last_idx = MIN_FRAG_NUM;
+	fp->frags[LAST_FRAG_IDX] = zero_frag;
+	fp->frags[FIRST_FRAG_IDX] = zero_frag;
+}
+
+static inline void
+ipv4_frag_free(struct ipv4_frag_pkt *fp, struct ipv4_frag_death_row *dr)
+{
+	uint32_t i, k;
+
+	k = dr->cnt;
+	for (i = 0; i != fp->last_idx; i++) {
+		if (fp->frags[i].mb != NULL) {
+			dr->row[k++] = fp->frags[i].mb;
+			fp->frags[i].mb = NULL;
+		}
+	}
+
+	fp->last_idx = 0;
+	dr->cnt = k;
+}
+
+static inline void
+ipv4_frag_free_death_row(struct ipv4_frag_death_row *dr, uint32_t prefetch)
+{
+	uint32_t i, k, n;
+
+	k = RTE_MIN(prefetch, dr->cnt);
+	n = dr->cnt;
+
+	for (i = 0; i != k; i++)
+		rte_prefetch0(dr->row[i]);
+
+	for (i = 0; i != n - k; i++) {
+		rte_prefetch0(dr->row[i + k]);
+		rte_pktmbuf_free(dr->row[i]);
+	}
+
+	for (; i != n; i++)
+		rte_pktmbuf_free(dr->row[i]);
+
+	dr->cnt = 0;
+}
+
+/*
+ * Helper function.
+ * Takes two mbufs that represent two fragments of the same packet and
+ * chains them into one mbuf.
+ */
+static inline void
+ipv4_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)
+{
+	struct rte_mbuf *ms;
+
+	/* adjust start of the last fragment data. */
+	rte_pktmbuf_adj(mp, (uint16_t)(mp->pkt.vlan_macip.f.l2_len +
+		mp->pkt.vlan_macip.f.l3_len));
+
+	/* chain two fragments. */
+	ms = rte_pktmbuf_lastseg(mn);
+	ms->pkt.next = mp;
+
+	/* accumulate number of segments and total length. */
+	mn->pkt.nb_segs = (uint8_t)(mn->pkt.nb_segs + mp->pkt.nb_segs);
+	mn->pkt.pkt_len += mp->pkt.pkt_len;
+
+	/* reset pkt_len and nb_segs for chained fragment. */
+	mp->pkt.pkt_len = mp->pkt.data_len;
+	mp->pkt.nb_segs = 1;
+}
+
+/*
+ * Reassemble fragments into one packet.
+ */
+static inline struct rte_mbuf *
+ipv4_frag_reassemble(const struct ipv4_frag_pkt *fp)
+{
+	struct ipv4_hdr *ip_hdr;
+	struct rte_mbuf *m, *prev;
+	uint32_t i, n, ofs, first_len;
+
+	first_len = fp->frags[FIRST_FRAG_IDX].len;
+	n = fp->last_idx - 1;
+
+	/* start from the last fragment. */
+	m = fp->frags[LAST_FRAG_IDX].mb;
+	ofs = fp->frags[LAST_FRAG_IDX].ofs;
+
+	while (ofs != first_len) {
+
+		prev = m;
+
+		for (i = n; i != FIRST_FRAG_IDX && ofs != first_len; i--) {
+
+			/* previous fragment found. */
+			if (fp->frags[i].ofs + fp->frags[i].len == ofs) {
+
+				ipv4_frag_chain(fp->frags[i].mb, m);
+
+				/* update our last fragment and offset. */
+				m = fp->frags[i].mb;
+				ofs = fp->frags[i].ofs;
+			}
+		}
+
+		/* error - hole in the packet. */
+		if (m == prev)
+			return NULL;
+	}
+
+	/* chain with the first fragment. */
+	ipv4_frag_chain(fp->frags[FIRST_FRAG_IDX].mb, m);
+	m = fp->frags[FIRST_FRAG_IDX].mb;
+
+	/* update mbuf fields for reassembled packet. */
+	m->ol_flags |= PKT_TX_IP_CKSUM;
+
+	/* update the IPv4 header for the reassembled packet */
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, uint8_t *) +
+		m->pkt.vlan_macip.f.l2_len);
+
+	ip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +
+		m->pkt.vlan_macip.f.l3_len));
+	ip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &
+		rte_cpu_to_be_16(IPV4_HDR_DF_FLAG));
+	ip_hdr->hdr_checksum = 0;
+
+	return m;
+}
+
+static inline struct rte_mbuf *
+ipv4_frag_process(struct ipv4_frag_pkt *fp, struct ipv4_frag_death_row *dr,
+	struct rte_mbuf *mb, uint16_t ofs, uint16_t len, uint16_t more_frags)
+{
+	uint32_t idx;
+
+	fp->frag_size += len;
+
+	/* this is the first fragment. */
+	if (ofs == 0) {
+		idx = (fp->frags[FIRST_FRAG_IDX].mb == NULL) ?
+			FIRST_FRAG_IDX : UINT32_MAX;
+
+	/* this is the last fragment. */
+	} else if (more_frags == 0) {
+		fp->total_size = ofs + len;
+		idx = (fp->frags[LAST_FRAG_IDX].mb == NULL) ?
+			LAST_FRAG_IDX : UINT32_MAX;
+
+	/* this is an intermediate fragment. */
+	} else {
+		idx = fp->last_idx;
+		if (idx < sizeof(fp->frags) / sizeof(fp->frags[0]))
+			fp->last_idx++;
+	}
+
+	/*
+	 * erroneous packet: either the max allowed number of fragments was
+	 * exceeded, or a duplicate first/last fragment was encountered.
+	 */
+	if (idx >= sizeof(fp->frags) / sizeof(fp->frags[0])) {
+
+		/* report an error. */
+		IPV4_FRAG_LOG(DEBUG, "%s:%d invalid fragmented packet:\n"
+			"ipv4_frag_pkt: %p, key: <%" PRIx64 ", %#x>, "
+			"total_size: %u, frag_size: %u, last_idx: %u\n"
+			"first fragment: ofs: %u, len: %u\n"
+			"last fragment: ofs: %u, len: %u\n\n",
+			__func__, __LINE__,
+			fp, fp->key.src_dst, fp->key.id,
+			fp->total_size, fp->frag_size, fp->last_idx,
+			fp->frags[FIRST_FRAG_IDX].ofs,
+			fp->frags[FIRST_FRAG_IDX].len,
+			fp->frags[LAST_FRAG_IDX].ofs,
+			fp->frags[LAST_FRAG_IDX].len);
+
+		/* free all fragments, invalidate the entry. */
+		ipv4_frag_free(fp, dr);
+		IPV4_FRAG_KEY_INVALIDATE(&fp->key);
+		IPV4_FRAG_MBUF2DR(dr, mb);
+
+		return NULL;
+	}
+
+	fp->frags[idx].ofs = ofs;
+	fp->frags[idx].len = len;
+	fp->frags[idx].mb = mb;
+
+	mb = NULL;
+
+	/* not all fragments are collected yet. */
+	if (likely(fp->frag_size < fp->total_size)) {
+		return mb;
+
+	/* if we collected all fragments, then try to reassemble. */
+	} else if (fp->frag_size == fp->total_size &&
+			fp->frags[FIRST_FRAG_IDX].mb != NULL) {
+		mb = ipv4_frag_reassemble(fp);
+	}
+
+	/* erroneous set of fragments. */
+	if (mb == NULL) {
+
+		/* report an error. */
+		IPV4_FRAG_LOG(DEBUG, "%s:%d invalid fragmented packet:\n"
+			"ipv4_frag_pkt: %p, key: <%" PRIx64 ", %#x>, "
+			"total_size: %u, frag_size: %u, last_idx: %u\n"
+			"first fragment: ofs: %u, len: %u\n"
+			"last fragment: ofs: %u, len: %u\n\n",
+			__func__, __LINE__,
+			fp, fp->key.src_dst, fp->key.id,
+			fp->total_size, fp->frag_size, fp->last_idx,
+			fp->frags[FIRST_FRAG_IDX].ofs,
+			fp->frags[FIRST_FRAG_IDX].len,
+			fp->frags[LAST_FRAG_IDX].ofs,
+			fp->frags[LAST_FRAG_IDX].len);
+
+		/* free associated resources. */
+		ipv4_frag_free(fp, dr);
+	}
+
+	/* we are done with that entry, invalidate it. */
+	IPV4_FRAG_KEY_INVALIDATE(&fp->key);
+	return mb;
+}
+
+#include "ipv4_frag_tbl.h"
+
+/*
+ * Process a new mbuf containing a fragment of an IPv4 packet.
+ * The incoming mbuf should have its l2_len/l3_len fields set up correctly.
+ * @param tbl
+ *   Table where to lookup/add the fragmented packet.
+ * @param mb
+ *   Incoming mbuf with IPV4 fragment.
+ * @param tms
+ *   Fragment arrival timestamp.
+ * @param ip_hdr
+ *   Pointer to the IPV4 header inside the fragment.
+ * @param ip_ofs
+ *   Fragment's offset (as extracted from the header).
+ * @param ip_flag
+ *   Fragment's MF flag.
+ * @return
+ *   Pointer to the mbuf of the reassembled packet, or NULL if:
+ *   - an error occurred;
+ *   - not all fragments of the packet have been collected yet.
+ */
+static inline struct rte_mbuf *
+ipv4_frag_mbuf(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
+	struct rte_mbuf *mb, uint64_t tms, struct ipv4_hdr *ip_hdr,
+	uint16_t ip_ofs, uint16_t ip_flag)
+{
+	struct ipv4_frag_pkt *fp;
+	struct ipv4_frag_key key;
+	const uint64_t *psd;
+	uint16_t ip_len;
+
+	psd = (uint64_t *)&ip_hdr->src_addr;
+	key.src_dst = psd[0];
+	key.id = ip_hdr->packet_id;
+
+	ip_ofs *= IPV4_HDR_OFFSET_UNITS;
+	ip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -
+		mb->pkt.vlan_macip.f.l3_len);
+
+	IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
+		"mbuf: %p, tms: %" PRIu64
+		", key: <%" PRIx64 ", %#x>, ofs: %u, len: %u, flags: %#x\n"
+		"tbl: %p, max_cycles: %" PRIu64 ", entry_mask: %#x, "
+		"max_entries: %u, use_entries: %u\n\n",
+		__func__, __LINE__,
+		mb, tms, key.src_dst, key.id, ip_ofs, ip_len, ip_flag,
+		tbl, tbl->max_cycles, tbl->entry_mask, tbl->max_entries,
+		tbl->use_entries);
+
+	/* try to find/add an entry in the fragment table. */
+	fp = ipv4_frag_find(tbl, dr, &key, tms);
+	if (fp == NULL) {
+		IPV4_FRAG_MBUF2DR(dr, mb);
+		return NULL;
+	}
+
+	IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
+		"tbl: %p, max_entries: %u, use_entries: %u\n"
+		"ipv4_frag_pkt: %p, key: <%" PRIx64 ", %#x>, start: %" PRIu64
+		", total_size: %u, frag_size: %u, last_idx: %u\n\n",
+		__func__, __LINE__,
+		tbl, tbl->max_entries, tbl->use_entries,
+		fp, fp->key.src_dst, fp->key.id, fp->start,
+		fp->total_size, fp->frag_size, fp->last_idx);
+
+	/* process the fragmented packet. */
+	mb = ipv4_frag_process(fp, dr, mb, ip_ofs, ip_len, ip_flag);
+	ipv4_frag_inuse(tbl, fp);
+
+	IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
+		"mbuf: %p\n"
+		"tbl: %p, max_entries: %u, use_entries: %u\n"
+		"ipv4_frag_pkt: %p, key: <%" PRIx64 ", %#x>, start: %" PRIu64
+		", total_size: %u, frag_size: %u, last_idx: %u\n\n",
+		__func__, __LINE__, mb,
+		tbl, tbl->max_entries, tbl->use_entries,
+		fp, fp->key.src_dst, fp->key.id, fp->start,
+		fp->total_size, fp->frag_size, fp->last_idx);
+
+	return mb;
+}
+
+#endif /* _IPV4_RSMBL_H_ */
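
A sketch of the per-fragment flow around ipv4_frag_mbuf(), assuming the mbuf's l2_len/l3_len fields are already set and that tbl, death_row and a forward() helper exist in the application:

	struct ipv4_hdr *hdr = (struct ipv4_hdr *)
		(rte_pktmbuf_mtod(mb, char *) + mb->pkt.vlan_macip.f.l2_len);
	uint16_t field = rte_be_to_cpu_16(hdr->fragment_offset);
	uint16_t ofs = (uint16_t)(field & IPV4_HDR_OFFSET_MASK);
	uint16_t mf = (uint16_t)(field & IPV4_HDR_MF_FLAG);

	if (ofs != 0 || mf != 0) {
		/* NULL means either an error or an incomplete datagram. */
		struct rte_mbuf *mo = ipv4_frag_mbuf(tbl, &death_row, mb,
			rte_rdtsc(), hdr, ofs, mf);
		if (mo != NULL)
			forward(mo);  /* datagram fully reassembled */
		ipv4_frag_free_death_row(&death_row, 3);
	}
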
diff --git a/lib/librte_port/rte_port_ras.c b/lib/librte_port/rte_port_ras.c
new file mode 100644
index 0000000..60f33b5
--- /dev/null
+++ b/lib/librte_port/rte_port_ras.c
@@ -0,0 +1,256 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+
+#include "rte_port_ras.h"
+#include "ipv4_rsmbl.h"
+
+#ifndef IPV4_RAS_N_BUCKETS
+#define IPV4_RAS_N_BUCKETS                                 4094
+#endif
+
+#ifndef IPV4_RAS_N_ENTRIES_PER_BUCKET
+#define IPV4_RAS_N_ENTRIES_PER_BUCKET                      8
+#endif
+
+#ifndef IPV4_RAS_N_ENTRIES
+#define IPV4_RAS_N_ENTRIES (IPV4_RAS_N_BUCKETS * IPV4_RAS_N_ENTRIES_PER_BUCKET)
+#endif
+
+struct rte_port_ring_writer_ipv4_ras {
+	struct rte_mbuf *tx_buf[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_ring *ring;
+	uint32_t tx_burst_sz;
+	uint32_t tx_buf_count;
+	struct ipv4_frag_tbl *frag_tbl;
+	struct ipv4_frag_death_row death_row;
+};
+
+static void *
+rte_port_ring_writer_ipv4_ras_create(void *params, int socket_id)
+{
+	struct rte_port_ring_writer_ipv4_ras_params *conf =
+			(struct rte_port_ring_writer_ipv4_ras_params *) params;
+	struct rte_port_ring_writer_ipv4_ras *port;
+	uint64_t frag_cycles;
+
+	/* Check input parameters */
+	if (conf == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__);
+		return NULL;
+	}
+	if (conf->ring == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__);
+		return NULL;
+	}
+	if ((conf->tx_burst_sz == 0) ||
+	    (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) {
+		RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n",
+			__func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__);
+		return NULL;
+	}
+
+	/* Create fragmentation table */
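+	/* TTL for table entries: rte_get_tsc_hz() rounded up to a multiple
+	 * of MS_PER_S, then multiplied by 100, i.e. roughly 100 seconds
+	 * expressed in TSC cycles. */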
+	frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S * MS_PER_S;
+	frag_cycles *= 100;
+
+	port->frag_tbl = ipv4_frag_tbl_create(
+		IPV4_RAS_N_BUCKETS,
+		IPV4_RAS_N_ENTRIES_PER_BUCKET,
+		IPV4_RAS_N_ENTRIES,
+		frag_cycles,
+		socket_id);
+
+	if (port->frag_tbl == NULL) {
+		RTE_LOG(ERR, PORT, "%s: ipv4_frag_tbl_create failed\n",
+			__func__);
+		rte_free(port);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->ring = conf->ring;
+	port->tx_burst_sz = conf->tx_burst_sz;
+	port->tx_buf_count = 0;
+
+	return port;
+}
+
+static inline void
+send_burst(struct rte_port_ring_writer_ipv4_ras *p)
+{
+	uint32_t nb_tx;
+
+	nb_tx = rte_ring_sp_enqueue_burst(p->ring, (void **)p->tx_buf,
+			p->tx_buf_count);
+
+	for ( ; nb_tx < p->tx_buf_count; nb_tx++)
+		rte_pktmbuf_free(p->tx_buf[nb_tx]);
+
+	p->tx_buf_count = 0;
+}
+
+static inline void
+process_one(struct rte_port_ring_writer_ipv4_ras *p, struct rte_mbuf *pkt)
+{
+	/* Assume the Ethernet header has already been stripped */
+	struct ipv4_hdr *pkt_hdr = (struct ipv4_hdr *)
+			(rte_pktmbuf_mtod(pkt, unsigned char *));
+
+	/* Get "Do not fragment" flag and fragment offset */
+	uint16_t frag_field = rte_be_to_cpu_16(pkt_hdr->fragment_offset);
+	uint16_t frag_offset = (uint16_t)(frag_field & IPV4_HDR_OFFSET_MASK);
+	uint16_t frag_flag = (uint16_t)(frag_field & IPV4_HDR_MF_FLAG);
+
+	/* Pass non-fragmented packets through; try to reassemble the rest */
+	if ((frag_flag == 0) && (frag_offset == 0))
+		p->tx_buf[p->tx_buf_count++] = pkt;
+	else {
+		struct rte_mbuf *mo;
+		struct ipv4_frag_tbl *tbl = p->frag_tbl;
+		struct ipv4_frag_death_row *dr = &p->death_row;
+
+		/* Process this fragment */
+		mo = ipv4_frag_mbuf(tbl, dr, pkt, rte_rdtsc(), pkt_hdr,
+			frag_offset, frag_flag);
+		if (mo != NULL)
+			p->tx_buf[p->tx_buf_count++] = mo;
+
+		ipv4_frag_free_death_row(&p->death_row, 3);
+	}
+}
+
+static int
+rte_port_ring_writer_ipv4_ras_tx(void *port, struct rte_mbuf *pkt)
+{
+	struct rte_port_ring_writer_ipv4_ras *p =
+			(struct rte_port_ring_writer_ipv4_ras *) port;
+
+	process_one(p, pkt);
+	if (p->tx_buf_count >= p->tx_burst_sz)
+		send_burst(p);
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_ipv4_ras_tx_bulk(void *port,
+		struct rte_mbuf **pkts,
+		uint64_t pkts_mask)
+{
+	struct rte_port_ring_writer_ipv4_ras *p =
+			(struct rte_port_ring_writer_ipv4_ras *) port;
+
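+	/* Fast path: pkts_mask is a contiguous run of bits from index 0 */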
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = pkts[i];
+
+			process_one(p, pkt);
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			process_one(p, pkt);
+			if (p->tx_buf_count >= p->tx_burst_sz)
+				send_burst(p);
+
+			pkts_mask &= ~pkt_mask;
+		}
+	}
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_ipv4_ras_flush(void *port)
+{
+	struct rte_port_ring_writer_ipv4_ras *p =
+			(struct rte_port_ring_writer_ipv4_ras *) port;
+
+	if (p->tx_buf_count > 0)
+		send_burst(p);
+
+	return 0;
+}
+
+static int
+rte_port_ring_writer_ipv4_ras_free(void *port)
+{
+	struct rte_port_ring_writer_ipv4_ras *p =
+			(struct rte_port_ring_writer_ipv4_ras *) port;
+
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__);
+		return -1;
+	}
+
+	rte_port_ring_writer_ipv4_ras_flush(port);
+	ipv4_frag_tbl_destroy(p->frag_tbl);
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops = {
+	.f_create = rte_port_ring_writer_ipv4_ras_create,
+	.f_free = rte_port_ring_writer_ipv4_ras_free,
+	.f_tx = rte_port_ring_writer_ipv4_ras_tx,
+	.f_tx_bulk = rte_port_ring_writer_ipv4_ras_tx_bulk,
+	.f_flush = rte_port_ring_writer_ipv4_ras_flush,
+};
diff --git a/lib/librte_port/rte_port_ras.h b/lib/librte_port/rte_port_ras.h
new file mode 100644
index 0000000..c6ed688
--- /dev/null
+++ b/lib/librte_port/rte_port_ras.h
@@ -0,0 +1,83 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_RAS_H__
+#define __INCLUDE_RTE_PORT_RAS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port for IPv4 Reassembly
+ *
+ * This port is built on top of pre-initialized single producer rte_ring. In
+ * order to minimize the amount of packets stored in the ring at any given
+ * time, the IP reassembly functionality is executed on ring write operation,
+ * hence this port is implemented as an output port. A regular ring_reader port
+ * can be created to read from the same ring.
+ *
+ * The packets written to the ring are either complete IP datagrams or IP
+ * fragments. The packets read from the ring are all complete IP datagrams,
+ * either jumbo frames (i.e. IP packets with length greater than the MTU) or
+ * not. The complete IP datagrams written to the ring are not changed. The IP
+ * fragments written to the ring are first reassembled into complete IP
+ * datagrams, or dropped on error or IP reassembly time-out.
+ *
+ */
+
+#include <stdint.h>
+
+#include <rte_ring.h>
+
+#include "rte_port.h"
+
+/** ring_writer_ipv4_ras port parameters */
+struct rte_port_ring_writer_ipv4_ras_params {
+	/** Underlying single producer ring that has to be pre-initialized. */
+	struct rte_ring *ring;
+
+	/** Recommended burst size to ring. The actual burst size can be bigger
+	or smaller than this value. */
+	uint32_t tx_burst_sz;
+};
+
+/** ring_writer_ipv4_ras port operations */
+extern struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
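
A minimal sketch of how the reassembly port might be set up (the app_ring ring and pkt mbuf are hypothetical):

	struct rte_port_ring_writer_ipv4_ras_params params = {
		.ring = app_ring,        /* pre-initialized SP ring */
		.tx_burst_sz = 32,
	};
	void *port;

	port = rte_port_ring_writer_ipv4_ras_ops.f_create(&params, 0);
	if (port == NULL)
		rte_panic("cannot create IPv4 reassembly port\n");

	/* Fragments are absorbed into the reassembly table; only complete
	 * datagrams reach the underlying ring. */
	rte_port_ring_writer_ipv4_ras_ops.f_tx(port, pkt);
	rte_port_ring_writer_ipv4_ras_ops.f_flush(port);
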
-- 
1.7.7.6

* [dpdk-dev] [v2 08/23] Packet Framework librte_port: hierarchical scheduler port
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (6 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 07/23] Packet Framework librte_port: IPv4 reassembly Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 09/23] Packet Framework librte_port: Source/Sink ports Cristian Dumitrescu
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The QoS hierarchical scheduler presented as a Packet Framework port.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/rte_port_sched.c |  239 ++++++++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_sched.h |   82 +++++++++++++
 2 files changed, 321 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_port_sched.c
 create mode 100644 lib/librte_port/rte_port_sched.h

diff --git a/lib/librte_port/rte_port_sched.c b/lib/librte_port/rte_port_sched.c
new file mode 100644
index 0000000..0e71494
--- /dev/null
+++ b/lib/librte_port/rte_port_sched.c
@@ -0,0 +1,239 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+
+#include "rte_port_sched.h"
+
+/*
+ * Reader
+ */
+struct rte_port_sched_reader {
+	struct rte_sched_port *sched;
+};
+
+static void *
+rte_port_sched_reader_create(void *params, int socket_id)
+{
+	struct rte_port_sched_reader_params *conf =
+			(struct rte_port_sched_reader_params *) params;
+	struct rte_port_sched_reader *port;
+
+	/* Check input parameters */
+	if ((conf == NULL) ||
+	    (conf->sched == NULL)) {
+		RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->sched = conf->sched;
+
+	return port;
+}
+
+static int
+rte_port_sched_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
+{
+	struct rte_port_sched_reader *p = (struct rte_port_sched_reader *) port;
+
+	return rte_sched_port_dequeue(p->sched, pkts, n_pkts);
+}
+
+static int
+rte_port_sched_reader_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Writer
+ */
+struct rte_port_sched_writer {
+	struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_sched_port *sched;
+	uint32_t tx_burst_sz;
+	uint32_t tx_buf_count;
+	uint64_t bsz_mask;
+};
+
+static void *
+rte_port_sched_writer_create(void *params, int socket_id)
+{
+	struct rte_port_sched_writer_params *conf =
+			(struct rte_port_sched_writer_params *) params;
+	struct rte_port_sched_writer *port;
+
+	/* Check input parameters */
+	if ((conf == NULL) ||
+	    (conf->sched == NULL) ||
+	    (conf->tx_burst_sz == 0) ||
+	    (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
+	    (!rte_is_power_of_2(conf->tx_burst_sz))) {
+		RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->sched = conf->sched;
+	port->tx_burst_sz = conf->tx_burst_sz;
+	port->tx_buf_count = 0;
+	port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
+
+	return port;
+}
+
+static int
+rte_port_sched_writer_tx(void *port, struct rte_mbuf *pkt)
+{
+	struct rte_port_sched_writer *p = (struct rte_port_sched_writer *) port;
+
+	p->tx_buf[p->tx_buf_count++] = pkt;
+	if (p->tx_buf_count >= p->tx_burst_sz) {
+		rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		p->tx_buf_count = 0;
+	}
+
+	return 0;
+}
+
+static int
+rte_port_sched_writer_tx_bulk(void *port,
+		struct rte_mbuf **pkts,
+		uint64_t pkts_mask)
+{
+	struct rte_port_sched_writer *p = (struct rte_port_sched_writer *) port;
+	uint64_t bsz_mask = p->bsz_mask;
+	uint32_t tx_buf_count = p->tx_buf_count;
+	uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
+			((pkts_mask & bsz_mask) ^ bsz_mask);
+
+	if (expr == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+
+		if (tx_buf_count) {
+			rte_sched_port_enqueue(p->sched, p->tx_buf,
+				tx_buf_count);
+			p->tx_buf_count = 0;
+		}
+
+		rte_sched_port_enqueue(p->sched, pkts, n_pkts);
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			p->tx_buf[tx_buf_count++] = pkt;
+			pkts_mask &= ~pkt_mask;
+		}
+		p->tx_buf_count = tx_buf_count;
+
+		if (tx_buf_count >= p->tx_burst_sz) {
+			rte_sched_port_enqueue(p->sched, p->tx_buf,
+				tx_buf_count);
+			p->tx_buf_count = 0;
+		}
+	}
+
+	return 0;
+}
+
+static int
+rte_port_sched_writer_flush(void *port)
+{
+	struct rte_port_sched_writer *p = (struct rte_port_sched_writer *) port;
+
+	if (p->tx_buf_count) {
+		rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		p->tx_buf_count = 0;
+	}
+
+	return 0;
+}
+
+static int
+rte_port_sched_writer_free(void *port)
+{
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_port_sched_writer_flush(port);
+	rte_free(port);
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_in_ops rte_port_sched_reader_ops = {
+	.f_create = rte_port_sched_reader_create,
+	.f_free = rte_port_sched_reader_free,
+	.f_rx = rte_port_sched_reader_rx,
+};
+
+struct rte_port_out_ops rte_port_sched_writer_ops = {
+	.f_create = rte_port_sched_writer_create,
+	.f_free = rte_port_sched_writer_free,
+	.f_tx = rte_port_sched_writer_tx,
+	.f_tx_bulk = rte_port_sched_writer_tx_bulk,
+	.f_flush = rte_port_sched_writer_flush,
+};
diff --git a/lib/librte_port/rte_port_sched.h b/lib/librte_port/rte_port_sched.h
new file mode 100644
index 0000000..555415a
--- /dev/null
+++ b/lib/librte_port/rte_port_sched.h
@@ -0,0 +1,82 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_SCHED_H__
+#define __INCLUDE_RTE_PORT_SCHED_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port Hierarchical Scheduler
+ *
+ * sched_reader: input port built on top of pre-initialized rte_sched_port
+ * sched_writer: output port built on top of pre-initialized rte_sched_port
+ *
+ ***/
+
+#include <stdint.h>
+
+#include <rte_sched.h>
+
+#include "rte_port.h"
+
+/** sched_reader port parameters */
+struct rte_port_sched_reader_params {
+	/** Underlying pre-initialized rte_sched_port */
+	struct rte_sched_port *sched;
+};
+
+/** sched_reader port operations */
+extern struct rte_port_in_ops rte_port_sched_reader_ops;
+
+/** sched_writer port parameters */
+struct rte_port_sched_writer_params {
+	/** Underlying pre-initialized rte_sched_port */
+	struct rte_sched_port *sched;
+
+	/** Recommended burst size. The actual burst size can be bigger or
+	smaller than this value. */
+	uint32_t tx_burst_sz;
+};
+
+/** sched_writer port operations */
+extern struct rte_port_out_ops rte_port_sched_writer_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 09/23] Packet Framework librte_port: Source/Sink ports
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (7 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 08/23] Packet Framework librte_port: hierarchical scheduler port Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 10/23] Packet Framework librte_port: Build infrastructure Cristian Dumitrescu
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The source port is a packet generator, similar to the Linux /dev/zero device.

The sink port is a packet terminator that drops all packets written to it, similar to the Linux /dev/null device.
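
As an illustration (not part of this patch), a minimal sketch of creating and polling a source port, assuming a hypothetical pre-created pktmbuf pool named "mp":

	#include <rte_lcore.h>
	#include <rte_port_source_sink.h>

	struct rte_port_source_params sp = {
		.mempool = mp, /* buffers for the generated packets */
	};
	void *src = rte_port_source_ops.f_create(&sp, rte_socket_id());

	struct rte_mbuf *burst[32];
	int n = rte_port_source_ops.f_rx(src, burst, 32); /* up to 32 packets */

The sink side needs no parameters: rte_port_sink_ops.f_create(NULL, socket_id) returns a dummy handle, and every f_tx()/f_tx_bulk() call frees the packets passed to it.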

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_port/rte_port_source_sink.c |  158 ++++++++++++++++++++++++++++++++
 lib/librte_port/rte_port_source_sink.h |   70 ++++++++++++++
 2 files changed, 228 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/rte_port_source_sink.c
 create mode 100644 lib/librte_port/rte_port_source_sink.h

diff --git a/lib/librte_port/rte_port_source_sink.c b/lib/librte_port/rte_port_source_sink.c
new file mode 100644
index 0000000..23e3878
--- /dev/null
+++ b/lib/librte_port/rte_port_source_sink.c
@@ -0,0 +1,158 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+#include "rte_port_source_sink.h"
+
+/*
+ * Port SOURCE
+ */
+struct rte_port_source {
+	struct rte_mempool *mempool;
+};
+
+static void *
+rte_port_source_create(void *params, int socket_id)
+{
+	struct rte_port_source_params *p =
+			(struct rte_port_source_params *) params;
+	struct rte_port_source *port;
+
+	/* Check input arguments */
+	if ((p == NULL) || (p->mempool == NULL)) {
+		RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+		return NULL;
+	}
+
+	/* Memory allocation */
+	port = rte_zmalloc_socket("PORT", sizeof(*port),
+			CACHE_LINE_SIZE, socket_id);
+	if (port == NULL) {
+		RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+		return NULL;
+	}
+
+	/* Initialization */
+	port->mempool = (struct rte_mempool *) p->mempool;
+
+	return port;
+}
+
+static int
+rte_port_source_free(void *port)
+{
+	/* Check input parameters */
+	if (port == NULL)
+		return 0;
+
+	rte_free(port);
+
+	return 0;
+}
+
+static int
+rte_port_source_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
+{
+	struct rte_port_source *p = (struct rte_port_source *) port;
+
+	if (rte_mempool_get_bulk(p->mempool, (void **) pkts, n_pkts) != 0)
+		return 0;
+
+	return n_pkts;
+}
+
+/*
+ * Port SINK
+ */
+static void *
+rte_port_sink_create(__rte_unused void *params, __rte_unused int socket_id)
+{
+	return (void *) 1; /* dummy non-NULL handle: the sink port is stateless */
+}
+
+static int
+rte_port_sink_tx(__rte_unused void *port, struct rte_mbuf *pkt)
+{
+	rte_pktmbuf_free(pkt);
+
+	return 0;
+}
+
+static int
+rte_port_sink_tx_bulk(__rte_unused void *port, struct rte_mbuf **pkts,
+	uint64_t pkts_mask)
+{
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = pkts[i];
+
+			rte_pktmbuf_free(pkt);
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+
+			rte_pktmbuf_free(pkt);
+			pkts_mask &= ~pkt_mask;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Summary of port operations
+ */
+struct rte_port_in_ops rte_port_source_ops = {
+	.f_create = rte_port_source_create,
+	.f_free = rte_port_source_free,
+	.f_rx = rte_port_source_rx,
+};
+
+struct rte_port_out_ops rte_port_sink_ops = {
+	.f_create = rte_port_sink_create,
+	.f_free = NULL,
+	.f_tx = rte_port_sink_tx,
+	.f_tx_bulk = rte_port_sink_tx_bulk,
+	.f_flush = NULL,
+};
diff --git a/lib/librte_port/rte_port_source_sink.h b/lib/librte_port/rte_port_source_sink.h
new file mode 100644
index 0000000..0f9be79
--- /dev/null
+++ b/lib/librte_port/rte_port_source_sink.h
@@ -0,0 +1,70 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PORT_SOURCE_SINK_H__
+#define __INCLUDE_RTE_PORT_SOURCE_SINK_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Port Source/Sink
+ *
+ * source: input port that can be used to generate packets
+ * sink: output port that drops all packets written to it
+ *
+ ***/
+
+#include "rte_port.h"
+
+/** source port parameters */
+struct rte_port_source_params {
+	/** Pre-initialized buffer pool */
+	struct rte_mempool *mempool;
+};
+
+/** source port operations */
+extern struct rte_port_in_ops rte_port_source_ops;
+
+/** sink port parameters: NONE */
+
+/** sink port operations */
+extern struct rte_port_out_ops rte_port_sink_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 10/23] Packet Framework librte_port: Build infrastructure
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (8 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 09/23] Packet Framework librte_port: Source/Sink ports Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 11/23] Packet Framework librte_table: Table API Cristian Dumitrescu
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Makefile and build infrastructure for the librte_port library.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 config/common_bsdapp     |    5 +++
 config/common_linuxapp   |    4 ++
 doc/doxy-api-index.md    |    9 ++++++
 doc/doxy-api.conf        |    1 +
 lib/Makefile             |    1 +
 lib/librte_port/Makefile |   72 ++++++++++++++++++++++++++++++++++++++++++++++
 mk/rte.app.mk            |    4 ++
 7 files changed, 96 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_port/Makefile
 mode change 100644 => 100755 mk/rte.app.mk

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 2cc7b80..e1cc356 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -300,3 +300,8 @@ CONFIG_RTE_APP_TEST=y
 CONFIG_RTE_TEST_PMD=y
 CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=n
 CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=n
+
+#
+# Compile librte_port
+#
+CONFIG_RTE_LIBRTE_PORT=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 62619c6..ef0f65e 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -337,3 +337,7 @@ CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=n
 #
 CONFIG_RTE_NIC_BYPASS=n
 
+#
+# Compile librte_port
+#
+CONFIG_RTE_LIBRTE_PORT=y
diff --git a/doc/doxy-api-index.md b/doc/doxy-api-index.md
index 2825c08..3e74ea6 100644
--- a/doc/doxy-api-index.md
+++ b/doc/doxy-api-index.md
@@ -85,6 +85,15 @@ There are many libraries, so their headers may be grouped by topics:
   [scheduler]          (@ref rte_sched.h),
   [RED congestion]     (@ref rte_red.h)
 
+- **Packet Framework**:
+  [port]                    (@ref rte_port.h),
+  [port ethdev]             (@ref rte_port_ethdev.h),
+  [port ring]               (@ref rte_port_ring.h),
+  [port IPv4 fragmentation] (@ref rte_port_frag.h),
+  [port IPv4 reassembly]    (@ref rte_port_ras.h),
+  [port scheduler]          (@ref rte_port_sched.h),
+  [port source/sink]        (@ref rte_port_source_sink.h)
+
 - **hashes**:
   [hash]               (@ref rte_hash.h),
   [jhash]              (@ref rte_jhash.h),
diff --git a/doc/doxy-api.conf b/doc/doxy-api.conf
index 642f77a..4f280bf 100644
--- a/doc/doxy-api.conf
+++ b/doc/doxy-api.conf
@@ -41,6 +41,7 @@ INPUT                   = doc/doxy-api-index.md \
                           lib/librte_mempool \
                           lib/librte_meter \
                           lib/librte_net \
+                          lib/librte_port \
                           lib/librte_power \
                           lib/librte_ring \
                           lib/librte_sched \
diff --git a/lib/Makefile b/lib/Makefile
index b92b392..654968e 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -55,6 +55,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter
 DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
 DIRS-$(CONFIG_RTE_LIBRTE_ACL) += librte_acl
 DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
+DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
 
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_port/Makefile b/lib/librte_port/Makefile
new file mode 100644
index 0000000..b67df48
--- /dev/null
+++ b/lib/librte_port/Makefile
@@ -0,0 +1,72 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_port.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_ring.c
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_ras.c
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_source_sink.c
+ifeq ($(CONFIG_RTE_MBUF_SCATTER_GATHER),y)
+SRCS-$(CONFIG_RTE_LIBRTE_PORT) += rte_port_frag.c
+endif
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_ring.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_ethdev.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_sched.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_ras.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_source_sink.h
+ifeq ($(CONFIG_RTE_MBUF_SCATTER_GATHER),y)
+SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_frag.h
+endif
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) := lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_ether
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
old mode 100644
new mode 100755
index a836577..e67326b
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -73,6 +73,10 @@ LDLIBS += -lrte_ivshmem
 endif
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_PORT),y)
+LDLIBS += -lrte_port
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_TIMER),y)
 LDLIBS += -lrte_timer
 endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 11/23] Packet Framework librte_table: Table API
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (9 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 10/23] Packet Framework librte_port: Build infrastructure Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 12/23] Packet Framework librte_table: LPM IPv4 table Cristian Dumitrescu
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This file defines the operations to be implemented by any Packet Framework table.
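
To make the contract concrete, here is a minimal sketch (hypothetical names, not part of this patch) of a no-op table type that plugs into this interface and reports a lookup miss for every packet:

	#include "rte_table.h"

	static void *
	null_table_create(void *params, int socket_id, uint32_t entry_size)
	{
		(void) params; (void) socket_id; (void) entry_size;
		return (void *) 1; /* dummy non-NULL handle: no state needed */
	}

	static int
	null_table_free(void *table)
	{
		(void) table;
		return 0;
	}

	static int
	null_table_lookup(void *table, struct rte_mbuf **pkts, uint64_t pkts_mask,
		uint64_t *lookup_hit_mask, void **entries)
	{
		(void) table; (void) pkts; (void) pkts_mask; (void) entries;
		*lookup_hit_mask = 0; /* every valid input packet is a miss */
		return 0;
	}

	struct rte_table_ops null_table_ops = {
		.f_create = null_table_create,
		.f_free = null_table_free,
		.f_add = NULL,    /* no entries can be added */
		.f_delete = NULL,
		.f_lookup = null_table_lookup,
	};

This is essentially what the stub table later in this series does.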

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_eal/common/include/rte_log.h |    1 +
 lib/librte_table/rte_table.h            |  202 +++++++++++++++++++++++++++++++
 2 files changed, 203 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table.h
 mode change 100755 => 100644 mk/rte.app.mk

diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 490dbc9..d119815 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -75,6 +75,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_METER   0x00000800 /**< Log related to QoS meter. */
 #define RTE_LOGTYPE_SCHED   0x00001000 /**< Log related to QoS port scheduler. */
 #define RTE_LOGTYPE_PORT    0x00002000 /**< Log related to port. */
+#define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_table/rte_table.h b/lib/librte_table/rte_table.h
new file mode 100644
index 0000000..d57bc33
--- /dev/null
+++ b/lib/librte_table/rte_table.h
@@ -0,0 +1,202 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_H__
+#define __INCLUDE_RTE_TABLE_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table
+ *
+ * This tool is part of the Intel DPDK Packet Framework tool suite and provides
+ * a standard interface to implement different types of lookup tables for data
+ * plane processing.
+ *
+ * Virtually any search algorithm that can uniquely associate data to a lookup
+ * key can be fitted under this lookup table abstraction. For the flow table
+ * use-case, the lookup key is an n-tuple of packet fields that uniquely
+ * identifies a traffic flow, while data represents actions and action
+ * meta-data associated with the same traffic flow.
+ *
+ ***/
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+#include <rte_port.h>
+
+/**
+ * Lookup table create
+ *
+ * @param params
+ *   Parameters for lookup table creation. The underlying data structure is
+ *   different for each lookup table type.
+ * @param socket_id
+ *   CPU socket ID (e.g. for memory allocation purpose)
+ * @param entry_size
+ *   Data size of each lookup table entry (measured in bytes)
+ * @return
+ *   Handle to lookup table instance
+ */
+typedef void* (*rte_table_op_create)(void *params, int socket_id,
+	uint32_t entry_size);
+
+/**
+ * Lookup table free
+ *
+ * @param table
+ *   Handle to lookup table instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_table_op_free)(void *table);
+
+/**
+ * Lookup table entry add
+ *
+ * @param table
+ *   Handle to lookup table instance
+ * @param key
+ *   Lookup key
+ * @param entry
+ *   Data to be associated with the current key. This parameter has to point to
+ *   a valid memory buffer where the first entry_size bytes (table create
+ *   parameter) are populated with the data.
+ * @param key_found
+ *   After successful invocation, *key_found is set to a value different than 0
+ *   if the current key is already present in the table and to 0 if not. This
+ *   pointer has to be set to a valid memory location before the table entry add
+ *   function is called.
+ * @param entry_ptr
+ *   After successful invocation, *entry_ptr stores the handle to the table
+ *   entry containing the data associated with the current key. This handle can
+ *   be used to perform further read-write accesses to this entry. This handle
+ *   is valid until the key is deleted from the table or the same key is
+ *   re-added to the table, typically to associate it with different data. This
+ *   pointer has to be set to a valid memory location before the function is
+ *   called.
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_table_op_entry_add)(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr);
+
+/**
+ * Lookup table entry delete
+ *
+ * @param table
+ *   Handle to lookup table instance
+ * @param key
+ *   Lookup key
+ * @param key_found
+ *   After successful invocation, *key_found is set to a value different than 0
+ *   if the current key was present in the table before the delete operation
+ *   was performed and to 0 if not. This pointer has to be set to a valid
+ *   memory location before the table entry delete function is called.
+ * @param entry
+ *   After successful invocation, if the key is found in the table (*key_found
+ *   is different than 0 after function call is completed) and entry points to
+ *   a valid buffer (entry is set to a value different than NULL before the
+ *   function is called), then the first entry_size bytes (table create
+ *   parameter) in *entry store a copy of table entry that contained the data
+ *   associated with the current key before the key was deleted.
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_table_op_entry_delete)(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry);
+
+/**
+ * Lookup table lookup
+ *
+ * @param table
+ *   Handle to lookup table instance
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are valid. When
+ *   pkts_mask bit n is set, then element n of pkts array is pointing to a
+ *   valid packet. Otherwise, element n of pkts array does not point to a valid
+ *   packet, therefore it will not be accessed.
+ * @param lookup_hit_mask
+ *   Once the table lookup operation is completed, this 64-bit bitmask
+ *   specifies which of the valid packets in the input burst resulted in lookup
+ *   hit. For each valid input packet (pkts_mask bit n is set), the following
+ *   are true on lookup hit: lookup_hit_mask bit n is set, element n of entries
+ *   array is valid and it points to the lookup table entry that was hit. For
+ *   each valid input packet (pkts_mask bit n is set), the following are true
+ *   on lookup miss: lookup_hit_mask bit n is not set and element n of entries
+ *   array is not valid.
+ * @param entries
+ *   Once the table lookup operation is completed, this array provides the
+ *   lookup table entries that were hit, as described above. It is required
+ *   that this array is always pre-allocated by the caller of this function
+ *   with exactly 64 elements. The implementation is allowed to speculatively
+ *   modify the elements of this array, so elements marked as invalid in
+ *   lookup_hit_mask once the table lookup operation is completed might have
+ *   been modified by this function.
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_table_op_lookup)(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries);
+
+/** Lookup table interface defining the lookup table operation */
+struct rte_table_ops {
+	rte_table_op_create f_create;       /**< Create */
+	rte_table_op_free f_free;           /**< Free */
+	rte_table_op_entry_add f_add;       /**< Entry add */
+	rte_table_op_entry_delete f_delete; /**< Entry delete */
+	rte_table_op_lookup f_lookup;       /**< Lookup */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
old mode 100755
new mode 100644
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 12/23] Packet Framework librte_table: LPM IPv4 table
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (10 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 11/23] Packet Framework librte_table: Table API Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 13/23] Packet Framework librte_table: LPM IPv6 table Cristian Dumitrescu
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Routing table for IPv4.
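
A usage sketch (hypothetical names, not part of this patch) showing how the entry_unique_size parameter interacts with the next hop entry layout:

	#include <stddef.h>
	#include <rte_ip.h>
	#include <rte_lcore.h>
	#include "rte_table_lpm.h"

	struct next_hop {
		uint32_t port_id; /* identity of the next hop: the unique part */
		uint32_t n_pkts;  /* run-time counter: outside the unique part */
	};

	struct rte_table_lpm_params p = {
		.n_rules = 1024,
		.entry_unique_size = offsetof(struct next_hop, n_pkts),
		.offset = 0, /* destination IP at start of mbuf meta-data */
	};
	void *t = rte_table_lpm_ops.f_create(&p, rte_socket_id(),
		sizeof(struct next_hop));

	struct rte_table_lpm_key k = { .ip = IPv4(10, 0, 0, 0), .depth = 24 };
	struct next_hop nh = { .port_id = 3, .n_pkts = 0 };
	int key_found;
	void *entry_ptr;
	rte_table_lpm_ops.f_add(t, &k, &nh, &key_found, &entry_ptr);

Two routes with the same port_id share a single NHT entry, which is how 256 next hop slots can serve a much larger number of routes.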

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_table_lpm.c |  347 ++++++++++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_lpm.h |  115 +++++++++++++
 2 files changed, 462 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table_lpm.c
 create mode 100644 lib/librte_table/rte_table_lpm.h

diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
new file mode 100644
index 0000000..a175ff3
--- /dev/null
+++ b/lib/librte_table/rte_table_lpm.c
@@ -0,0 +1,347 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_lpm.h>
+
+#include "rte_table_lpm.h"
+
+#define RTE_TABLE_LPM_MAX_NEXT_HOPS                        256
+
+struct rte_table_lpm {
+	/* Input parameters */
+	uint32_t entry_size;
+	uint32_t entry_unique_size;
+	uint32_t n_rules;
+	uint32_t offset;
+
+	/* Handle to low-level LPM table */
+	struct rte_lpm *lpm;
+
+	/* Next Hop Table (NHT) */
+	uint32_t nht_users[RTE_TABLE_LPM_MAX_NEXT_HOPS];
+	uint8_t nht[0] __rte_cache_aligned;
+};
+
+static void *
+rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_lpm_params *p = (struct rte_table_lpm_params *) params;
+	struct rte_table_lpm *lpm;
+	uint32_t total_size, nht_size;
+
+	/* Check input parameters */
+	if (p == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__);
+		return NULL;
+	}
+	if (p->n_rules == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__);
+		return NULL;
+	}
+	if (p->entry_unique_size == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+			__func__);
+		return NULL;
+	}
+	if (p->entry_unique_size > entry_size) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+			__func__);
+		return NULL;
+	}
+	if ((p->offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid offset\n", __func__);
+		return NULL;
+	}
+
+	entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
+
+	/* Memory allocation */
+	nht_size = RTE_TABLE_LPM_MAX_NEXT_HOPS * entry_size;
+	total_size = sizeof(struct rte_table_lpm) + nht_size;
+	lpm = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+		socket_id);
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for LPM table\n",
+			__func__, total_size);
+		return NULL;
+	}
+
+	/* LPM low-level table creation */
+	lpm->lpm = rte_lpm_create("LPM", socket_id, p->n_rules, 0);
+	if (lpm->lpm == NULL) {
+		rte_free(lpm);
+		RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n");
+		return NULL;
+	}
+
+	/* Memory initialization */
+	lpm->entry_size = entry_size;
+	lpm->entry_unique_size = p->entry_unique_size;
+	lpm->n_rules = p->n_rules;
+	lpm->offset = p->offset;
+
+	return lpm;
+}
+
+static int
+rte_table_lpm_free(void *table)
+{
+	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Free previously allocated resources */
+	rte_lpm_free(lpm->lpm);
+	rte_free(lpm);
+
+	return 0;
+}
+
+static int
+nht_find_free(struct rte_table_lpm *lpm, uint32_t *pos)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_TABLE_LPM_MAX_NEXT_HOPS; i++) {
+		if (lpm->nht_users[i] == 0) {
+			*pos = i;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+nht_find_existing(struct rte_table_lpm *lpm, void *entry, uint32_t *pos)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_TABLE_LPM_MAX_NEXT_HOPS; i++) {
+		uint8_t *nht_entry = &lpm->nht[i * lpm->entry_size];
+
+		if ((lpm->nht_users[i] > 0) && (memcmp(nht_entry, entry,
+			lpm->entry_unique_size) == 0)) {
+			*pos = i;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+rte_table_lpm_entry_add(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
+	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
+	uint32_t nht_pos, nht_pos0_valid;
+	int status;
+	uint8_t nht_pos0;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (ip_prefix == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (entry == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) {
+		RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n",
+			__func__, ip_prefix->depth);
+		return -EINVAL;
+	}
+
+	/* Check if rule is already present in the table */
+	status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip,
+		ip_prefix->depth, &nht_pos0);
+	nht_pos0_valid = status > 0;
+
+	/* Find existing or free NHT entry */
+	if (nht_find_existing(lpm, entry, &nht_pos) == 0) {
+		uint8_t *nht_entry;
+
+		if (nht_find_free(lpm, &nht_pos) == 0) {
+			RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__);
+			return -1;
+		}
+
+		nht_entry = &lpm->nht[nht_pos * lpm->entry_size];
+		memcpy(nht_entry, entry, lpm->entry_size);
+	}
+
+	/* Add rule to low level LPM table */
+	if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth,
+		(uint8_t) nht_pos) < 0) {
+		RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__);
+		return -1;
+	}
+
+	/* Commit NHT changes */
+	lpm->nht_users[nht_pos]++;
+	lpm->nht_users[nht_pos0] -= nht_pos0_valid;
+
+	*key_found = nht_pos0_valid;
+	*entry_ptr = (void *) &lpm->nht[nht_pos * lpm->entry_size];
+	return 0;
+}
+
+static int
+rte_table_lpm_entry_delete(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
+	struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
+	uint8_t nht_pos;
+	int status;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (ip_prefix == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) {
+		RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+			ip_prefix->depth);
+		return -EINVAL;
+	}
+
+	/* Return if rule is not present in the table */
+	status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip,
+		ip_prefix->depth, &nht_pos);
+	if (status < 0) {
+		RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__);
+		return -1;
+	}
+	if (status == 0) {
+		*key_found = 0;
+		return 0;
+	}
+
+	/* Delete rule from the low-level LPM table */
+	status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth);
+	if (status) {
+		RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__);
+		return -1;
+	}
+
+	/* Commit NHT changes */
+	lpm->nht_users[nht_pos]--;
+
+	*key_found = 1;
+	if (entry)
+		memcpy(entry, &lpm->nht[nht_pos * lpm->entry_size],
+			lpm->entry_size);
+
+	return 0;
+}
+
+static int
+rte_table_lpm_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
+	uint64_t pkts_out_mask = 0;
+	uint32_t i;
+
+	pkts_out_mask = 0;
+	for (i = 0; i < (uint32_t)(RTE_PORT_IN_BURST_SIZE_MAX -
+		__builtin_clzll(pkts_mask)); i++) {
+		uint64_t pkt_mask = 1LLU << i;
+
+		if (pkt_mask & pkts_mask) {
+			struct rte_mbuf *pkt = pkts[i];
+			uint32_t ip = rte_bswap32(
+				RTE_MBUF_METADATA_UINT32(pkt, lpm->offset));
+			int status;
+			uint8_t nht_pos;
+
+			status = rte_lpm_lookup(lpm->lpm, ip, &nht_pos);
+			if (status == 0) {
+				pkts_out_mask |= pkt_mask;
+				entries[i] = (void *) &lpm->nht[nht_pos *
+					lpm->entry_size];
+			}
+		}
+	}
+
+	*lookup_hit_mask = pkts_out_mask;
+
+	return 0;
+}
+
+struct rte_table_ops rte_table_lpm_ops = {
+	.f_create = rte_table_lpm_create,
+	.f_free = rte_table_lpm_free,
+	.f_add = rte_table_lpm_entry_add,
+	.f_delete = rte_table_lpm_entry_delete,
+	.f_lookup = rte_table_lpm_lookup,
+};
diff --git a/lib/librte_table/rte_table_lpm.h b/lib/librte_table/rte_table_lpm.h
new file mode 100644
index 0000000..c08c958
--- /dev/null
+++ b/lib/librte_table/rte_table_lpm.h
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_LPM_H__
+#define __INCLUDE_RTE_TABLE_LPM_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table LPM for IPv4
+ *
+ * This table uses the Longest Prefix Match (LPM) algorithm to uniquely
+ * associate data to lookup keys.
+ *
+ * Use-case: IP routing table. Routes that are added to the table associate a
+ * next hop to an IP prefix. The IP prefix is specified as IP address and depth
+ * and covers a multitude of lookup keys (i.e. destination IP addresses)
+ * that all share the same data (i.e. next hop). The next hop information
+ * typically contains the output interface ID, the IP address of the next hop
+ * station (which is part of the same IP network the output interface is
+ * connected to) and other flags and counters.
+ *
+ * The LPM primitive only allows associating an 8-bit number (next hop ID) to
+ * an IP prefix, while a routing table can potentially contain thousands of
+ * routes or even more. This means that the same next hop ID (and next hop
+ * information) has to be shared by multiple routes, which makes sense, as
+ * multiple remote networks could be reached through the same next hop.
+ * Therefore, when a route is added or updated, the LPM table has to check
+ * whether the same next hop is already in use before using a new next hop ID
+ * for this route.
+ *
+ * The comparison between different next hops is done for the first
+ * "entry_unique_size" bytes of the next hop information (configurable
+ * parameter), which have to uniquely identify the next hop; therefore, the
+ * user has to carefully manage the format of the LPM table entry (i.e. the
+ * next hop information) so that any next hop data that changes value during
+ * run-time (e.g. counters) is placed outside of this area.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_table.h"
+
+/** LPM table parameters */
+struct rte_table_lpm_params {
+	/** Maximum number of LPM rules (i.e. IP routes) */
+	uint32_t n_rules;
+
+	/** Number of bytes at the start of the table entry that uniquely
+	identify the entry. Cannot be bigger than table entry size. */
+	uint32_t entry_unique_size;
+
+	/** Byte offset within the input packet meta-data where the lookup key
+	(i.e. the destination IP address) is located. */
+	uint32_t offset;
+};
+
+/** LPM table rule (i.e. route), specified as IP prefix. While the key used by
+the lookup operation is the destination IP address (read from the input packet
+meta-data), the entry add and entry delete operations work with LPM rules, with
+each rule covering a multitude of lookup keys (destination IP addresses)
+that share the same data (next hop). */
+struct rte_table_lpm_key {
+	/** IP address */
+	uint32_t ip;
+
+	/** IP address depth. The most significant "depth" bits of the IP
+	address specify the network part of the IP address, while the rest of
+	the bits specify the host part of the address and are ignored for the
+	purpose of route specification. */
+	uint8_t depth;
+};
+
+/** LPM table operations */
+extern struct rte_table_ops rte_table_lpm_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 13/23] Packet Framework librte_table: LPM IPv6 table
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (11 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 12/23] Packet Framework librte_table: LPM IPv4 table Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 14/23] Packet Framework librte_table: ACL table Cristian Dumitrescu
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Routing table for IPv6.
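
The usage mirrors the IPv4 table, with a 16-byte lookup key and the extra number_tbl8s parameter; a short sketch with hypothetical values (not part of this patch, and assuming the key holds a 16-byte ip[] array plus depth, matching the rte_lpm6 API):

	#include <string.h>
	#include "rte_table_lpm_ipv6.h"

	struct rte_table_lpm_ipv6_params p = {
		.n_rules = 1024,
		.number_tbl8s = 1 << 16, /* tbl8 groups for the rte_lpm6 backend */
		.entry_unique_size = 4,
		.offset = 0,
	};

	uint8_t prefix[16] = { 0x20, 0x01, 0x0d, 0xb8 }; /* 2001:db8::/32 */
	struct rte_table_lpm_ipv6_key k = { .depth = 32 };
	memcpy(k.ip, prefix, 16);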

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_table_lpm_ipv6.c |  361 +++++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_lpm_ipv6.h |  119 +++++++++++
 2 files changed, 480 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table_lpm_ipv6.c
 create mode 100644 lib/librte_table/rte_table_lpm_ipv6.h

diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
new file mode 100644
index 0000000..e3d59d0
--- /dev/null
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -0,0 +1,361 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_lpm6.h>
+
+#include "rte_table_lpm_ipv6.h"
+
+#define RTE_TABLE_LPM_MAX_NEXT_HOPS                        256
+
+struct rte_table_lpm_ipv6 {
+	/* Input parameters */
+	uint32_t entry_size;
+	uint32_t entry_unique_size;
+	uint32_t n_rules;
+	uint32_t offset;
+
+	/* Handle to low-level LPM table */
+	struct rte_lpm6 *lpm;
+
+	/* Next Hop Table (NHT) */
+	uint32_t nht_users[RTE_TABLE_LPM_MAX_NEXT_HOPS];
+	uint8_t nht[0] __rte_cache_aligned;
+};
+
+static void *
+rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_lpm_ipv6_params *p =
+		(struct rte_table_lpm_ipv6_params *) params;
+	struct rte_table_lpm_ipv6 *lpm;
+	struct rte_lpm6_config lpm6_config;
+	uint32_t total_size, nht_size;
+
+	/* Check input parameters */
+	if (p == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__);
+		return NULL;
+	}
+	if (p->n_rules == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__);
+		return NULL;
+	}
+	if (p->number_tbl8s == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__);
+		return NULL;
+	}
+	if (p->entry_unique_size == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+			__func__);
+		return NULL;
+	}
+	if (p->entry_unique_size > entry_size) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+			__func__);
+		return NULL;
+	}
+	if ((p->offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid offset\n", __func__);
+		return NULL;
+	}
+
+	entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
+
+	/* Memory allocation */
+	nht_size = RTE_TABLE_LPM_MAX_NEXT_HOPS * entry_size;
+	total_size = sizeof(struct rte_table_lpm_ipv6) + nht_size;
+	lpm = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+		socket_id);
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for LPM IPv6 table\n",
+			__func__, total_size);
+		return NULL;
+	}
+
+	/* LPM low-level table creation */
+	lpm6_config.max_rules = p->n_rules;
+	lpm6_config.number_tbl8s = p->number_tbl8s;
+	lpm6_config.flags = 0;
+	lpm->lpm = rte_lpm6_create("LPM IPv6", socket_id, &lpm6_config);
+	if (lpm->lpm == NULL) {
+		rte_free(lpm);
+		RTE_LOG(ERR, TABLE,
+			"Unable to create low-level LPM IPv6 table\n");
+		return NULL;
+	}
+
+	/* Memory initialization */
+	lpm->entry_size = entry_size;
+	lpm->entry_unique_size = p->entry_unique_size;
+	lpm->n_rules = p->n_rules;
+	lpm->offset = p->offset;
+
+	return lpm;
+}
+
+static int
+rte_table_lpm_ipv6_free(void *table)
+{
+	struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Free previously allocated resources */
+	rte_lpm6_free(lpm->lpm);
+	rte_free(lpm);
+
+	return 0;
+}
+
+static int
+nht_find_free(struct rte_table_lpm_ipv6 *lpm, uint32_t *pos)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_TABLE_LPM_MAX_NEXT_HOPS; i++) {
+		if (lpm->nht_users[i] == 0) {
+			*pos = i;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+nht_find_existing(struct rte_table_lpm_ipv6 *lpm, void *entry, uint32_t *pos)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_TABLE_LPM_MAX_NEXT_HOPS; i++) {
+		uint8_t *nht_entry = &lpm->nht[i * lpm->entry_size];
+
+		if ((lpm->nht_users[i] > 0) && (memcmp(nht_entry, entry,
+			lpm->entry_unique_size) == 0)) {
+			*pos = i;
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+rte_table_lpm_ipv6_entry_add(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
+	struct rte_table_lpm_ipv6_key *ip_prefix =
+		(struct rte_table_lpm_ipv6_key *) key;
+	uint32_t nht_pos, nht_pos0_valid;
+	int status;
+	uint8_t nht_pos0;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (ip_prefix == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (entry == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) {
+		RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+			ip_prefix->depth);
+		return -EINVAL;
+	}
+
+	/* Check if rule is already present in the table */
+	status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip,
+		ip_prefix->depth, &nht_pos0);
+	nht_pos0_valid = status > 0;
+
+	/* Find existing or free NHT entry */
+	if (nht_find_existing(lpm, entry, &nht_pos) == 0) {
+		uint8_t *nht_entry;
+
+		if (nht_find_free(lpm, &nht_pos) == 0) {
+			RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__);
+			return -1;
+		}
+
+		nht_entry = &lpm->nht[nht_pos * lpm->entry_size];
+		memcpy(nht_entry, entry, lpm->entry_size);
+	}
+
+	/* Add rule to low level LPM table */
+	if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth,
+		(uint8_t) nht_pos) < 0) {
+		RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__);
+		return -1;
+	}
+
+	/* Commit NHT changes */
+	lpm->nht_users[nht_pos]++;
+	lpm->nht_users[nht_pos0] -= nht_pos0_valid;
+
+	*key_found = nht_pos0_valid;
+	*entry_ptr = (void *) &lpm->nht[nht_pos * lpm->entry_size];
+	return 0;
+}
+
+static int
+rte_table_lpm_ipv6_entry_delete(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
+	struct rte_table_lpm_ipv6_key *ip_prefix =
+		(struct rte_table_lpm_ipv6_key *) key;
+	uint8_t nht_pos;
+	int status;
+
+	/* Check input parameters */
+	if (lpm == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (ip_prefix == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) {
+		RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+			ip_prefix->depth);
+		return -EINVAL;
+	}
+
+	/* Return if rule is not present in the table */
+	status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip,
+		ip_prefix->depth, &nht_pos);
+	if (status < 0) {
+		RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n",
+			__func__);
+		return -1;
+	}
+	if (status == 0) {
+		*key_found = 0;
+		return 0;
+	}
+
+	/* Delete rule from the low-level LPM table */
+	status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth);
+	if (status) {
+		RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n",
+			__func__);
+		return -1;
+	}
+
+	/* Commit NHT changes */
+	lpm->nht_users[nht_pos]--;
+
+	*key_found = 1;
+	if (entry)
+		memcpy(entry, &lpm->nht[nht_pos * lpm->entry_size],
+			lpm->entry_size);
+
+	return 0;
+}
+
+static int
+rte_table_lpm_ipv6_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
+	uint64_t pkts_out_mask;
+	uint32_t i;
+
+	pkts_out_mask = 0;
+	for (i = 0; i < (uint32_t)(RTE_PORT_IN_BURST_SIZE_MAX -
+		__builtin_clzll(pkts_mask)); i++) {
+		uint64_t pkt_mask = 1LLU << i;
+
+		if (pkt_mask & pkts_mask) {
+			struct rte_mbuf *pkt = pkts[i];
+			uint8_t *ip = RTE_MBUF_METADATA_UINT8_PTR(pkt,
+				lpm->offset);
+			int status;
+			uint8_t nht_pos;
+
+			status = rte_lpm6_lookup(lpm->lpm, ip, &nht_pos);
+			if (status == 0) {
+				pkts_out_mask |= pkt_mask;
+				entries[i] = (void *) &lpm->nht[nht_pos *
+					lpm->entry_size];
+			}
+		}
+	}
+
+	*lookup_hit_mask = pkts_out_mask;
+
+	return 0;
+}
+
+struct rte_table_ops rte_table_lpm_ipv6_ops = {
+	.f_create = rte_table_lpm_ipv6_create,
+	.f_free = rte_table_lpm_ipv6_free,
+	.f_add = rte_table_lpm_ipv6_entry_add,
+	.f_delete = rte_table_lpm_ipv6_entry_delete,
+	.f_lookup = rte_table_lpm_ipv6_lookup,
+};
diff --git a/lib/librte_table/rte_table_lpm_ipv6.h b/lib/librte_table/rte_table_lpm_ipv6.h
new file mode 100644
index 0000000..91fb0d8
--- /dev/null
+++ b/lib/librte_table/rte_table_lpm_ipv6.h
@@ -0,0 +1,119 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_LPM_IPV6_H__
+#define __INCLUDE_RTE_TABLE_LPM_IPV6_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table LPM for IPv6
+ *
+ * This table uses the Longest Prefix Match (LPM) algorithm to uniquely
+ * associate data to lookup keys.
+ *
+ * Use-case: IP routing table. Routes that are added to the table associate a
+ * next hop to an IP prefix. The IP prefix is specified as an IP address and
+ * depth, and covers a multitude of lookup keys (i.e. destination IP addresses)
+ * that all share the same data (i.e. next hop). The next hop information
+ * typically contains the output interface ID, the IP address of the next hop
+ * station (which is part of the same IP network the output interface is
+ * connected to) and other flags and counters.
+ *
+ * The LPM primitive only allows associating an 8-bit number (next hop ID) to
+ * an IP prefix, while a routing table can potentially contain thousands of
+ * routes or even more. This means that the same next hop ID (and next hop
+ * information) has to be shared by multiple routes, which makes sense, as
+ * multiple remote networks could be reached through the same next hop.
+ * Therefore, when a route is added or updated, the LPM table has to check
+ * whether the same next hop is already in use before using a new next hop ID
+ * for this route.
+ *
+ * The comparison between different next hops is done for the first
+ * "entry_unique_size" bytes of the next hop information (a configurable
+ * parameter), which have to uniquely identify the next hop; therefore the
+ * user has to carefully manage the format of the LPM table entry (i.e. the
+ * next hop information) so that any next hop data that changes value during
+ * run-time (e.g. counters) is placed outside of this area.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_table.h"
+
+#define RTE_LPM_IPV6_ADDR_SIZE 16
+
+/** LPM table parameters */
+struct rte_table_lpm_ipv6_params {
+	/** Maximum number of LPM rules (i.e. IP routes) */
+	uint32_t n_rules;
+
+	/** Number of tbl8 tables to allocate in the low-level LPM table
+	(determines the capacity for rules deeper than 24 bits). */
+	uint32_t number_tbl8s;
+
+	/** Number of bytes at the start of the table entry that uniquely
+	identify the entry. Cannot be bigger than table entry size. */
+	uint32_t entry_unique_size;
+
+	/** Byte offset within input packet meta-data where lookup key (i.e.
+	the destination IP address) is located. */
+	uint32_t offset;
+};
+
+/** LPM table rule (i.e. route), specified as IP prefix. While the key used by
+the lookup operation is the destination IP address (read from the input packet
+meta-data), the entry add and entry delete operations work with LPM rules, with
+each rule covering a multitude of lookup keys (destination IP addresses)
+that share the same data (next hop). */
+struct rte_table_lpm_ipv6_key {
+	/** IP address */
+	uint8_t ip[RTE_LPM_IPV6_ADDR_SIZE];
+
+	/** IP address depth. The most significant "depth" bits of the IP
+	address specify the network part of the IP address, while the rest of
+	the bits specify the host part of the address and are ignored for the
+	purpose of route specification. */
+	uint8_t depth;
+};
+
+/** LPM table operations */
+extern struct rte_table_ops rte_table_lpm_ipv6_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
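
A minimal usage sketch for the API above (editorial illustration, not
part of the patch): the next hop entry places its unique fields first so
that "entry_unique_size" covers exactly those bytes, while the mutable
counter stays outside the compared area. The entry struct, sizes and the
meta-data offset below are illustrative assumptions.

	#include <stddef.h>
	#include "rte_table_lpm_ipv6.h"

	/* Hypothetical next hop entry layout */
	struct nht_entry {
		uint32_t port_id;        /* unique area starts here... */
		uint8_t next_hop_ip[16]; /* ...and ends before the counter */
		uint64_t n_pkts;         /* run-time counter, not compared */
	};

	struct rte_table_lpm_ipv6_params params = {
		.n_rules = 1 << 10,
		.number_tbl8s = 1 << 13,
		.entry_unique_size = offsetof(struct nht_entry, n_pkts),
		.offset = 128, /* assumed location of the dst IPv6 address */
	};

	void *table = rte_table_lpm_ipv6_ops.f_create(&params, 0,
		sizeof(struct nht_entry));

	struct rte_table_lpm_ipv6_key route = { .depth = 48 };
	struct nht_entry nh = { .port_id = 1 };
	int key_found;
	void *entry_ptr;

	/* route.ip[] would be filled with the prefix bytes here */
	rte_table_lpm_ipv6_ops.f_add(table, &route, &nh, &key_found,
		&entry_ptr);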
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 14/23] Packet Framework librte_table: ACL table
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (12 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 13/23] Packet Framework librte_table: LPM IPv6 table Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 15/23] Packet Framework librte_table: Hash tables Cristian Dumitrescu
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Packet Framework ACL table for an ACL rule database (e.g. firewall rules).

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_table_acl.c |  490 ++++++++++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_acl.h |   95 ++++++++
 2 files changed, 585 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table_acl.c
 create mode 100644 lib/librte_table/rte_table_acl.h

diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c
new file mode 100644
index 0000000..f74f22a
--- /dev/null
+++ b/lib/librte_table/rte_table_acl.c
@@ -0,0 +1,490 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_acl.h"
+#include <rte_ether.h>
+
+struct rte_table_acl {
+	/* Low-level ACL table */
+	char name[2][RTE_ACL_NAMESIZE];
+	struct rte_acl_param acl_params; /* for creating low level acl table */
+	struct rte_acl_config cfg; /* Holds the field definitions (metadata) */
+	struct rte_acl_ctx *ctx;
+	uint32_t name_id;
+
+	/* Input parameters */
+	uint32_t n_rules;
+	uint32_t entry_size;
+
+	/* Internal tables */
+	uint8_t *action_table;
+	struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */
+	uint8_t *acl_rule_memory; /* Memory to store the rules */
+
+	/* Memory to store the action table and stack of free entries */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+
+static void *
+rte_table_acl_create(
+	void *params,
+	int socket_id,
+	uint32_t entry_size)
+{
+	struct rte_table_acl_params *p = (struct rte_table_acl_params *) params;
+	struct rte_table_acl *acl;
+	uint32_t action_table_size, acl_rule_list_size, acl_rule_memory_size;
+	uint32_t total_size;
+
+	RTE_BUILD_BUG_ON(((sizeof(struct rte_table_acl) % CACHE_LINE_SIZE)
+		!= 0));
+
+	/* Check input parameters */
+	if (p == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__);
+		return NULL;
+	}
+	if (p->name == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__);
+		return NULL;
+	}
+	if (p->n_rules == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n",
+			__func__);
+		return NULL;
+	}
+	if ((p->n_rule_fields == 0) ||
+	    (p->n_rule_fields > RTE_ACL_MAX_FIELDS)) {
+		RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n",
+			__func__);
+		return NULL;
+	}
+
+	entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
+
+	/* Memory allocation */
+	action_table_size = CACHE_LINE_ROUNDUP(p->n_rules * entry_size);
+	acl_rule_list_size =
+		CACHE_LINE_ROUNDUP(p->n_rules * sizeof(struct rte_acl_rule *));
+	acl_rule_memory_size = CACHE_LINE_ROUNDUP(p->n_rules *
+		RTE_ACL_RULE_SZ(p->n_rule_fields));
+	total_size = sizeof(struct rte_table_acl) + action_table_size +
+		acl_rule_list_size + acl_rule_memory_size;
+
+	acl = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE,
+		socket_id);
+	if (acl == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for ACL table\n",
+			__func__, total_size);
+		return NULL;
+	}
+
+	acl->action_table = &acl->memory[0];
+	acl->acl_rule_list =
+		(struct rte_acl_rule **) &acl->memory[action_table_size];
+	acl->acl_rule_memory = (uint8_t *)
+		&acl->memory[action_table_size + acl_rule_list_size];
+
+	/* Initialization of internal fields */
+	rte_snprintf(acl->name[0], RTE_ACL_NAMESIZE, "%s_a", p->name);
+	rte_snprintf(acl->name[1], RTE_ACL_NAMESIZE, "%s_b", p->name);
+	acl->name_id = 1;
+
+	acl->acl_params.name = acl->name[acl->name_id];
+	acl->acl_params.socket_id = socket_id;
+	acl->acl_params.rule_size = RTE_ACL_RULE_SZ(p->n_rule_fields);
+	acl->acl_params.max_rule_num = p->n_rules;
+
+	acl->cfg.num_categories = 1;
+	acl->cfg.num_fields = p->n_rule_fields;
+	memcpy(&acl->cfg.defs[0], &p->field_format[0],
+		p->n_rule_fields * sizeof(struct rte_acl_field_def));
+
+	acl->ctx = NULL;
+
+	acl->n_rules = p->n_rules;
+	acl->entry_size = entry_size;
+
+	return acl;
+}
+
+static int
+rte_table_acl_free(void *table)
+{
+	struct rte_table_acl *acl = (struct rte_table_acl *) table;
+
+	/* Check input parameters */
+	if (table == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Free previously allocated resources */
+	if (acl->ctx != NULL)
+		rte_acl_free(acl->ctx);
+
+	rte_free(acl);
+
+	return 0;
+}
+
+RTE_ACL_RULE_DEF(rte_pipeline_acl_rule, RTE_ACL_MAX_FIELDS);
+
+static int
+rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx)
+{
+	struct rte_acl_ctx *ctx = NULL;
+	uint32_t n_rules, i;
+	int status;
+
+	/* Create low level ACL table */
+	ctx = rte_acl_create(&acl->acl_params);
+	if (ctx == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n",
+			__func__);
+		return -1;
+	}
+
+	/* Add rules to low level ACL table */
+	n_rules = 0;
+	for (i = 1; i < acl->n_rules; i++) {
+		if (acl->acl_rule_list[i] != NULL) {
+			status = rte_acl_add_rules(ctx, acl->acl_rule_list[i],
+				1);
+			if (status != 0) {
+				RTE_LOG(ERR, TABLE,
+				"%s: Cannot add rule to low level ACL table\n",
+					__func__);
+				rte_acl_free(ctx);
+				return -1;
+			}
+
+			n_rules++;
+		}
+	}
+
+	if (n_rules == 0) {
+		rte_acl_free(ctx);
+		*acl_ctx = NULL;
+		return 0;
+	}
+
+	/* Build low-level ACL table */
+	status = rte_acl_build(ctx, &acl->cfg);
+	if (status != 0) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot build the low level ACL table\n",
+			__func__);
+		rte_acl_free(ctx);
+		return -1;
+	}
+
+	rte_acl_dump(ctx);
+
+	*acl_ctx = ctx;
+	return 0;
+}
+
+static int
+rte_table_acl_entry_add(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_acl *acl = (struct rte_table_acl *) table;
+	struct rte_table_acl_rule_add_params *rule =
+		(struct rte_table_acl_rule_add_params *) key;
+	struct rte_pipeline_acl_rule acl_rule;
+	struct rte_acl_rule *rule_location;
+	struct rte_acl_ctx *ctx;
+	uint32_t free_pos, free_pos_valid, i;
+	int status;
+
+	/* Check input parameters */
+	if (table == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (entry == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key_found == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (entry_ptr == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if ((rule->priority < 0) || (rule->priority > RTE_ACL_MAX_PRIORITY)) {
+		RTE_LOG(ERR, TABLE, "%s: Priority is out of range\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* Setup rule data structure */
+	memset(&acl_rule, 0, sizeof(acl_rule));
+	acl_rule.data.category_mask = 1;
+	acl_rule.data.priority = RTE_ACL_MAX_PRIORITY - rule->priority;
+	acl_rule.data.userdata = 0; /* To be set up later */
+	memcpy(&acl_rule.field[0],
+		&rule->field_value[0],
+		acl->cfg.num_fields * sizeof(struct rte_acl_field));
+
+	/* Look to see if the rule exists already in the table */
+	free_pos = 0;
+	free_pos_valid = 0;
+	for (i = 1; i < acl->n_rules; i++) {
+		if (acl->acl_rule_list[i] == NULL) {
+			if (free_pos_valid == 0) {
+				free_pos = i;
+				free_pos_valid = 1;
+			}
+
+			continue;
+		}
+
+		/* Compare the key fields */
+		status = memcmp(&acl->acl_rule_list[i]->field[0],
+			&rule->field_value[0],
+			acl->cfg.num_fields * sizeof(struct rte_acl_field));
+
+		/* Rule found: update data associated with the rule */
+		if (status == 0) {
+			*key_found = 1;
+			*entry_ptr = &acl->memory[i * acl->entry_size];
+			memcpy(*entry_ptr, entry, acl->entry_size);
+
+			return 0;
+		}
+	}
+
+	/* Return if max rules */
+	if (free_pos_valid == 0) {
+		RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n",
+			__func__);
+		return -ENOSPC;
+	}
+
+	/* Add the new rule to the rule set */
+	acl_rule.data.userdata = free_pos;
+	rule_location = (struct rte_acl_rule *)
+		&acl->acl_rule_memory[free_pos * acl->acl_params.rule_size];
+	memcpy(rule_location, &acl_rule, acl->acl_params.rule_size);
+	acl->acl_rule_list[free_pos] = rule_location;
+
+	/* Build low level ACL table */
+	acl->name_id ^= 1;
+	acl->acl_params.name = acl->name[acl->name_id];
+	status = rte_table_acl_build(acl, &ctx);
+	if (status != 0) {
+		/* Roll back changes */
+		acl->acl_rule_list[free_pos] = NULL;
+		acl->name_id ^= 1;
+
+		return -EINVAL;
+	}
+
+	/* Commit changes */
+	if (acl->ctx != NULL)
+		rte_acl_free(acl->ctx);
+	acl->ctx = ctx;
+	*key_found = 0;
+	*entry_ptr = &acl->memory[free_pos * acl->entry_size];
+	memcpy(*entry_ptr, entry, acl->entry_size);
+
+	return 0;
+}
+
+static int
+rte_table_acl_entry_delete(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_acl *acl = (struct rte_table_acl *) table;
+	struct rte_table_acl_rule_delete_params *rule =
+		(struct rte_table_acl_rule_delete_params *) key;
+	struct rte_acl_rule *deleted_rule = NULL;
+	struct rte_acl_ctx *ctx;
+	uint32_t pos, pos_valid, i;
+	int status;
+
+	/* Check input parameters */
+	if (table == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key_found == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* Look for the rule in the table */
+	pos = 0;
+	pos_valid = 0;
+	for (i = 1; i < acl->n_rules; i++) {
+		if (acl->acl_rule_list[i] != NULL) {
+			/* Compare the key fields */
+			status = memcmp(&acl->acl_rule_list[i]->field[0],
+				&rule->field_value[0], acl->cfg.num_fields *
+				sizeof(struct rte_acl_field));
+
+			/* Rule found: remove from table */
+			if (status == 0) {
+				pos = i;
+				pos_valid = 1;
+
+				deleted_rule = acl->acl_rule_list[i];
+				acl->acl_rule_list[i] = NULL;
+			}
+		}
+	}
+
+	/* Return if rule not found */
+	if (pos_valid == 0) {
+		*key_found = 0;
+		return 0;
+	}
+
+	/* Build low level ACL table */
+	acl->name_id ^= 1;
+	acl->acl_params.name = acl->name[acl->name_id];
+	status = rte_table_acl_build(acl, &ctx);
+	if (status != 0) {
+		/* Roll back changes */
+		acl->acl_rule_list[pos] = deleted_rule;
+		acl->name_id ^= 1;
+
+		return -EINVAL;
+	}
+
+	/* Commit changes */
+	if (acl->ctx != NULL)
+		rte_acl_free(acl->ctx);
+
+	acl->ctx = ctx;
+	*key_found = 1;
+	if (entry != NULL)
+		memcpy(entry, &acl->memory[pos * acl->entry_size],
+			acl->entry_size);
+
+	return 0;
+}
+
+static int
+rte_table_acl_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_acl *acl = (struct rte_table_acl *) table;
+	const uint8_t *pkts_data[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint32_t results[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t pkts_out_mask;
+	uint32_t n_pkts, i, j;
+
+	/* Input conversion */
+	for (i = 0, j = 0; i < (uint32_t)(RTE_PORT_IN_BURST_SIZE_MAX -
+		__builtin_clzll(pkts_mask)); i++) {
+		uint64_t pkt_mask = 1LLU << i;
+
+		if (pkt_mask & pkts_mask) {
+			pkts_data[j] = rte_pktmbuf_mtod(pkts[i], uint8_t *);
+			j++;
+		}
+	}
+	n_pkts = j;
+
+	/* Low-level ACL table lookup */
+	if (acl->ctx != NULL)
+		rte_acl_classify(acl->ctx, pkts_data, results, n_pkts, 1);
+	else
+		n_pkts = 0;
+
+	/* Output conversion */
+	pkts_out_mask = 0;
+	for (i = 0; i < n_pkts; i++) {
+		uint32_t action_table_pos = results[i];
+		uint32_t pkt_pos = __builtin_ctzll(pkts_mask);
+		uint64_t pkt_mask = 1LLU << pkt_pos;
+
+		pkts_mask &= ~pkt_mask;
+
+		if (action_table_pos != RTE_ACL_INVALID_USERDATA) {
+			pkts_out_mask |= pkt_mask;
+			entries[pkt_pos] = (void *)
+				&acl->memory[action_table_pos *
+				acl->entry_size];
+			rte_prefetch0(entries[pkt_pos]);
+		}
+	}
+
+	*lookup_hit_mask = pkts_out_mask;
+
+	return 0;
+}
+
+struct rte_table_ops rte_table_acl_ops = {
+	.f_create = rte_table_acl_create,
+	.f_free = rte_table_acl_free,
+	.f_add = rte_table_acl_entry_add,
+	.f_delete = rte_table_acl_entry_delete,
+	.f_lookup = rte_table_acl_lookup,
+};
diff --git a/lib/librte_table/rte_table_acl.h b/lib/librte_table/rte_table_acl.h
new file mode 100644
index 0000000..a9cc032
--- /dev/null
+++ b/lib/librte_table/rte_table_acl.h
@@ -0,0 +1,95 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_ACL_H__
+#define __INCLUDE_RTE_TABLE_ACL_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table ACL
+ *
+ * This table uses the Access Control List (ACL) algorithm to uniquely
+ * associate data to lookup keys.
+ *
+ * Use-cases: Firewall rule database, etc.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_acl.h"
+
+#include "rte_table.h"
+
+/** ACL table parameters */
+struct rte_table_acl_params {
+	/** Name */
+	const char *name;
+
+	/** Maximum number of ACL rules in the table */
+	uint32_t n_rules;
+
+	/** Number of fields in the ACL rule specification */
+	uint32_t n_rule_fields;
+
+	/** Format specification of the fields of the ACL rule */
+	struct rte_acl_field_def field_format[RTE_ACL_MAX_FIELDS];
+};
+
+/** ACL rule specification for entry add operation */
+struct rte_table_acl_rule_add_params {
+	/** ACL rule priority, with 0 as the highest priority */
+	int32_t  priority;
+
+	/** Values for the fields of the ACL rule to be added to the table */
+	struct rte_acl_field field_value[RTE_ACL_MAX_FIELDS];
+};
+
+/** ACL rule specification for entry delete operation */
+struct rte_table_acl_rule_delete_params {
+	/** Values for the fields of the ACL rule to be deleted from table */
+	struct rte_acl_field field_value[RTE_ACL_MAX_FIELDS];
+};
+
+/** ACL table operations */
+extern struct rte_table_ops rte_table_acl_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
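
A minimal sketch of adding one rule through the API above (editorial
illustration, not part of the patch). The single match field, its packet
offset and the action value are illustrative assumptions.

	#include <netinet/in.h>
	#include "rte_table_acl.h"

	struct rte_table_acl_params p = {
		.name = "acl0",
		.n_rules = 1 << 5,
		.n_rule_fields = 1,
		.field_format = {
			[0] = {
				.type = RTE_ACL_FIELD_TYPE_BITMASK,
				.size = sizeof(uint8_t),
				.field_index = 0,
				.input_index = 0,
				.offset = 9, /* assumed: IPv4 proto byte */
			},
		},
	};

	void *table = rte_table_acl_ops.f_create(&p, 0, sizeof(uint64_t));

	/* Match TCP with the highest priority */
	struct rte_table_acl_rule_add_params rule = { .priority = 0 };
	rule.field_value[0].value.u8 = IPPROTO_TCP;
	rule.field_value[0].mask_range.u8 = 0xFF;

	uint64_t action = 1; /* hypothetical action table entry */
	int key_found;
	void *entry_ptr;
	rte_table_acl_ops.f_add(table, &rule, &action, &key_found,
		&entry_ptr);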
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 15/23] Packet Framework librte_table: Hash tables
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (13 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 14/23] Packet Framework librte_table: ACL table Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 16/23] Packet Framework librte_table: array table Cristian Dumitrescu
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Various types of hash tables are provided as part of the Packet Framework toolbox.

Hash table types:
1. Extendible bucket (ext): when a bucket is full, it is extended with space for more keys
2. Least Recently Used (LRU): when a bucket is full, the LRU entry is discarded
3. Pre-computed key signature: the RX core extracts the key n-tuple from the packet, computes the key signature and saves both the key and the key signature within the packet meta-data; the flow classification core performs the actual lookup (the bucket search stage) after reading the key and the key signature from the packet meta-data
4. Signature computed on-the-fly (do-sig version): the same CPU core extracts the key n-tuple from the packet, computes the key signature and performs the table lookup
5. Key size: either configurable or optimized for a single key size (8-byte, 16-byte and 32-byte keys)

Please check out the Intel DPDK documentation for more details on these hash tables; a short configuration sketch follows below.
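
As a quick illustration of the configurable key size API (sketch only, not part of the patch): the application supplies a hash callback matching the rte_table_hash_op_hash prototype and fills in the LRU parameter structure. The toy hash, table sizes and meta-data offsets are assumptions.

	#include "rte_table_hash.h"

	/* Toy hash for illustration only: XOR-folds a 16-byte key */
	static uint64_t
	my_hash(void *key, uint32_t key_size, uint64_t seed)
	{
		uint64_t *k = key;

		(void) key_size;
		return k[0] ^ k[1] ^ seed;
	}

	struct rte_table_hash_lru_params p = {
		.key_size = 16,
		.n_keys = 1 << 16,
		.n_buckets = 1 << 14,  /* each bucket holds up to 4 keys */
		.f_hash = my_hash,
		.seed = 0,
		.signature_offset = 0, /* pre-computed sig in meta-data */
		.key_offset = 32,      /* key location in meta-data */
	};

	void *table = rte_table_hash_lru_ops.f_create(&p, 0, 8);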

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_lru.h              |  213 +++++
 lib/librte_table/rte_table_hash.h       |  350 ++++++++
 lib/librte_table/rte_table_hash_ext.c   | 1122 +++++++++++++++++++++++++
 lib/librte_table/rte_table_hash_key16.c | 1100 ++++++++++++++++++++++++
 lib/librte_table/rte_table_hash_key32.c | 1120 +++++++++++++++++++++++++
 lib/librte_table/rte_table_hash_key8.c  | 1398 +++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_hash_lru.c   | 1065 +++++++++++++++++++++++
 7 files changed, 6368 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_lru.h
 create mode 100644 lib/librte_table/rte_table_hash.h
 create mode 100644 lib/librte_table/rte_table_hash_ext.c
 create mode 100644 lib/librte_table/rte_table_hash_key16.c
 create mode 100644 lib/librte_table/rte_table_hash_key32.c
 create mode 100644 lib/librte_table/rte_table_hash_key8.c
 create mode 100644 lib/librte_table/rte_table_hash_lru.c

diff --git a/lib/librte_table/rte_lru.h b/lib/librte_table/rte_lru.h
new file mode 100644
index 0000000..e87e062
--- /dev/null
+++ b/lib/librte_table/rte_lru.h
@@ -0,0 +1,213 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_LRU_H__
+#define __INCLUDE_RTE_LRU_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#ifdef __INTEL_COMPILER
+#define GCC_VERSION (0)
+#else
+#define GCC_VERSION \
+	(__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
+#endif
+
+#ifndef RTE_TABLE_HASH_LRU_STRATEGY
+#ifdef __SSE4_2__
+#define RTE_TABLE_HASH_LRU_STRATEGY                        2
+#else /* if no SSE, use simple scalar version */
+#define RTE_TABLE_HASH_LRU_STRATEGY                        1
+#endif
+#endif
+
+#ifndef RTE_ARCH_X86_64
+#undef RTE_TABLE_HASH_LRU_STRATEGY
+#define RTE_TABLE_HASH_LRU_STRATEGY                        1
+#endif
+
+#if (RTE_TABLE_HASH_LRU_STRATEGY < 0) || (RTE_TABLE_HASH_LRU_STRATEGY > 3)
+#error Invalid value for RTE_TABLE_HASH_LRU_STRATEGY
+#endif
+
+#if RTE_TABLE_HASH_LRU_STRATEGY == 0
+
+#define lru_init(bucket)						\
+do									\
+	bucket = bucket;						\
+while (0)
+
+#define lru_pos(bucket) (bucket->lru_list & 0xFFFFLLU)
+
+#define lru_update(bucket, mru_val)					\
+do {									\
+	bucket = bucket;						\
+	mru_val = mru_val;						\
+} while (0)
+
+#elif RTE_TABLE_HASH_LRU_STRATEGY == 1
+
+#define lru_init(bucket)						\
+do									\
+	bucket->lru_list = 0x0000000100020003LLU;			\
+while (0)
+
+#define lru_pos(bucket) (bucket->lru_list & 0xFFFFLLU)
+
+#define lru_update(bucket, mru_val)					\
+do {									\
+	uint64_t x, pos, x0, x1, x2, mask;				\
+									\
+	x = bucket->lru_list;						\
+									\
+	pos = 4;							\
+	if ((x >> 48) == ((uint64_t) mru_val))				\
+		pos = 3;						\
+									\
+	if (((x >> 32) & 0xFFFFLLU) == ((uint64_t) mru_val))		\
+		pos = 2;						\
+									\
+	if (((x >> 16) & 0xFFFFLLU) == ((uint64_t) mru_val))		\
+		pos = 1;						\
+									\
+	if ((x & 0xFFFFLLU) == ((uint64_t) mru_val))			\
+		pos = 0;						\
+									\
+									\
+	pos <<= 4;							\
+	mask = (~0LLU) << pos;						\
+	x0 = x & (~mask);						\
+	x1 = (x >> 16) & mask;						\
+	x2 = (x << (48 - pos)) & (0xFFFFLLU << 48);			\
+	x = x0 | x1 | x2;						\
+									\
+	if (pos != 64)							\
+		bucket->lru_list = x;					\
+} while (0)
+
+#elif RTE_TABLE_HASH_LRU_STRATEGY == 2
+
+#if GCC_VERSION > 40306
+#include <x86intrin.h>
+#else
+#include <emmintrin.h>
+#include <smmintrin.h>
+#include <xmmintrin.h>
+#endif
+
+#define lru_init(bucket)						\
+do									\
+	bucket->lru_list = 0x0000000100020003LLU;			\
+while (0)
+
+#define lru_pos(bucket) (bucket->lru_list & 0xFFFFLLU)
+
+#define lru_update(bucket, mru_val)					\
+do {									\
+	/* set up the masks for all possible shuffles, depends on pos */\
+	static uint64_t masks[10] = {					\
+		/* Shuffle order; Make Zero (see _mm_shuffle_epi8 manual) */\
+		0x0100070605040302, 0x8080808080808080,			\
+		0x0302070605040100, 0x8080808080808080,			\
+		0x0504070603020100, 0x8080808080808080,			\
+		0x0706050403020100, 0x8080808080808080,			\
+		0x0706050403020100, 0x8080808080808080};		\
+	/* load up one register with repeats of mru-val  */		\
+	uint64_t mru2 = mru_val;					\
+	uint64_t mru3 = mru2 | (mru2 << 16);				\
+	uint64_t lru = bucket->lru_list;				\
+	/* XOR to cause the word we're looking for to go to zero */	\
+	uint64_t mru = lru ^ ((mru3 << 32) | mru3);			\
+	__m128i c = _mm_cvtsi64_si128(mru);				\
+	__m128i b = _mm_cvtsi64_si128(lru);				\
+	/* Find the minimum value (first zero word, if it's in there) */\
+	__m128i d = _mm_minpos_epu16(c);				\
+	/* Second word is the index to found word (first word is the value) */\
+	unsigned pos = _mm_extract_epi16(d, 1);				\
+	/* move the recently used location to top of list */		\
+	__m128i k = _mm_shuffle_epi8(b, *((__m128i *) &masks[2 * pos]));\
+	/* Finally, update the original list with the reordered data */	\
+	bucket->lru_list = _mm_extract_epi64(k, 0);			\
+	/* Phwew! */							\
+} while (0)
+
+#elif RTE_TABLE_HASH_LRU_STRATEGY == 3
+
+#if GCC_VERSION > 40306
+#include <x86intrin.h>
+#else
+#include <emmintrin.h>
+#include <smmintrin.h>
+#include <xmmintrin.h>
+#endif
+
+#define lru_init(bucket)						\
+do									\
+	bucket->lru_list = ~0LLU;					\
+while (0)
+
+
+static inline int
+f_lru_pos(uint64_t lru_list)
+{
+	__m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
+	__m128i min = _mm_minpos_epu16(lst);
+	return _mm_extract_epi16(min, 1);
+}
+#define lru_pos(bucket) f_lru_pos(bucket->lru_list)
+
+#define lru_update(bucket, mru_val)					\
+do {									\
+	const uint64_t orvals[] = {0xFFFFLLU, 0xFFFFLLU << 16,		\
+		0xFFFFLLU << 32, 0xFFFFLLU << 48, 0LLU};		\
+	const uint64_t decs[] = {0x1000100010001LLU, 0};		\
+	__m128i lru = _mm_cvtsi64_si128(bucket->lru_list);		\
+	__m128i vdec = _mm_cvtsi64_si128(decs[mru_val>>2]);		\
+	lru = _mm_subs_epu16(lru, vdec);				\
+	bucket->lru_list = _mm_extract_epi64(lru, 0) | orvals[mru_val];	\
+} while (0)
+
+#else
+
+#error "Incorrect value for RTE_TABLE_HASH_LRU_STRATEGY"
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
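
The macros above are used by the hash table implementations roughly as follows (sketch only; the real bucket layouts live in the .c files and carry signature and key-index arrays next to the 64-bit LRU list):

	/* Hypothetical bucket holding only the LRU state */
	struct bucket {
		uint64_t lru_list; /* 4 x 16-bit entry indices; for
				      strategies 1 and 2 the LRU entry
				      sits in the low word */
	};

	struct bucket b;
	struct bucket *bkt = &b;
	uint32_t mru = 1;

	lru_init(bkt);        /* e.g. 0x0000000100020003: entry 3 is LRU */
	lru_update(bkt, mru); /* entry 1 becomes the MRU entry */
	/* lru_pos() names the eviction candidate when the bucket is full */
	uint32_t victim = lru_pos(bkt);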
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
new file mode 100644
index 0000000..9181942
--- /dev/null
+++ b/lib/librte_table/rte_table_hash.h
@@ -0,0 +1,350 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_HASH_H__
+#define __INCLUDE_RTE_TABLE_HASH_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table Hash
+ *
+ * These tables use the exact match criterion to uniquely associate data to
+ * lookup keys.
+ *
+ * Use-cases: Flow classification table, Address Resolution Protocol (ARP) table
+ *
+ * Hash table types:
+ * 1. Entry add strategy on bucket full:
+ *     a. Least Recently Used (LRU): One of the existing keys in the bucket is
+ *        deleted and the new key is added in its place. The number of keys in
+ *        each bucket never grows bigger than 4. The logic to pick the key to
+ *        be dropped from the bucket is LRU. The hash table lookup operation
+ *        maintains the order in which the keys in the same bucket are hit, so
+ *        every time a key is hit, it becomes the new Most Recently Used (MRU)
+ *        key, i.e. the most unlikely candidate for drop. When a key is added
+ *        to the bucket, it also becomes the new MRU key. When a key needs to
+ *        be picked and dropped, the most likely candidate for drop, i.e. the
+ *        current LRU key, is always picked. The LRU logic requires maintaining
+ *        specific data structures per each bucket.
+ *     b. Extendible bucket (ext): The bucket is extended with space for 4 more
+ *        keys. This is done by allocating additional memory at table init time,
+ *        which is used to create a pool of free keys (the size of this pool is
+ *        configurable and always a multiple of 4). On key add operation, the
+ *        allocation of a group of 4 keys only happens successfully within the
+ *        limit of free keys, otherwise the key add operation fails. On key
+ *        delete operation, a group of 4 keys is freed back to the pool of free
+ *        keys when the key to be deleted is the only key that was used within
+ *        its group of 4 keys at that time. On key lookup operation, if the
+ *        current bucket is in extended state and a match is not found in the
+ *        first group of 4 keys, the search continues beyond the first group of
+ *        4 keys, potentially until all keys in this bucket are examined. The
+ *        extendible bucket logic requires maintaining specific data structures
+ *        per table and per each bucket.
+ * 2. Key signature computation:
+ *     a. Pre-computed key signature: The key lookup operation is split between
+ *        two CPU cores. The first CPU core (typically the CPU core performing
+ *        packet RX) extracts the key from the input packet, computes the key
+ *        signature and saves both the key and the key signature in the packet
+ *        buffer as packet meta-data. The second CPU core reads both the key and
+ *        the key signature from the packet meta-data and performs the bucket
+ *        search step of the key lookup operation.
+ *     b. Key signature computed on lookup (do-sig): The same CPU core reads
+ *        the key from the packet meta-data, uses it to compute the key
+ *        signature and also performs the bucket search step of the key lookup
+ *        operation.
+ * 3. Key size:
+ *     a. Configurable key size
+ *     b. Single key size (8-byte, 16-byte or 32-byte key size)
+ *
+ ***/
+#include <stdint.h>
+
+#include "rte_table.h"
+
+/** Hash function */
+typedef uint64_t (*rte_table_hash_op_hash)(
+	void *key,
+	uint32_t key_size,
+	uint64_t seed);
+
+/**
+ * Hash tables with configurable key size
+ *
+ */
+/** Extendible bucket hash table parameters */
+struct rte_table_hash_ext_params {
+	/** Key size (number of bytes) */
+	uint32_t key_size;
+
+	/** Maximum number of keys */
+	uint32_t n_keys;
+
+	/** Number of hash table buckets. Each bucket stores up to 4 keys. */
+	uint32_t n_buckets;
+
+	/** Number of hash table bucket extensions. Each bucket extension has
+	space for 4 keys and each bucket can have 0, 1 or more extensions. */
+	uint32_t n_buckets_ext;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed value for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** Extendible bucket hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_ext_ops;
+
+/** Extendible bucket hash table operations for key signature computed on
+	lookup ("do-sig") */
+extern struct rte_table_ops rte_table_hash_ext_dosig_ops;
+
+/** LRU hash table parameters */
+struct rte_table_hash_lru_params {
+	/** Key size (number of bytes) */
+	uint32_t key_size;
+
+	/** Maximum number of keys */
+	uint32_t n_keys;
+
+	/** Number of hash table buckets. Each bucket stores up to 4 keys. */
+	uint32_t n_buckets;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed value for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** LRU hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_lru_ops;
+
+/** LRU hash table operations for key signature computed on lookup ("do-sig") */
+extern struct rte_table_ops rte_table_hash_lru_dosig_ops;
+
+/**
+ * 8-byte key hash tables
+ *
+ */
+/** LRU hash table parameters */
+struct rte_table_hash_key8_lru_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** LRU hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_key8_lru_ops;
+
+/** LRU hash table operations for key signature computed on lookup ("do-sig") */
+extern struct rte_table_ops rte_table_hash_key8_lru_dosig_ops;
+
+/** Extendible bucket hash table parameters */
+struct rte_table_hash_key8_ext_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Number of entries (and keys) for hash table bucket extensions. Each
+		bucket is extended in increments of 4 keys. */
+	uint32_t n_entries_ext;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** Extendible bucket hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_key8_ext_ops;
+
+/** Extendible bucket hash table operations for key signature computed on
+    lookup ("do-sig") */
+extern struct rte_table_ops rte_table_hash_key8_ext_dosig_ops;
+
+/**
+ * 16-byte key hash tables
+ *
+ */
+/** LRU hash table parameters */
+struct rte_table_hash_key16_lru_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** LRU hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_key16_lru_ops;
+
+/** Extendible bucket hash table parameters */
+struct rte_table_hash_key16_ext_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Number of entries (and keys) for hash table bucket extensions. Each
+	bucket is extended in increments of 4 keys. */
+	uint32_t n_entries_ext;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** Extendible bucket operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_key16_ext_ops;
+
+/**
+ * 32-byte key hash tables
+ *
+ */
+/** LRU hash table parameters */
+struct rte_table_hash_key32_lru_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** LRU hash table operations for pre-computed key signature */
+extern struct rte_table_ops rte_table_hash_key32_lru_ops;
+
+/** Extendible bucket hash table parameters */
+struct rte_table_hash_key32_ext_params {
+	/** Maximum number of entries (and keys) in the table */
+	uint32_t n_entries;
+
+	/** Number of entries (and keys) for hash table bucket extensions. Each
+		bucket is extended in increments of 4 keys. */
+	uint32_t n_entries_ext;
+
+	/** Hash function */
+	rte_table_hash_op_hash f_hash;
+
+	/** Seed for the hash function */
+	uint64_t seed;
+
+	/** Byte offset within packet meta-data where the 4-byte key signature
+	is located. Valid for pre-computed key signature tables, ignored for
+	do-sig tables. */
+	uint32_t signature_offset;
+
+	/** Byte offset within packet meta-data where the key is located */
+	uint32_t key_offset;
+};
+
+/** Extendible bucket hash table operations */
+extern struct rte_table_ops rte_table_hash_key32_ext_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
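
For the extendible bucket flavor described in the header comment above, the extra sizing decision is the extension pool (sketch only; the values are assumptions and my_hash is the callback sketched in the commit message):

	struct rte_table_hash_ext_params ep = {
		.key_size = 16,
		.n_keys = 1 << 16,
		.n_buckets = 1 << 14,
		.n_buckets_ext = 1 << 12, /* pool of 4-key extensions */
		.f_hash = my_hash,
		.seed = 0,
		.signature_offset = 0,
		.key_offset = 32,
	};

	void *ext_table = rte_table_hash_ext_ops.f_create(&ep, 0, 8);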
diff --git a/lib/librte_table/rte_table_hash_ext.c b/lib/librte_table/rte_table_hash_ext.c
new file mode 100644
index 0000000..6e26d98
--- /dev/null
+++ b/lib/librte_table/rte_table_hash_ext.c
@@ -0,0 +1,1122 @@
+/*-
+ *	 BSD LICENSE
+ *
+ *	 Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *	 All rights reserved.
+ *
+ *	 Redistribution and use in source and binary forms, with or without
+ *	 modification, are permitted provided that the following conditions
+ *	 are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *		 notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *		 notice, this list of conditions and the following disclaimer in
+ *		 the documentation and/or other materials provided with the
+ *		 distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *		 contributors may be used to endorse or promote products derived
+ *		 from this software without specific prior written permission.
+ *
+ *	 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *	 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *	 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *	 A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *	 OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *	 SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *	 LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *	 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *	 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *	 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *	 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_hash.h"
+
+#define KEYS_PER_BUCKET	4
+
+struct bucket {
+	union {
+		uintptr_t next;
+		uint64_t lru_list;
+	};
+	uint16_t sig[KEYS_PER_BUCKET];
+	uint32_t key_pos[KEYS_PER_BUCKET];
+};
+
+#define BUCKET_NEXT(bucket)						\
+	((void *) ((bucket)->next & (~1LU)))
+
+#define BUCKET_NEXT_VALID(bucket)					\
+	((bucket)->next & 1LU)
+
+#define BUCKET_NEXT_SET(bucket, bucket_next)				\
+do									\
+	(bucket)->next = (((uintptr_t) ((void *) (bucket_next))) | 1LU);\
+while (0)
+
+#define BUCKET_NEXT_SET_NULL(bucket)					\
+do									\
+	(bucket)->next = 0;						\
+while (0)
+
+#define BUCKET_NEXT_COPY(bucket, bucket2)				\
+do									\
+	(bucket)->next = (bucket2)->next;				\
+while (0)
+
+struct grinder {
+	struct bucket *bkt;
+	uint64_t sig;
+	uint64_t match;
+	uint32_t key_index;
+};
+
+struct rte_table_hash {
+	/* Input parameters */
+	uint32_t key_size;
+	uint32_t entry_size;
+	uint32_t n_keys;
+	uint32_t n_buckets;
+	uint32_t n_buckets_ext;
+	rte_table_hash_op_hash f_hash;
+	uint64_t seed;
+	uint32_t signature_offset;
+	uint32_t key_offset;
+
+	/* Internal */
+	uint64_t bucket_mask;
+	uint32_t key_size_shl;
+	uint32_t data_size_shl;
+	uint32_t key_stack_tos;
+	uint32_t bkt_ext_stack_tos;
+
+	/* Grinder */
+	struct grinder grinders[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Tables */
+	struct bucket *buckets;
+	struct bucket *buckets_ext;
+	uint8_t *key_mem;
+	uint8_t *data_mem;
+	uint32_t *key_stack;
+	uint32_t *bkt_ext_stack;
+
+	/* Table memory */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+static int
+check_params_create(struct rte_table_hash_ext_params *params)
+{
+	uint32_t n_buckets_min;
+
+	/* key_size */
+	if ((params->key_size == 0) ||
+		(!rte_is_power_of_2(params->key_size))) {
+		RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_keys */
+	if ((params->n_keys == 0) ||
+		(!rte_is_power_of_2(params->n_keys))) {
+		RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_buckets */
+	n_buckets_min = (params->n_keys + KEYS_PER_BUCKET - 1) /
+		KEYS_PER_BUCKET;
+	if ((params->n_buckets == 0) ||
+		(!rte_is_power_of_2(params->n_buckets)) ||
+		(params->n_buckets < n_buckets_min)) {
+		RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: signature_offset invalid value\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: key_offset invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_hash_ext_params *p =
+		(struct rte_table_hash_ext_params *) params;
+	struct rte_table_hash *t;
+	uint32_t total_size, table_meta_sz, table_meta_offset;
+	uint32_t bucket_sz, bucket_ext_sz, key_sz;
+	uint32_t key_stack_sz, bkt_ext_stack_sz, data_sz;
+	uint32_t bucket_offset, bucket_ext_offset, key_offset;
+	uint32_t key_stack_offset, bkt_ext_stack_offset, data_offset;
+	uint32_t i;
+
+	/* Check input parameters */
+	if ((check_params_create(p) != 0) ||
+		(!rte_is_power_of_2(entry_size)) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		(sizeof(struct bucket) != (CACHE_LINE_SIZE / 2)))
+		return NULL;
+
+	/* Memory allocation */
+	table_meta_sz = CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
+	bucket_sz = CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
+	bucket_ext_sz =
+		CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(struct bucket));
+	key_sz = CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
+	key_stack_sz = CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
+	bkt_ext_stack_sz =
+		CACHE_LINE_ROUNDUP(p->n_buckets_ext * sizeof(uint32_t));
+	data_sz = CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
+	total_size = table_meta_sz + bucket_sz + bucket_ext_sz + key_sz +
+		key_stack_sz + bkt_ext_stack_sz + data_sz;
+
+	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (t == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table memory footprint is "
+		"%u bytes\n", __func__, p->key_size, total_size);
+
+	/* Memory initialization */
+	t->key_size = p->key_size;
+	t->entry_size = entry_size;
+	t->n_keys = p->n_keys;
+	t->n_buckets = p->n_buckets;
+	t->n_buckets_ext = p->n_buckets_ext;
+	t->f_hash = p->f_hash;
+	t->seed = p->seed;
+	t->signature_offset = p->signature_offset;
+	t->key_offset = p->key_offset;
+
+	/* Internal */
+	t->bucket_mask = t->n_buckets - 1;
+	t->key_size_shl = __builtin_ctzl(p->key_size);
+	t->data_size_shl = __builtin_ctzl(entry_size);
+
+	/* Tables */
+	table_meta_offset = 0;
+	bucket_offset = table_meta_offset + table_meta_sz;
+	bucket_ext_offset = bucket_offset + bucket_sz;
+	key_offset = bucket_ext_offset + bucket_ext_sz;
+	key_stack_offset = key_offset + key_sz;
+	bkt_ext_stack_offset = key_stack_offset + key_stack_sz;
+	data_offset = bkt_ext_stack_offset + bkt_ext_stack_sz;
+
+	t->buckets = (struct bucket *) &t->memory[bucket_offset];
+	t->buckets_ext = (struct bucket *) &t->memory[bucket_ext_offset];
+	t->key_mem = &t->memory[key_offset];
+	t->key_stack = (uint32_t *) &t->memory[key_stack_offset];
+	t->bkt_ext_stack = (uint32_t *) &t->memory[bkt_ext_stack_offset];
+	t->data_mem = &t->memory[data_offset];
+
+	/* Key stack */
+	for (i = 0; i < t->n_keys; i++)
+		t->key_stack[i] = t->n_keys - 1 - i;
+	t->key_stack_tos = t->n_keys;
+
+	/* Bucket ext stack */
+	for (i = 0; i < t->n_buckets_ext; i++)
+		t->bkt_ext_stack[i] = t->n_buckets_ext - 1 - i;
+	t->bkt_ext_stack_tos = t->n_buckets_ext;
+
+	return t;
+}
+
+static int
+rte_table_hash_ext_free(void *table)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (t == NULL)
+		return -EINVAL;
+
+	rte_free(t);
+	return 0;
+}
+
+static int
+rte_table_hash_ext_entry_add(void *table, void *key, void *entry,
+	int *key_found, void **entry_ptr)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct bucket *bkt0, *bkt, *bkt_prev;
+	uint64_t sig;
+	uint32_t bkt_index, i;
+
+	sig = t->f_hash(key, t->key_size, t->seed);
+	bkt_index = sig & t->bucket_mask;
+	bkt0 = &t->buckets[bkt_index];
+	sig = (sig >> 16) | 1LLU;
+
+	/* Key is present in the bucket */
+	for (bkt = bkt0; bkt != NULL; bkt = BUCKET_NEXT(bkt))
+		for (i = 0; i < KEYS_PER_BUCKET; i++) {
+			uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+			uint32_t bkt_key_index = bkt->key_pos[i];
+			uint8_t *bkt_key =
+				&t->key_mem[bkt_key_index << t->key_size_shl];
+
+			if ((sig == bkt_sig) && (memcmp(key, bkt_key,
+				t->key_size) == 0)) {
+				uint8_t *data = &t->data_mem[bkt_key_index <<
+					t->data_size_shl];
+
+				memcpy(data, entry, t->entry_size);
+				*key_found = 1;
+				*entry_ptr = (void *) data;
+				return 0;
+			}
+		}
+
+	/* Key is not present in the bucket */
+	for (bkt_prev = NULL, bkt = bkt0; bkt != NULL; bkt_prev = bkt,
+		bkt = BUCKET_NEXT(bkt))
+		for (i = 0; i < KEYS_PER_BUCKET; i++) {
+			uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+
+			if (bkt_sig == 0) {
+				uint32_t bkt_key_index;
+				uint8_t *bkt_key, *data;
+
+				/* Allocate new key */
+				if (t->key_stack_tos == 0) /* No free keys */
+					return -ENOSPC;
+
+				bkt_key_index = t->key_stack[
+					--t->key_stack_tos];
+
+				/* Install new key */
+				bkt_key = &t->key_mem[bkt_key_index <<
+					t->key_size_shl];
+				data = &t->data_mem[bkt_key_index <<
+					t->data_size_shl];
+
+				bkt->sig[i] = (uint16_t) sig;
+				bkt->key_pos[i] = bkt_key_index;
+				memcpy(bkt_key, key, t->key_size);
+				memcpy(data, entry, t->entry_size);
+
+				*key_found = 0;
+				*entry_ptr = (void *) data;
+				return 0;
+			}
+		}
+
+	/* Bucket full: extend bucket */
+	if ((t->bkt_ext_stack_tos > 0) && (t->key_stack_tos > 0)) {
+		uint32_t bkt_key_index;
+		uint8_t *bkt_key, *data;
+
+		/* Allocate new bucket ext */
+		bkt_index = t->bkt_ext_stack[--t->bkt_ext_stack_tos];
+		bkt = &t->buckets_ext[bkt_index];
+
+		/* Chain the new bucket ext */
+		BUCKET_NEXT_SET(bkt_prev, bkt);
+		BUCKET_NEXT_SET_NULL(bkt);
+
+		/* Allocate new key */
+		bkt_key_index = t->key_stack[--t->key_stack_tos];
+		bkt_key = &t->key_mem[bkt_key_index << t->key_size_shl];
+
+		data = &t->data_mem[bkt_key_index << t->data_size_shl];
+
+		/* Install new key into bucket */
+		bkt->sig[0] = (uint16_t) sig;
+		bkt->key_pos[0] = bkt_key_index;
+		memcpy(bkt_key, key, t->key_size);
+		memcpy(data, entry, t->entry_size);
+
+		*key_found = 0;
+		*entry_ptr = (void *) data;
+		return 0;
+	}
+
+	return -ENOSPC;
+}
+
+static int
+rte_table_hash_ext_entry_delete(void *table, void *key, int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct bucket *bkt0, *bkt, *bkt_prev;
+	uint64_t sig;
+	uint32_t bkt_index, i;
+
+	sig = t->f_hash(key, t->key_size, t->seed);
+	bkt_index = sig & t->bucket_mask;
+	bkt0 = &t->buckets[bkt_index];
+	sig = (sig >> 16) | 1LLU;
+
+	/* Key is present in the bucket */
+	for (bkt_prev = NULL, bkt = bkt0; bkt != NULL; bkt_prev = bkt,
+		bkt = BUCKET_NEXT(bkt))
+		for (i = 0; i < KEYS_PER_BUCKET; i++) {
+			uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+			uint32_t bkt_key_index = bkt->key_pos[i];
+			uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+				t->key_size_shl];
+
+			if ((sig == bkt_sig) && (memcmp(key, bkt_key,
+				t->key_size) == 0)) {
+				uint8_t *data = &t->data_mem[bkt_key_index <<
+					t->data_size_shl];
+
+				/* Uninstall key from bucket */
+				bkt->sig[i] = 0;
+				*key_found = 1;
+				if (entry)
+					memcpy(entry, data, t->entry_size);
+
+				/* Free key */
+				t->key_stack[t->key_stack_tos++] =
+					bkt_key_index;
+
+				/* Check if bucket is unused */
+				if ((bkt_prev != NULL) &&
+				    (bkt->sig[0] == 0) && (bkt->sig[1] == 0) &&
+				    (bkt->sig[2] == 0) && (bkt->sig[3] == 0)) {
+					/* Clear bucket */
+					memset(bkt, 0, sizeof(struct bucket));
+
+					/* Unchain bucket */
+					BUCKET_NEXT_COPY(bkt_prev, bkt);
+
+					/* Free bucket back to buckets ext */
+					bkt_index = bkt - t->buckets_ext;
+					t->bkt_ext_stack[t->bkt_ext_stack_tos++]
+						= bkt_index;
+				}
+
+				return 0;
+			}
+		}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+static int
+rte_table_hash_ext_lookup_unoptimized(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries,
+	int dosig)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	uint64_t pkts_mask_out = 0;
+
+	for ( ; pkts_mask; ) {
+		struct bucket *bkt0, *bkt;
+		struct rte_mbuf *pkt;
+		uint8_t *key;
+		uint64_t pkt_mask, sig;
+		uint32_t pkt_index, bkt_index, i;
+
+		pkt_index = __builtin_ctzll(pkts_mask);
+		pkt_mask = 1LLU << pkt_index;
+		pkts_mask &= ~pkt_mask;
+
+		pkt = pkts[pkt_index];
+		key = RTE_MBUF_METADATA_UINT8_PTR(pkt, t->key_offset);
+		if (dosig)
+			sig = (uint64_t) t->f_hash(key, t->key_size, t->seed);
+		else
+			sig = RTE_MBUF_METADATA_UINT32(pkt,
+				t->signature_offset);
+
+		bkt_index = sig & t->bucket_mask;
+		bkt0 = &t->buckets[bkt_index];
+		sig = (sig >> 16) | 1LLU;
+
+		/* Key is present in the bucket */
+		for (bkt = bkt0; bkt != NULL; bkt = BUCKET_NEXT(bkt))
+			for (i = 0; i < KEYS_PER_BUCKET; i++) {
+				uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+				uint32_t bkt_key_index = bkt->key_pos[i];
+				uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+					t->key_size_shl];
+
+				if ((sig == bkt_sig) && (memcmp(key, bkt_key,
+					t->key_size) == 0)) {
+					uint8_t *data = &t->data_mem[
+					bkt_key_index << t->data_size_shl];
+
+					pkts_mask_out |= pkt_mask;
+					entries[pkt_index] = (void *) data;
+					break;
+				}
+			}
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+}
+
+/***
+ *
+ * mask = match bitmask
+ * match = at least one match
+ * match_many = more than one match
+ * match_pos = position of first match
+ *
+ *----------------------------------------
+ * mask		match	match_many	match_pos
+ *----------------------------------------
+ * 0000		0	0		00
+ * 0001		1	0		00
+ * 0010		1	0		01
+ * 0011		1	1		00
+ *----------------------------------------
+ * 0100		1	0		10
+ * 0101		1	1		00
+ * 0110		1	1		01
+ * 0111		1	1		00
+ *----------------------------------------
+ * 1000		1	0		11
+ * 1001		1	1		00
+ * 1010		1	1		01
+ * 1011		1	1		00
+ *----------------------------------------
+ * 1100		1	1		10
+ * 1101		1	1		00
+ * 1110		1	1		01
+ * 1111		1	1		00
+ *----------------------------------------
+ *
+ * match = 1111_1111_1111_1110
+ * match_many = 1111_1110_1110_1000
+ * match_pos = 0001_0010_0001_0011__0001_0010_0001_0000
+ *
+ * match = 0xFFFELLU
+ * match_many = 0xFEE8LLU
+ * match_pos = 0x12131210LLU
+ *
+ ***/
+
+#define LUT_MATCH						0xFFFELLU
+#define LUT_MATCH_MANY						0xFEE8LLU
+#define LUT_MATCH_POS						0x12131210LLU
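+
+/*
+ * Worked example (illustrative only): if the packet signature matches the
+ * signatures of bucket entries 1 and 3, then mask_all = 0010b | 1000b =
+ * 1010b = 10, so:
+ *	match      = (0xFFFELLU >> 10) & 1 = 1      (at least one entry hit)
+ *	match_many = (0xFEE8LLU >> 10) & 1 = 1      (more than one entry hit)
+ *	match_pos  = (0x12131210LLU >> 20) & 3 = 1  (first hit is entry 1)
+ */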
+
+#define lookup_cmp_sig(mbuf_sig, bucket, match, match_many, match_pos)	\
+{									\
+	uint64_t bucket_sig[4], mask[4], mask_all;			\
+									\
+	bucket_sig[0] = bucket->sig[0];					\
+	bucket_sig[1] = bucket->sig[1];					\
+	bucket_sig[2] = bucket->sig[2];					\
+	bucket_sig[3] = bucket->sig[3];					\
+									\
+	bucket_sig[0] ^= mbuf_sig;					\
+	bucket_sig[1] ^= mbuf_sig;					\
+	bucket_sig[2] ^= mbuf_sig;					\
+	bucket_sig[3] ^= mbuf_sig;					\
+									\
+	mask[0] = 0;							\
+	mask[1] = 0;							\
+	mask[2] = 0;							\
+	mask[3] = 0;							\
+									\
+	if (bucket_sig[0] == 0)						\
+		mask[0] = 1;						\
+	if (bucket_sig[1] == 0)						\
+		mask[1] = 2;						\
+	if (bucket_sig[2] == 0)						\
+		mask[2] = 4;						\
+	if (bucket_sig[3] == 0)						\
+		mask[3] = 8;						\
+									\
+	mask_all = (mask[0] | mask[1]) | (mask[2] | mask[3]);		\
+									\
+	match = (LUT_MATCH >> mask_all) & 1;				\
+	match_many = (LUT_MATCH_MANY >> mask_all) & 1;			\
+	match_pos = (LUT_MATCH_POS >> (mask_all << 1)) & 3;		\
+}
+
+#define lookup_cmp_key(mbuf, key, match_key, f)				\
+{									\
+	uint64_t *pkt_key = RTE_MBUF_METADATA_UINT64_PTR(mbuf, f->key_offset);\
+	uint64_t *bkt_key = (uint64_t *) key;				\
+									\
+	switch (f->key_size) {						\
+	case 8:								\
+	{								\
+		uint64_t xor = pkt_key[0] ^ bkt_key[0];			\
+		match_key = 0;						\
+		if (xor == 0)						\
+			match_key = 1;					\
+	}								\
+	break;								\
+									\
+	case 16:							\
+	{								\
+		uint64_t xor[2], or;					\
+									\
+		xor[0] = pkt_key[0] ^ bkt_key[0];			\
+		xor[1] = pkt_key[1] ^ bkt_key[1];			\
+		or = xor[0] | xor[1];					\
+		match_key = 0;						\
+		if (or == 0)						\
+			match_key = 1;					\
+	}								\
+	break;								\
+									\
+	case 32:							\
+	{								\
+		uint64_t xor[4], or;					\
+									\
+		xor[0] = pkt_key[0] ^ bkt_key[0];			\
+		xor[1] = pkt_key[1] ^ bkt_key[1];			\
+		xor[2] = pkt_key[2] ^ bkt_key[2];			\
+		xor[3] = pkt_key[3] ^ bkt_key[3];			\
+		or = xor[0] | xor[1] | xor[2] | xor[3];			\
+		match_key = 0;						\
+		if (or == 0)						\
+			match_key = 1;					\
+	}								\
+	break;								\
+									\
+	case 64:							\
+	{								\
+		uint64_t xor[8], or;					\
+									\
+		xor[0] = pkt_key[0] ^ bkt_key[0];			\
+		xor[1] = pkt_key[1] ^ bkt_key[1];			\
+		xor[2] = pkt_key[2] ^ bkt_key[2];			\
+		xor[3] = pkt_key[3] ^ bkt_key[3];			\
+		xor[4] = pkt_key[4] ^ bkt_key[4];			\
+		xor[5] = pkt_key[5] ^ bkt_key[5];			\
+		xor[6] = pkt_key[6] ^ bkt_key[6];			\
+		xor[7] = pkt_key[7] ^ bkt_key[7];			\
+		or = xor[0] | xor[1] | xor[2] | xor[3] |		\
+			xor[4] | xor[5] | xor[6] | xor[7];		\
+		match_key = 0;						\
+		if (or == 0)						\
+			match_key = 1;					\
+	}								\
+	break;								\
+									\
+	default:							\
+		match_key = 0;						\
+		if (memcmp(pkt_key, bkt_key, f->key_size) == 0)		\
+			match_key = 1;					\
+	}								\
+}
+
+#define lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index)	\
+{									\
+	uint64_t pkt00_mask, pkt01_mask;				\
+	struct rte_mbuf *mbuf00, *mbuf01;				\
+									\
+	pkt00_index = __builtin_ctzll(pkts_mask);			\
+	pkt00_mask = 1LLU << pkt00_index;				\
+	pkts_mask &= ~pkt00_mask;					\
+	mbuf00 = pkts[pkt00_index];					\
+									\
+	pkt01_index = __builtin_ctzll(pkts_mask);			\
+	pkt01_mask = 1LLU << pkt01_index;				\
+	pkts_mask &= ~pkt01_mask;					\
+	mbuf01 = pkts[pkt01_index];					\
+									\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));		\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));		\
+}
+
+#define lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask, pkt00_index, \
+	pkt01_index)							\
+{									\
+	uint64_t pkt00_mask, pkt01_mask;				\
+	struct rte_mbuf *mbuf00, *mbuf01;				\
+									\
+	pkt00_index = __builtin_ctzll(pkts_mask);			\
+	pkt00_mask = 1LLU << pkt00_index;				\
+	pkts_mask &= ~pkt00_mask;					\
+	mbuf00 = pkts[pkt00_index];					\
+									\
+	pkt01_index = __builtin_ctzll(pkts_mask);			\
+	if (pkts_mask == 0)						\
+		pkt01_index = pkt00_index;				\
+	pkt01_mask = 1LLU << pkt01_index;				\
+	pkts_mask &= ~pkt01_mask;					\
+	mbuf01 = pkts[pkt01_index];					\
+									\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));		\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));		\
+}
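+
+/*
+ * Note on the odd packet count case: once pkts_mask goes empty after
+ * extracting pkt00, the result of __builtin_ctzll(0) is undefined, so
+ * pkt01_index is forced back to pkt00_index and the last packet simply
+ * flows through the remaining pipeline stages twice. This is harmless:
+ * the later stages only OR bits into the output masks and overwrite the
+ * same entries[] slot with the same value.
+ */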
+
+#define lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index)		\
+{									\
+	struct grinder *g10, *g11;					\
+	uint64_t sig10, sig11, bkt10_index, bkt11_index;		\
+	struct rte_mbuf *mbuf10, *mbuf11;				\
+	struct bucket *bkt10, *bkt11, *buckets = t->buckets;		\
+	uint64_t bucket_mask = t->bucket_mask;				\
+	uint32_t signature_offset = t->signature_offset;		\
+									\
+	mbuf10 = pkts[pkt10_index];					\
+	sig10 = (uint64_t) RTE_MBUF_METADATA_UINT32(mbuf10, signature_offset);\
+	bkt10_index = sig10 & bucket_mask;				\
+	bkt10 = &buckets[bkt10_index];					\
+									\
+	mbuf11 = pkts[pkt11_index];					\
+	sig11 = (uint64_t) RTE_MBUF_METADATA_UINT32(mbuf11, signature_offset);\
+	bkt11_index = sig11 & bucket_mask;				\
+	bkt11 = &buckets[bkt11_index];					\
+									\
+	rte_prefetch0(bkt10);						\
+	rte_prefetch0(bkt11);						\
+									\
+	g10 = &g[pkt10_index];						\
+	g10->sig = sig10;						\
+	g10->bkt = bkt10;						\
+									\
+	g11 = &g[pkt11_index];						\
+	g11->sig = sig11;						\
+	g11->bkt = bkt11;						\
+}
+
+#define lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index)	\
+{									\
+	struct grinder *g10, *g11;					\
+	uint64_t sig10, sig11, bkt10_index, bkt11_index;		\
+	struct rte_mbuf *mbuf10, *mbuf11;				\
+	struct bucket *bkt10, *bkt11, *buckets = t->buckets;		\
+	uint8_t *key10, *key11;						\
+	uint64_t bucket_mask = t->bucket_mask;				\
+	rte_table_hash_op_hash f_hash = t->f_hash;			\
+	uint64_t seed = t->seed;					\
+	uint32_t key_size = t->key_size;				\
+	uint32_t key_offset = t->key_offset;				\
+									\
+	mbuf10 = pkts[pkt10_index];					\
+	key10 = RTE_MBUF_METADATA_UINT8_PTR(mbuf10, key_offset);	\
+	sig10 = (uint64_t) f_hash(key10, key_size, seed);		\
+	bkt10_index = sig10 & bucket_mask;				\
+	bkt10 = &buckets[bkt10_index];					\
+									\
+	mbuf11 = pkts[pkt11_index];					\
+	key11 = RTE_MBUF_METADATA_UINT8_PTR(mbuf11, key_offset);	\
+	sig11 = (uint64_t) f_hash(key11, key_size, seed);		\
+	bkt11_index = sig11 & bucket_mask;				\
+	bkt11 = &buckets[bkt11_index];					\
+									\
+	rte_prefetch0(bkt10);						\
+	rte_prefetch0(bkt11);						\
+									\
+	g10 = &g[pkt10_index];						\
+	g10->sig = sig10;						\
+	g10->bkt = bkt10;						\
+									\
+	g11 = &g[pkt11_index];						\
+	g11->sig = sig11;						\
+	g11->bkt = bkt11;						\
+}
+
+#define lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many)\
+{									\
+	struct grinder *g20, *g21;					\
+	uint64_t sig20, sig21;						\
+	struct bucket *bkt20, *bkt21;					\
+	uint8_t *key20, *key21, *key_mem = t->key_mem;			\
+	uint64_t match20, match21, match_many20, match_many21;		\
+	uint64_t match_pos20, match_pos21;				\
+	uint32_t key20_index, key21_index, key_size_shl = t->key_size_shl;\
+									\
+	g20 = &g[pkt20_index];						\
+	sig20 = g20->sig;						\
+	bkt20 = g20->bkt;						\
+	sig20 = (sig20 >> 16) | 1LLU;					\
+	lookup_cmp_sig(sig20, bkt20, match20, match_many20, match_pos20);\
+	match20 <<= pkt20_index;					\
+	match_many20 |= BUCKET_NEXT_VALID(bkt20);			\
+	match_many20 <<= pkt20_index;					\
+	key20_index = bkt20->key_pos[match_pos20];			\
+	key20 = &key_mem[key20_index << key_size_shl];			\
+									\
+	g21 = &g[pkt21_index];						\
+	sig21 = g21->sig;						\
+	bkt21 = g21->bkt;						\
+	sig21 = (sig21 >> 16) | 1LLU;					\
+	lookup_cmp_sig(sig21, bkt21, match21, match_many21, match_pos21);\
+	match21 <<= pkt21_index;					\
+	match_many21 |= BUCKET_NEXT_VALID(bkt21);			\
+	match_many21 <<= pkt21_index;					\
+	key21_index = bkt21->key_pos[match_pos21];			\
+	key21 = &key_mem[key21_index << key_size_shl];			\
+									\
+	rte_prefetch0(key20);						\
+	rte_prefetch0(key21);						\
+									\
+	pkts_mask_match_many |= match_many20 | match_many21;		\
+									\
+	g20->match = match20;						\
+	g20->key_index = key20_index;					\
+									\
+	g21->match = match21;						\
+	g21->key_index = key21_index;					\
+}
+
+#define lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out, \
+	entries)							\
+{									\
+	struct grinder *g30, *g31;					\
+	struct rte_mbuf *mbuf30, *mbuf31;				\
+	uint8_t *key30, *key31, *key_mem = t->key_mem;			\
+	uint8_t *data30, *data31, *data_mem = t->data_mem;		\
+	uint64_t match30, match31, match_key30, match_key31, match_keys;\
+	uint32_t key30_index, key31_index;				\
+	uint32_t key_size_shl = t->key_size_shl;			\
+	uint32_t data_size_shl = t->data_size_shl;			\
+									\
+	mbuf30 = pkts[pkt30_index];					\
+	g30 = &g[pkt30_index];						\
+	match30 = g30->match;						\
+	key30_index = g30->key_index;					\
+	key30 = &key_mem[key30_index << key_size_shl];			\
+	lookup_cmp_key(mbuf30, key30, match_key30, t);			\
+	match_key30 <<= pkt30_index;					\
+	match_key30 &= match30;						\
+	data30 = &data_mem[key30_index << data_size_shl];		\
+	entries[pkt30_index] = data30;					\
+									\
+	mbuf31 = pkts[pkt31_index];					\
+	g31 = &g[pkt31_index];						\
+	match31 = g31->match;						\
+	key31_index = g31->key_index;					\
+	key31 = &key_mem[key31_index << key_size_shl];			\
+	lookup_cmp_key(mbuf31, key31, match_key31, t);			\
+	match_key31 <<= pkt31_index;					\
+	match_key31 &= match31;						\
+	data31 = &data_mem[key31_index << data_size_shl];		\
+	entries[pkt31_index] = data31;					\
+									\
+	rte_prefetch0(data30);						\
+	rte_prefetch0(data31);						\
+									\
+	match_keys = match_key30 | match_key31;				\
+	pkts_mask_out |= match_keys;					\
+}
+
+/***
+ * The lookup function implements a 4-stage pipeline, with each stage
+ * processing two different packets. The purpose of the pipelined
+ * implementation is to hide the latency of prefetching the data structures
+ * and to loosen the data dependency between instructions.
+ *
+ *  p00  _______   p10  _______   p20  _______   p30  _______
+ *----->|       |----->|       |----->|       |----->|       |----->
+ *      |   0   |      |   1   |      |   2   |      |   3   |
+ *----->|_______|----->|_______|----->|_______|----->|_______|----->
+ *  p01            p11            p21            p31
+ *
+ * The naming convention is:
+ *	pXY = packet Y of stage X, X = 0 .. 3, Y = 0 .. 1
+ ***/
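+/*
+ * Example schedule (for an 8-packet burst, i.e. packet pairs A, B, C, D):
+ *	fill:  stage0(A); stage0(B), stage1(A); stage0(C), stage1(B), stage2(A)
+ *	run:   stage0(D), stage1(C), stage2(B), stage3(A)
+ *	flush: stage1(D), stage2(C), stage3(B); stage2(D), stage3(C); stage3(D)
+ */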
+static int
+rte_table_hash_ext_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct grinder *g = t->grinders;
+	uint64_t pkt00_index, pkt01_index, pkt10_index, pkt11_index;
+	uint64_t pkt20_index, pkt21_index, pkt30_index, pkt31_index;
+	uint64_t pkts_mask_out = 0, pkts_mask_match_many = 0;
+	int status = 0;
+
+	/* Cannot run the pipeline with fewer than 7 packets */
+	if (__builtin_popcountll(pkts_mask) < 7)
+		return rte_table_hash_ext_lookup_unoptimized(table, pkts,
+			pkts_mask, lookup_hit_mask, entries, 0);
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline feed */
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline feed */
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/*
+	 * Pipeline run
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		pkt30_index = pkt20_index;
+		pkt31_index = pkt21_index;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask,
+			pkt00_index, pkt01_index);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2(t, g, pkt20_index, pkt21_index,
+			pkts_mask_match_many);
+
+		/* Pipeline stage 3 */
+		lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index,
+			pkts_mask_out, entries);
+	}
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Slow path */
+	pkts_mask_match_many &= ~pkts_mask_out;
+	if (pkts_mask_match_many) {
+		uint64_t pkts_mask_out_slow = 0;
+
+		status = rte_table_hash_ext_lookup_unoptimized(table, pkts,
+			pkts_mask_match_many, &pkts_mask_out_slow, entries, 0);
+		pkts_mask_out |= pkts_mask_out_slow;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return status;
+}
+
+static int
+rte_table_hash_ext_lookup_dosig(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct grinder *g = t->grinders;
+	uint64_t pkt00_index, pkt01_index, pkt10_index, pkt11_index;
+	uint64_t pkt20_index, pkt21_index, pkt30_index, pkt31_index;
+	uint64_t pkts_mask_out = 0, pkts_mask_match_many = 0;
+	int status = 0;
+
+	/* Cannot run the pipeline with fewer than 7 packets */
+	if (__builtin_popcountll(pkts_mask) < 7)
+		return rte_table_hash_ext_lookup_unoptimized(table, pkts,
+			pkts_mask, lookup_hit_mask, entries, 1);
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline feed */
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline feed */
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/*
+	 * Pipeline run
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		pkt30_index = pkt20_index;
+		pkt31_index = pkt21_index;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask,
+			pkt00_index, pkt01_index);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2(t, g, pkt20_index, pkt21_index,
+			pkts_mask_match_many);
+
+		/* Pipeline stage 3 */
+		lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index,
+			pkts_mask_out, entries);
+	}
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Slow path */
+	pkts_mask_match_many &= ~pkts_mask_out;
+	if (pkts_mask_match_many) {
+		uint64_t pkts_mask_out_slow = 0;
+
+		status = rte_table_hash_ext_lookup_unoptimized(table, pkts,
+			pkts_mask_match_many, &pkts_mask_out_slow, entries, 1);
+		pkts_mask_out |= pkts_mask_out_slow;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return status;
+}
+
+struct rte_table_ops rte_table_hash_ext_ops = {
+	.f_create = rte_table_hash_ext_create,
+	.f_free = rte_table_hash_ext_free,
+	.f_add = rte_table_hash_ext_entry_add,
+	.f_delete = rte_table_hash_ext_entry_delete,
+	.f_lookup = rte_table_hash_ext_lookup,
+};
+
+struct rte_table_ops rte_table_hash_ext_dosig_ops = {
+	.f_create = rte_table_hash_ext_create,
+	.f_free = rte_table_hash_ext_free,
+	.f_add = rte_table_hash_ext_entry_add,
+	.f_delete = rte_table_hash_ext_entry_delete,
+	.f_lookup = rte_table_hash_ext_lookup_dosig,
+};
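+
+/*
+ * Usage sketch (illustrative only). The parameter fields below are the ones
+ * consumed by the create function above; the public parameter struct name
+ * (rte_table_hash_ext_params, from rte_table_hash.h), the hash callback and
+ * the entry type are assumptions of this example:
+ *
+ *	struct rte_table_hash_ext_params params = {
+ *		.key_size = 16,			// must be a power of 2
+ *		.n_keys = 1 << 16,
+ *		.n_buckets = 1 << 14,		// must be a power of 2
+ *		.n_buckets_ext = 1 << 12,
+ *		.f_hash = my_hash_func,		// any rte_table_hash_op_hash
+ *		.seed = 0,
+ *		.signature_offset = 0,
+ *		.key_offset = 32,
+ *	};
+ *	void *table = rte_table_hash_ext_ops.f_create(&params, rte_socket_id(),
+ *		sizeof(struct my_entry));
+ *	uint64_t hit_mask;
+ *	rte_table_hash_ext_ops.f_lookup(table, pkts, pkts_mask, &hit_mask,
+ *		entries);
+ */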
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
new file mode 100644
index 0000000..f5ec87d
--- /dev/null
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -0,0 +1,1100 @@
+/*-
+ *	 BSD LICENSE
+ *
+ *	 Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *	 All rights reserved.
+ *
+ *	 Redistribution and use in source and binary forms, with or without
+ *	 modification, are permitted provided that the following conditions
+ *	 are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *		 notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *		 notice, this list of conditions and the following disclaimer in
+ *		 the documentation and/or other materials provided with the
+ *		 distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *		 contributors may be used to endorse or promote products derived
+ *		 from this software without specific prior written permission.
+ *
+ *	 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *	 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *	 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *	 A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *	 OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *	 SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *	 LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *	 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *	 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *	 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *	 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_hash.h"
+#include "rte_lru.h"
+
+#define RTE_TABLE_HASH_KEY_SIZE						16
+
+#define RTE_BUCKET_ENTRY_VALID						0x1LLU
+
+struct rte_bucket_4_16 {
+	/* Cache line 0 */
+	uint64_t signature[4 + 1];
+	uint64_t lru_list;
+	struct rte_bucket_4_16 *next;
+	uint64_t next_valid;
+
+	/* Cache line 1 */
+	uint64_t key[4][2];
+
+	/* Cache line 2 */
+	uint8_t data[0];
+};
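+
+/*
+ * Layout check (assuming 64-byte cache lines and 8-byte pointers): cache
+ * line 0 holds signature[5] (40 bytes) + lru_list (8) + next (8) +
+ * next_valid (8) = 64 bytes, cache line 1 holds key[4][2] (64 bytes), so
+ * the per-entry data[] area starts cache-line aligned. The create functions
+ * below reject builds where sizeof(struct rte_bucket_4_16) is not a
+ * multiple of CACHE_LINE_SIZE.
+ */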
+
+struct rte_table_hash {
+	/* Input parameters */
+	uint32_t n_buckets;
+	uint32_t n_entries_per_bucket;
+	uint32_t key_size;
+	uint32_t entry_size;
+	uint32_t bucket_size;
+	uint32_t signature_offset;
+	uint32_t key_offset;
+	rte_table_hash_op_hash f_hash;
+	uint64_t seed;
+
+	/* Extendible buckets */
+	uint32_t n_buckets_ext;
+	uint32_t stack_pos;
+	uint32_t *stack;
+
+	/* Lookup table */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+static int
+check_params_create_lru(struct rte_table_hash_key16_lru_params *params) {
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: f_hash function pointer is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_create_key16_lru(void *params,
+		int socket_id,
+		uint32_t entry_size)
+{
+	struct rte_table_hash_key16_lru_params *p =
+			(struct rte_table_hash_key16_lru_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_entries_per_bucket,
+			key_size, bucket_size_cl, total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_lru(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_16) % CACHE_LINE_SIZE) != 0))
+		return NULL;
+	n_entries_per_bucket = 4;
+	key_size = 16;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	bucket_size_cl = (sizeof(struct rte_bucket_4_16) + n_entries_per_bucket
+		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) + n_buckets *
+		bucket_size_cl * CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+		"%s: Cannot allocate %u bytes for hash table\n",
+		__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n",
+		__func__, total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
+	for (i = 0; i < n_buckets; i++) {
+		struct rte_bucket_4_16 *bucket;
+
+		bucket = (struct rte_bucket_4_16 *) &f->memory[i *
+			f->bucket_size];
+		lru_init(bucket);
+	}
+
+	return f;
+}
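+
+/*
+ * Sizing example (illustrative, assuming 64-byte cache lines and the layout
+ * above): with n_entries = 1 << 20 and entry_size = 8, n_buckets = 1 << 18
+ * and bucket_size_cl = (128 + 4 * 8 + 63) / 64 = 3 cache lines, so the
+ * bucket array is (1 << 18) * 192 bytes = 48 MB plus the rte_table_hash
+ * header.
+ */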
+
+static int
+rte_table_hash_free_key16_lru(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key16_lru(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket;
+	uint64_t signature, pos;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_16 *)
+			&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 1;
+			*entry_ptr = (void *) bucket_data;
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if (bucket_signature == 0) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature[i] = signature;
+			memcpy(bucket_key, key, f->key_size);
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 0;
+			*entry_ptr = (void *) bucket_data;
+
+			return 0;
+		}
+	}
+
+	/* Bucket full: replace LRU entry */
+	pos = lru_pos(bucket);
+	bucket->signature[pos] = signature;
+	memcpy(bucket->key[pos], key, f->key_size);
+	memcpy(&bucket->data[pos * f->entry_size], entry, f->entry_size);
+	lru_update(bucket, pos);
+	*key_found = 0;
+	*entry_ptr = (void *) &bucket->data[pos * f->entry_size];
+
+	return 0;
+}
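+
+/*
+ * Note: unlike the ext variant, the LRU variant never returns -ENOSPC on
+ * add; a full bucket evicts its least recently used entry instead
+ * (lru_pos()/lru_update() are provided by rte_lru.h).
+ */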
+
+static int
+rte_table_hash_entry_delete_key16_lru(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_16 *)
+			&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature[i] = 0;
+			*key_found = 1;
+			if (entry)
+				memcpy(entry, bucket_data, f->entry_size);
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+static int
+check_params_create_ext(struct rte_table_hash_key16_ext_params *params) {
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_entries_ext */
+	if (params->n_entries_ext == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries_ext is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: f_hash function pointer is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_create_key16_ext(void *params,
+		int socket_id,
+		uint32_t entry_size)
+{
+	struct rte_table_hash_key16_ext_params *p =
+			(struct rte_table_hash_key16_ext_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_buckets_ext, n_entries_per_bucket, key_size,
+			bucket_size_cl, stack_size_cl, total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_ext(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_16) % CACHE_LINE_SIZE) != 0))
+		return NULL;
+
+	n_entries_per_bucket = 4;
+	key_size = 16;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
+		n_entries_per_bucket;
+	bucket_size_cl = (sizeof(struct rte_bucket_4_16) + n_entries_per_bucket
+		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
+		/ CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) +
+		((n_buckets + n_buckets_ext) * bucket_size_cl + stack_size_cl) *
+		CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n",
+		__func__, total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
+	f->n_buckets_ext = n_buckets_ext;
+	f->stack_pos = n_buckets_ext;
+	f->stack = (uint32_t *)
+		&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+
+	for (i = 0; i < n_buckets_ext; i++)
+		f->stack[i] = i;
+
+	return f;
+}
+
+static int
+rte_table_hash_free_key16_ext(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key16_ext(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_16 *)
+			&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (bucket = bucket0; bucket != NULL; bucket = bucket->next)
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 1;
+				*entry_ptr = (void *) bucket_data;
+				return 0;
+			}
+		}
+
+	/* Key is not present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0; bucket != NULL;
+			 bucket_prev = bucket, bucket = bucket->next)
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if (bucket_signature == 0) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature[i] = signature;
+				memcpy(bucket_key, key, f->key_size);
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 0;
+				*entry_ptr = (void *) bucket_data;
+
+				return 0;
+			}
+		}
+
+	/* Bucket full: extend bucket */
+	if (f->stack_pos > 0) {
+		bucket_index = f->stack[--f->stack_pos];
+
+		bucket = (struct rte_bucket_4_16 *) &f->memory[(f->n_buckets +
+			bucket_index) * f->bucket_size];
+		bucket_prev->next = bucket;
+		bucket_prev->next_valid = 1;
+
+		bucket->signature[0] = signature;
+		memcpy(bucket->key[0], key, f->key_size);
+		memcpy(&bucket->data[0], entry, f->entry_size);
+		*key_found = 0;
+		*entry_ptr = (void *) &bucket->data[0];
+		return 0;
+	}
+
+	return -ENOSPC;
+}
+
+static int
+rte_table_hash_entry_delete_key16_ext(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_16 *)
+		&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0; bucket != NULL;
+		bucket_prev = bucket, bucket = bucket->next)
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature[i] = 0;
+				*key_found = 1;
+				if (entry)
+					memcpy(entry, bucket_data,
+					f->entry_size);
+
+				if ((bucket->signature[0] == 0) &&
+					(bucket->signature[1] == 0) &&
+					(bucket->signature[2] == 0) &&
+					(bucket->signature[3] == 0) &&
+					(bucket_prev != NULL)) {
+					bucket_prev->next = bucket->next;
+					bucket_prev->next_valid =
+						bucket->next_valid;
+
+					memset(bucket, 0,
+						sizeof(struct rte_bucket_4_16));
+					bucket_index = (((uint8_t *)bucket -
+						(uint8_t *)f->memory) /
+						f->bucket_size) - f->n_buckets;
+					f->stack[f->stack_pos++] = bucket_index;
+				}
+
+				return 0;
+			}
+		}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+#define lookup_key16_cmp(key_in, bucket, pos)			\
+{								\
+	uint64_t xor[4][2], or[4], signature[4];		\
+								\
+	signature[0] = (~bucket->signature[0]) & 1;		\
+	signature[1] = (~bucket->signature[1]) & 1;		\
+	signature[2] = (~bucket->signature[2]) & 1;		\
+	signature[3] = (~bucket->signature[3]) & 1;		\
+								\
+	xor[0][0] = key_in[0] ^ bucket->key[0][0];		\
+	xor[0][1] = key_in[1] ^ bucket->key[0][1];		\
+								\
+	xor[1][0] = key_in[0] ^ bucket->key[1][0];		\
+	xor[1][1] = key_in[1] ^ bucket->key[1][1];		\
+								\
+	xor[2][0] = key_in[0] ^ bucket->key[2][0];		\
+	xor[2][1] = key_in[1] ^ bucket->key[2][1];		\
+								\
+	xor[3][0] = key_in[0] ^ bucket->key[3][0];		\
+	xor[3][1] = key_in[1] ^ bucket->key[3][1];		\
+								\
+	or[0] = xor[0][0] | xor[0][1] | signature[0];		\
+	or[1] = xor[1][0] | xor[1][1] | signature[1];		\
+	or[2] = xor[2][0] | xor[2][1] | signature[2];		\
+	or[3] = xor[3][0] | xor[3][1] | signature[3];		\
+								\
+	pos = 4;						\
+	if (or[0] == 0)						\
+		pos = 0;					\
+	if (or[1] == 0)						\
+		pos = 1;					\
+	if (or[2] == 0)						\
+		pos = 2;					\
+	if (or[3] == 0)						\
+		pos = 3;					\
+}
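+
+/*
+ * Miss handling in lookup_key16_cmp: an empty slot has signature 0, so its
+ * (~signature) & 1 term is 1 and that slot can never win the comparison,
+ * while valid slots carry RTE_BUCKET_ENTRY_VALID in bit 0, which zeroes the
+ * term. When no slot matches, pos stays 4 and the lookup stages read
+ * bucket->signature[4], the extra always-zero element, so the resulting
+ * packet hit bit is 0.
+ */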
+
+#define lookup1_stage0(pkt0_index, mbuf0, pkts, pkts_mask)	\
+{								\
+	uint64_t pkt_mask;					\
+								\
+	pkt0_index = __builtin_ctzll(pkts_mask);		\
+	pkt_mask = 1LLU << pkt0_index;				\
+	pkts_mask &= ~pkt_mask;					\
+								\
+	mbuf0 = pkts[pkt0_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf0, 0));	\
+}
+
+#define lookup1_stage1(mbuf1, bucket1, f)			\
+{								\
+	uint64_t signature;					\
+	uint32_t bucket_index;					\
+								\
+	signature = RTE_MBUF_METADATA_UINT32(mbuf1, f->signature_offset);\
+	bucket_index = signature & (f->n_buckets - 1);		\
+	bucket1 = (struct rte_bucket_4_16 *)			\
+		&f->memory[bucket_index * f->bucket_size];	\
+	rte_prefetch0(bucket1);					\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + CACHE_LINE_SIZE));\
+}
+
+#define lookup1_stage2_lru(pkt2_index, mbuf2, bucket2,		\
+		pkts_mask_out, entries, f)			\
+{								\
+	void *a;						\
+	uint64_t pkt_mask;					\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key16_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+	lru_update(bucket2, pos);				\
+}
+
+#define lookup1_stage2_ext(pkt2_index, mbuf2, bucket2, pkts_mask_out, entries, \
+	buckets_mask, buckets, keys, f)				\
+{								\
+	struct rte_bucket_4_16 *bucket_next;			\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key16_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+								\
+	bucket_mask = (~pkt_mask) & (bucket2->next_valid << pkt2_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket2->next;				\
+	buckets[pkt2_index] = bucket_next;			\
+	keys[pkt2_index] = key;					\
+}
+
+#define lookup_grinder(pkt_index, buckets, keys, pkts_mask_out, entries,\
+	buckets_mask, f)					\
+{								\
+	struct rte_bucket_4_16 *bucket, *bucket_next;		\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	bucket = buckets[pkt_index];				\
+	key = keys[pkt_index];					\
+								\
+	lookup_key16_cmp(key, bucket, pos);			\
+								\
+	pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt_index] = a;					\
+								\
+	bucket_mask = (~pkt_mask) & (bucket->next_valid << pkt_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket->next;				\
+	rte_prefetch0(bucket_next);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket_next) + CACHE_LINE_SIZE));\
+	buckets[pkt_index] = bucket_next;			\
+	keys[pkt_index] = key;					\
+}
+
+#define lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01,\
+		pkts, pkts_mask)				\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,\
+		mbuf00, mbuf01, pkts, pkts_mask)		\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	if (pkts_mask == 0)					\
+		pkt01_index = pkt00_index;			\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f)	\
+{								\
+	uint64_t signature10, signature11;			\
+	uint32_t bucket10_index, bucket11_index;		\
+								\
+	signature10 = RTE_MBUF_METADATA_UINT32(mbuf10, f->signature_offset);\
+	bucket10_index = signature10 & (f->n_buckets - 1);	\
+	bucket10 = (struct rte_bucket_4_16 *)			\
+		&f->memory[bucket10_index * f->bucket_size];	\
+	rte_prefetch0(bucket10);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + CACHE_LINE_SIZE));\
+								\
+	signature11 = RTE_MBUF_METADATA_UINT32(mbuf11, f->signature_offset);\
+	bucket11_index = signature11 & (f->n_buckets - 1);	\
+	bucket11 = (struct rte_bucket_4_16 *)			\
+		&f->memory[bucket11_index * f->bucket_size];	\
+	rte_prefetch0(bucket11);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + CACHE_LINE_SIZE));\
+}
+
+#define lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,\
+		bucket20, bucket21, pkts_mask_out, entries, f)	\
+{								\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask;			\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key16_cmp(key20, bucket20, pos20);		\
+	lookup_key16_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
+	pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;			\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+	lru_update(bucket20, pos20);				\
+	lru_update(bucket21, pos21);				\
+}
+
+#define lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21, bucket20, \
+	bucket21, pkts_mask_out, entries, buckets_mask, buckets, keys, f) \
+{								\
+	struct rte_bucket_4_16 *bucket20_next, *bucket21_next;	\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key16_cmp(key20, bucket20, pos20);		\
+	lookup_key16_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
+	pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;		\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+								\
+	bucket20_mask = (~pkt20_mask) & (bucket20->next_valid << pkt20_index);\
+	bucket21_mask = (~pkt21_mask) & (bucket21->next_valid << pkt21_index);\
+	buckets_mask |= bucket20_mask | bucket21_mask;		\
+	bucket20_next = bucket20->next;				\
+	bucket21_next = bucket21->next;				\
+	buckets[pkt20_index] = bucket20_next;			\
+	buckets[pkt21_index] = bucket21_next;			\
+	keys[pkt20_index] = key20;				\
+	keys[pkt21_index] = key21;				\
+}
+
+static int
+rte_table_hash_lookup_key16_lru(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0;
+
+	/* Cannot run the pipeline with fewer than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_16 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_lru(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key16_lru() */
+
+static int
+rte_table_hash_lookup_key16_ext(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_16 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0, buckets_mask = 0;
+	struct rte_bucket_4_16 *buckets[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t *keys[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Cannot run the pipeline with fewer than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_16 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_ext(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, buckets_mask,
+				buckets, keys, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries,
+			buckets_mask, buckets, keys, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Grind next buckets */
+	/*
+	 * Grind next buckets: for packets whose bucket chain extends past the
+	 * first bucket (tracked by buckets_mask), walk the chains one link
+	 * per outer loop iteration until every chain is resolved.
+	 */
+		uint64_t buckets_mask_next = 0;
+
+		for ( ; buckets_mask; ) {
+			uint64_t pkt_mask;
+			uint32_t pkt_index;
+
+			pkt_index = __builtin_ctzll(buckets_mask);
+			pkt_mask = 1LLU << pkt_index;
+			buckets_mask &= ~pkt_mask;
+
+			lookup_grinder(pkt_index, buckets, keys, pkts_mask_out,
+				entries, buckets_mask_next, f);
+		}
+
+		buckets_mask = buckets_mask_next;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key16_ext() */
+
+struct rte_table_ops rte_table_hash_key16_lru_ops = {
+	.f_create = rte_table_hash_create_key16_lru,
+	.f_free = rte_table_hash_free_key16_lru,
+	.f_add = rte_table_hash_entry_add_key16_lru,
+	.f_delete = rte_table_hash_entry_delete_key16_lru,
+	.f_lookup = rte_table_hash_lookup_key16_lru,
+};
+
+struct rte_table_ops rte_table_hash_key16_ext_ops = {
+	.f_create = rte_table_hash_create_key16_ext,
+	.f_free = rte_table_hash_free_key16_ext,
+	.f_add = rte_table_hash_entry_add_key16_ext,
+	.f_delete = rte_table_hash_entry_delete_key16_ext,
+	.f_lookup = rte_table_hash_lookup_key16_ext,
+};
diff --git a/lib/librte_table/rte_table_hash_key32.c b/lib/librte_table/rte_table_hash_key32.c
new file mode 100644
index 0000000..e8f4812
--- /dev/null
+++ b/lib/librte_table/rte_table_hash_key32.c
@@ -0,0 +1,1120 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_hash.h"
+#include "rte_lru.h"
+
+#define RTE_TABLE_HASH_KEY_SIZE						32
+
+#define RTE_BUCKET_ENTRY_VALID						0x1LLU
+
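+/*
+ * Bucket for 4 keys of 32 bytes each. The signature array has a 5th
+ * (padding) slot so the branchless compare below can safely index
+ * signature[pos] when pos == 4, i.e. when no entry matched.
+ */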
+struct rte_bucket_4_32 {
+	/* Cache line 0 */
+	uint64_t signature[4 + 1];
+	uint64_t lru_list;
+	struct rte_bucket_4_32 *next;
+	uint64_t next_valid;
+
+	/* Cache lines 1 and 2 */
+	uint64_t key[4][4];
+
+	/* Cache line 3 */
+	uint8_t data[0];
+};
+
+struct rte_table_hash {
+	/* Input parameters */
+	uint32_t n_buckets;
+	uint32_t n_entries_per_bucket;
+	uint32_t key_size;
+	uint32_t entry_size;
+	uint32_t bucket_size;
+	uint32_t signature_offset;
+	uint32_t key_offset;
+	rte_table_hash_op_hash f_hash;
+	uint64_t seed;
+
+	/* Extendible buckets */
+	uint32_t n_buckets_ext;
+	uint32_t stack_pos;
+	uint32_t *stack;
+
+	/* Lookup table */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+static int
+check_params_create_lru(struct rte_table_hash_key32_lru_params *params)
+{
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
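+/*
+ * Sizing sketch (illustrative numbers): for n_entries = 1000 and 4
+ * entries per bucket, n_buckets = rte_align32pow2(250) = 256. With
+ * entry_size = 16, a bucket needs sizeof(struct rte_bucket_4_32) +
+ * 4 * 16 = 192 + 64 = 256 bytes, i.e. bucket_size_cl = 4 cache lines
+ * (assuming a CACHE_LINE_SIZE of 64).
+ */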
+static void *
+rte_table_hash_create_key32_lru(void *params,
+		int socket_id,
+		uint32_t entry_size)
+{
+	struct rte_table_hash_key32_lru_params *p =
+		(struct rte_table_hash_key32_lru_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_entries_per_bucket, key_size, bucket_size_cl;
+	uint32_t total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_lru(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_32) % CACHE_LINE_SIZE) != 0)) {
+		return NULL;
+	}
+	n_entries_per_bucket = 4;
+	key_size = 32;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	bucket_size_cl = (sizeof(struct rte_bucket_4_32) + n_entries_per_bucket
+		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) + n_buckets *
+		bucket_size_cl * CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n", __func__,
+		total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
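+	/*
+	 * Initialize each bucket's LRU list: four 16-bit slots packed in
+	 * one 64-bit word, ordered 0 -> 1 -> 2 -> 3 from most to least
+	 * recently used, so entry 3 (the low 16 bits) is the first
+	 * replacement victim; see lru_pos()/lru_update() in rte_lru.h.
+	 */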
+	for (i = 0; i < n_buckets; i++) {
+		struct rte_bucket_4_32 *bucket;
+
+		bucket = (struct rte_bucket_4_32 *) &f->memory[i *
+			f->bucket_size];
+		bucket->lru_list = 0x0000000100020003LLU;
+	}
+
+	return f;
+}
+
+static int
+rte_table_hash_free_key32_lru(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key32_lru(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket;
+	uint64_t signature, pos;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_32 *)
+		&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if ((bucket_signature == signature) &&
+			(memcmp(key, bucket_key, f->key_size) == 0)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 1;
+			*entry_ptr = (void *) bucket_data;
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if (bucket_signature == 0) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature[i] = signature;
+			memcpy(bucket_key, key, f->key_size);
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 0;
+			*entry_ptr = (void *) bucket_data;
+
+			return 0;
+		}
+	}
+
+	/* Bucket full: replace LRU entry */
+	pos = lru_pos(bucket);
+	bucket->signature[pos] = signature;
+	memcpy(bucket->key[pos], key, f->key_size);
+	memcpy(&bucket->data[pos * f->entry_size], entry, f->entry_size);
+	lru_update(bucket, pos);
+	*key_found = 0;
+	*entry_ptr = (void *) &bucket->data[pos * f->entry_size];
+
+	return 0;
+}
+
+static int
+rte_table_hash_entry_delete_key32_lru(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_32 *)
+		&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < 4; i++) {
+		uint64_t bucket_signature = bucket->signature[i];
+		uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+		if ((bucket_signature == signature) &&
+			(memcmp(key, bucket_key, f->key_size) == 0)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature[i] = 0;
+			*key_found = 1;
+			if (entry)
+				memcpy(entry, bucket_data, f->entry_size);
+
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+static int
+check_params_create_ext(struct rte_table_hash_key32_ext_params *params)
+{
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_entries_ext */
+	if (params->n_entries_ext == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries_ext is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
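+/*
+ * Extendible table memory layout: n_buckets primary buckets, then
+ * n_buckets_ext spare buckets, then a stack of free spare-bucket
+ * indices. A spare is popped off the stack when a full primary bucket
+ * must be chained and pushed back when its last key is deleted.
+ */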
+static void *
+rte_table_hash_create_key32_ext(void *params,
+	int socket_id,
+	uint32_t entry_size)
+{
+	struct rte_table_hash_key32_ext_params *p =
+			(struct rte_table_hash_key32_ext_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_buckets_ext, n_entries_per_bucket;
+	uint32_t key_size, bucket_size_cl, stack_size_cl, total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_ext(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_32) % CACHE_LINE_SIZE) != 0))
+		return NULL;
+
+	n_entries_per_bucket = 4;
+	key_size = 32;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
+		n_entries_per_bucket;
+	bucket_size_cl = (sizeof(struct rte_bucket_4_32) + n_entries_per_bucket
+		* entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
+		/ CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) +
+		((n_buckets + n_buckets_ext) * bucket_size_cl + stack_size_cl) *
+		CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n", __func__,
+		total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
+	f->n_buckets_ext = n_buckets_ext;
+	f->stack_pos = n_buckets_ext;
+	f->stack = (uint32_t *)
+		&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+
+	for (i = 0; i < n_buckets_ext; i++)
+		f->stack[i] = i;
+
+	return f;
+}
+
+static int
+rte_table_hash_free_key32_ext(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key32_ext(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_32 *)
+			&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (bucket = bucket0; bucket != NULL; bucket = bucket->next) {
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 1;
+				*entry_ptr = (void *) bucket_data;
+
+				return 0;
+			}
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0; bucket != NULL;
+		bucket_prev = bucket, bucket = bucket->next)
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if (bucket_signature == 0) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature[i] = signature;
+				memcpy(bucket_key, key, f->key_size);
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 0;
+				*entry_ptr = (void *) bucket_data;
+
+				return 0;
+			}
+		}
+
+	/* Bucket full: extend bucket */
+	if (f->stack_pos > 0) {
+		bucket_index = f->stack[--f->stack_pos];
+
+		bucket = (struct rte_bucket_4_32 *)
+			&f->memory[(f->n_buckets + bucket_index) *
+			f->bucket_size];
+		bucket_prev->next = bucket;
+		bucket_prev->next_valid = 1;
+
+		bucket->signature[0] = signature;
+		memcpy(bucket->key[0], key, f->key_size);
+		memcpy(&bucket->data[0], entry, f->entry_size);
+		*key_found = 0;
+		*entry_ptr = (void *) &bucket->data[0];
+		return 0;
+	}
+
+	return -ENOSPC;
+}
+
+static int
+rte_table_hash_entry_delete_key32_ext(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_32 *)
+		&f->memory[bucket_index * f->bucket_size];
+	signature |= RTE_BUCKET_ENTRY_VALID;
+
+	/* Key is present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0; bucket != NULL;
+		bucket_prev = bucket, bucket = bucket->next)
+		for (i = 0; i < 4; i++) {
+			uint64_t bucket_signature = bucket->signature[i];
+			uint8_t *bucket_key = (uint8_t *) bucket->key[i];
+
+			if ((bucket_signature == signature) &&
+				(memcmp(key, bucket_key, f->key_size) == 0)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature[i] = 0;
+				*key_found = 1;
+				if (entry)
+					memcpy(entry, bucket_data,
+						f->entry_size);
+
+				if ((bucket->signature[0] == 0) &&
+						(bucket->signature[1] == 0) &&
+						(bucket->signature[2] == 0) &&
+						(bucket->signature[3] == 0) &&
+						(bucket_prev != NULL)) {
+					bucket_prev->next = bucket->next;
+					bucket_prev->next_valid =
+						bucket->next_valid;
+
+					memset(bucket, 0,
+						sizeof(struct rte_bucket_4_32));
+					/* buckets are bucket_size bytes
+					 * apart, not sizeof(*bucket) */
+					bucket_index = (((uint8_t *)bucket
+						- f->memory) / f->bucket_size)
+						- f->n_buckets;
+					f->stack[f->stack_pos++] = bucket_index;
+				}
+
+				return 0;
+			}
+		}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
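+/*
+ * Branchless 4-way key compare: for each of the 4 entries, XOR the
+ * 32-byte input key against the stored key (four 64-bit words) and OR
+ * in the inverted valid bit, so or[i] == 0 only for a valid, fully
+ * matching entry. pos ends up in 0..3 on a hit, 4 on a miss (indexing
+ * the padding signature slot).
+ */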
+#define lookup_key32_cmp(key_in, bucket, pos)			\
+{								\
+	uint64_t xor[4][4], or[4], signature[4];		\
+								\
+	signature[0] = ((~bucket->signature[0]) & 1);		\
+	signature[1] = ((~bucket->signature[1]) & 1);		\
+	signature[2] = ((~bucket->signature[2]) & 1);		\
+	signature[3] = ((~bucket->signature[3]) & 1);		\
+								\
+	xor[0][0] = key_in[0] ^ bucket->key[0][0];		\
+	xor[0][1] = key_in[1] ^ bucket->key[0][1];		\
+	xor[0][2] = key_in[2] ^ bucket->key[0][2];		\
+	xor[0][3] = key_in[3] ^ bucket->key[0][3];		\
+								\
+	xor[1][0] = key_in[0] ^ bucket->key[1][0];		\
+	xor[1][1] = key_in[1] ^ bucket->key[1][1];		\
+	xor[1][2] = key_in[2] ^ bucket->key[1][2];		\
+	xor[1][3] = key_in[3] ^ bucket->key[1][3];		\
+								\
+	xor[2][0] = key_in[0] ^ bucket->key[2][0];		\
+	xor[2][1] = key_in[1] ^ bucket->key[2][1];		\
+	xor[2][2] = key_in[2] ^ bucket->key[2][2];		\
+	xor[2][3] = key_in[3] ^ bucket->key[2][3];		\
+								\
+	xor[3][0] = key_in[0] ^ bucket->key[3][0];		\
+	xor[3][1] = key_in[1] ^ bucket->key[3][1];		\
+	xor[3][2] = key_in[2] ^ bucket->key[3][2];		\
+	xor[3][3] = key_in[3] ^ bucket->key[3][3];		\
+								\
+	or[0] = xor[0][0] | xor[0][1] | xor[0][2] | xor[0][3] | signature[0];\
+	or[1] = xor[1][0] | xor[1][1] | xor[1][2] | xor[1][3] | signature[1];\
+	or[2] = xor[2][0] | xor[2][1] | xor[2][2] | xor[2][3] | signature[2];\
+	or[3] = xor[3][0] | xor[3][1] | xor[3][2] | xor[3][3] | signature[3];\
+								\
+	pos = 4;						\
+	if (or[0] == 0)						\
+		pos = 0;					\
+	if (or[1] == 0)						\
+		pos = 1;					\
+	if (or[2] == 0)						\
+		pos = 2;					\
+	if (or[3] == 0)						\
+		pos = 3;					\
+}
+
+#define lookup1_stage0(pkt0_index, mbuf0, pkts, pkts_mask)	\
+{								\
+	uint64_t pkt_mask;					\
+								\
+	pkt0_index = __builtin_ctzll(pkts_mask);		\
+	pkt_mask = 1LLU << pkt0_index;				\
+	pkts_mask &= ~pkt_mask;					\
+								\
+	mbuf0 = pkts[pkt0_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf0, 0));	\
+}
+
+#define lookup1_stage1(mbuf1, bucket1, f)			\
+{								\
+	uint64_t signature;					\
+	uint32_t bucket_index;					\
+								\
+	signature = RTE_MBUF_METADATA_UINT32(mbuf1, f->signature_offset);\
+	bucket_index = signature & (f->n_buckets - 1);		\
+	bucket1 = (struct rte_bucket_4_32 *)			\
+		&f->memory[bucket_index * f->bucket_size];	\
+	rte_prefetch0(bucket1);					\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket1) + 2 * CACHE_LINE_SIZE));\
+}
+
+#define lookup1_stage2_lru(pkt2_index, mbuf2, bucket2,		\
+	pkts_mask_out, entries, f)				\
+{								\
+	void *a;						\
+	uint64_t pkt_mask;					\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key32_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+	lru_update(bucket2, pos);				\
+}
+
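+/*
+ * Extendible variant of stage 2: same compare as the LRU variant, but
+ * on a miss with a valid chained bucket the packet is recorded in
+ * buckets_mask (~pkt_mask & (next_valid << index)) so the grinder loop
+ * can retry the next link of the chain.
+ */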
+#define lookup1_stage2_ext(pkt2_index, mbuf2, bucket2, pkts_mask_out,\
+	entries, buckets_mask, buckets, keys, f)		\
+{								\
+	struct rte_bucket_4_32 *bucket_next;			\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key32_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+								\
+	bucket_mask = (~pkt_mask) & (bucket2->next_valid << pkt2_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket2->next;				\
+	buckets[pkt2_index] = bucket_next;			\
+	keys[pkt2_index] = key;					\
+}
+
+#define lookup_grinder(pkt_index, buckets, keys, pkts_mask_out,	\
+	entries, buckets_mask, f)				\
+{								\
+	struct rte_bucket_4_32 *bucket, *bucket_next;		\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	bucket = buckets[pkt_index];				\
+	key = keys[pkt_index];					\
+								\
+	lookup_key32_cmp(key, bucket, pos);			\
+								\
+	pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt_index] = a;					\
+								\
+	bucket_mask = (~pkt_mask) & (bucket->next_valid << pkt_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket->next;				\
+	rte_prefetch0(bucket_next);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket_next) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket_next) +	\
+		2 * CACHE_LINE_SIZE));				\
+	buckets[pkt_index] = bucket_next;			\
+	keys[pkt_index] = key;					\
+}
+
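+/*
+ * The bulk lookup below is software-pipelined, two packets per lane:
+ * stage 0 selects the next two packets and prefetches their meta-data,
+ * stage 1 computes the bucket pointers and prefetches the buckets,
+ * stage 2 performs the key compare and resolves hits. With at least 5
+ * packets in flight, the three stages overlap and hide the latency of
+ * each prefetch.
+ */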
+#define lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01,\
+	pkts, pkts_mask)					\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
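+/*
+ * Stage 0 variant for the final iterations: with an odd number of
+ * packets left, the second lane simply re-processes the last packet
+ * (pkt01_index = pkt00_index), which is harmless because stage 2 then
+ * recomputes the same result for that index.
+ */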
+#define lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,\
+	mbuf00, mbuf01, pkts, pkts_mask)			\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	if (pkts_mask == 0)					\
+		pkt01_index = pkt00_index;			\
+								\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f)	\
+{								\
+	uint64_t signature10, signature11;			\
+	uint32_t bucket10_index, bucket11_index;		\
+								\
+	signature10 = RTE_MBUF_METADATA_UINT32(mbuf10, f->signature_offset);\
+	bucket10_index = signature10 & (f->n_buckets - 1);	\
+	bucket10 = (struct rte_bucket_4_32 *)			\
+		&f->memory[bucket10_index * f->bucket_size];	\
+	rte_prefetch0(bucket10);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket10) + 2 * CACHE_LINE_SIZE));\
+								\
+	signature11 = RTE_MBUF_METADATA_UINT32(mbuf11, f->signature_offset);\
+	bucket11_index = signature11 & (f->n_buckets - 1);	\
+	bucket11 = (struct rte_bucket_4_32 *)			\
+		&f->memory[bucket11_index * f->bucket_size];	\
+	rte_prefetch0(bucket11);				\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + CACHE_LINE_SIZE));\
+	rte_prefetch0((void *)(((uintptr_t) bucket11) + 2 * CACHE_LINE_SIZE));\
+}
+
+#define lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,\
+	bucket20, bucket21, pkts_mask_out, entries, f)		\
+{								\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask;			\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key32_cmp(key20, bucket20, pos20);		\
+	lookup_key32_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
+	pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;		\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+	lru_update(bucket20, pos20);				\
+	lru_update(bucket21, pos21);				\
+}
+
+#define lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21, bucket20, \
+	bucket21, pkts_mask_out, entries, buckets_mask, buckets, keys, f)\
+{								\
+	struct rte_bucket_4_32 *bucket20_next, *bucket21_next;	\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key32_cmp(key20, bucket20, pos20);		\
+	lookup_key32_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
+	pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;		\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+								\
+	bucket20_mask = (~pkt20_mask) & (bucket20->next_valid << pkt20_index);\
+	bucket21_mask = (~pkt21_mask) & (bucket21->next_valid << pkt21_index);\
+	buckets_mask |= bucket20_mask | bucket21_mask;		\
+	bucket20_next = bucket20->next;				\
+	bucket21_next = bucket21->next;				\
+	buckets[pkt20_index] = bucket20_next;			\
+	buckets[pkt21_index] = bucket21_next;			\
+	keys[pkt20_index] = key20;				\
+	keys[pkt21_index] = key21;				\
+}
+
+static int
+rte_table_hash_lookup_key32_lru(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0;
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_32 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_lru(pkt_index, mbuf, bucket,
+					pkts_mask_out, entries, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_lru(pkt20_index, pkt21_index,
+			mbuf20, mbuf21, bucket20, bucket21, pkts_mask_out,
+			entries, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index,
+		mbuf20, mbuf21, bucket20, bucket21, pkts_mask_out, entries, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index,
+		mbuf20, mbuf21, bucket20, bucket21, pkts_mask_out, entries, f);
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key32_lru() */
+
+static int
+rte_table_hash_lookup_key32_ext(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_32 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0, buckets_mask = 0;
+	struct rte_bucket_4_32 *buckets[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t *keys[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_32 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_ext(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, buckets_mask, buckets,
+				keys, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries,
+			buckets_mask, buckets, keys, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/*
+	 * Grind next buckets: retry packets that missed in a bucket but
+	 * have a valid chained bucket, one chain link per outer iteration,
+	 * until every chain is exhausted.
+	 */
+	for ( ; buckets_mask; ) {
+		uint64_t buckets_mask_next = 0;
+
+		for ( ; buckets_mask; ) {
+			uint64_t pkt_mask;
+			uint32_t pkt_index;
+
+			pkt_index = __builtin_ctzll(buckets_mask);
+			pkt_mask = 1LLU << pkt_index;
+			buckets_mask &= ~pkt_mask;
+
+			lookup_grinder(pkt_index, buckets, keys, pkts_mask_out,
+				entries, buckets_mask_next, f);
+		}
+
+		buckets_mask = buckets_mask_next;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key32_ext() */
+
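+/*
+ * Flavor trade-off: the LRU ops silently evict the least recently used
+ * entry of a full bucket on add, while the EXT ops chain a spare
+ * bucket instead and only fail with -ENOSPC once the spare pool is
+ * exhausted. Use LRU for cache-like tables, EXT when no valid key may
+ * be dropped.
+ */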
+struct rte_table_ops rte_table_hash_key32_lru_ops = {
+	.f_create = rte_table_hash_create_key32_lru,
+	.f_free = rte_table_hash_free_key32_lru,
+	.f_add = rte_table_hash_entry_add_key32_lru,
+	.f_delete = rte_table_hash_entry_delete_key32_lru,
+	.f_lookup = rte_table_hash_lookup_key32_lru,
+};
+
+struct rte_table_ops rte_table_hash_key32_ext_ops = {
+	.f_create = rte_table_hash_create_key32_ext,
+	.f_free = rte_table_hash_free_key32_ext,
+	.f_add = rte_table_hash_entry_add_key32_ext,
+	.f_delete = rte_table_hash_entry_delete_key32_ext,
+	.f_lookup = rte_table_hash_lookup_key32_ext,
+};
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
new file mode 100644
index 0000000..d60c96e
--- /dev/null
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -0,0 +1,1398 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_hash.h"
+#include "rte_lru.h"
+
+#define RTE_TABLE_HASH_KEY_SIZE						8
+
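+/*
+ * Unlike the 16/32-byte key variants, the 8-byte key bucket packs the
+ * four per-entry valid flags into the low bits of a single signature
+ * word (bit i set means entry i is valid), and keys are compared as
+ * plain uint64_t values instead of with memcmp().
+ */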
+struct rte_bucket_4_8 {
+	/* Cache line 0 */
+	uint64_t signature;
+	uint64_t lru_list;
+	struct rte_bucket_4_8 *next;
+	uint64_t next_valid;
+
+	uint64_t key[4];
+
+	/* Cache line 1 */
+	uint8_t data[0];
+};
+
+struct rte_table_hash {
+	/* Input parameters */
+	uint32_t n_buckets;
+	uint32_t n_entries_per_bucket;
+	uint32_t key_size;
+	uint32_t entry_size;
+	uint32_t bucket_size;
+	uint32_t signature_offset;
+	uint32_t key_offset;
+	rte_table_hash_op_hash f_hash;
+	uint64_t seed;
+
+	/* Extendible buckets */
+	uint32_t n_buckets_ext;
+	uint32_t stack_pos;
+	uint32_t *stack;
+
+	/* Lookup table */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+static int
+check_params_create_lru(struct rte_table_hash_key8_lru_params *params)
+{
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_hash_key8_lru_params *p =
+		(struct rte_table_hash_key8_lru_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_entries_per_bucket, key_size, bucket_size_cl;
+	uint32_t total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_lru(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_8) % CACHE_LINE_SIZE) != 0)) {
+		return NULL;
+	}
+	n_entries_per_bucket = 4;
+	key_size = 8;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	bucket_size_cl = (sizeof(struct rte_bucket_4_8) + n_entries_per_bucket *
+		entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) + n_buckets *
+		bucket_size_cl * CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n",
+		__func__, total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
+	for (i = 0; i < n_buckets; i++) {
+		struct rte_bucket_4_8 *bucket;
+
+		bucket = (struct rte_bucket_4_8 *) &f->memory[i *
+			f->bucket_size];
+		bucket->lru_list = 0x0000000100020003LLU;
+	}
+
+	return f;
+}
+
+static int
+rte_table_hash_free_key8_lru(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key8_lru(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket;
+	uint64_t signature, mask, pos;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_8 *)
+		&f->memory[bucket_index * f->bucket_size];
+
+	/* Key is present in the bucket */
+	for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+		uint64_t bucket_signature = bucket->signature;
+		uint64_t bucket_key = bucket->key[i];
+
+		if ((bucket_signature & mask) &&
+		    (*((uint64_t *) key) == bucket_key)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 1;
+			*entry_ptr = (void *) bucket_data;
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+		uint64_t bucket_signature = bucket->signature;
+
+		if ((bucket_signature & mask) == 0) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature |= mask;
+			bucket->key[i] = *((uint64_t *) key);
+			memcpy(bucket_data, entry, f->entry_size);
+			lru_update(bucket, i);
+			*key_found = 0;
+			*entry_ptr = (void *) bucket_data;
+
+			return 0;
+		}
+	}
+
+	/* Bucket full: replace LRU entry */
+	pos = lru_pos(bucket);
+	bucket->key[pos] = *((uint64_t *) key);
+	memcpy(&bucket->data[pos * f->entry_size], entry, f->entry_size);
+	lru_update(bucket, pos);
+	*key_found = 0;
+	*entry_ptr = (void *) &bucket->data[pos * f->entry_size];
+
+	return 0;
+}
+
+static int
+rte_table_hash_entry_delete_key8_lru(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket;
+	uint64_t signature, mask;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket = (struct rte_bucket_4_8 *)
+		&f->memory[bucket_index * f->bucket_size];
+
+	/* Key is present in the bucket */
+	for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+		uint64_t bucket_signature = bucket->signature;
+		uint64_t bucket_key = bucket->key[i];
+
+		if ((bucket_signature & mask) &&
+		    (*((uint64_t *) key) == bucket_key)) {
+			uint8_t *bucket_data = &bucket->data[i * f->entry_size];
+
+			bucket->signature &= ~mask;
+			*key_found = 1;
+			if (entry)
+				memcpy(entry, bucket_data, f->entry_size);
+
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+static int
+check_params_create_ext(struct rte_table_hash_key8_ext_params *params)
+{
+	/* n_entries */
+	if (params->n_entries == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_entries_ext */
+	if (params->n_entries_ext == 0) {
+		RTE_LOG(ERR, TABLE, "%s: n_entries_ext is zero\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid signature_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: invalid key_offset\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_hash_key8_ext_params *p =
+		(struct rte_table_hash_key8_ext_params *) params;
+	struct rte_table_hash *f;
+	uint32_t n_buckets, n_buckets_ext, n_entries_per_bucket, key_size;
+	uint32_t bucket_size_cl, stack_size_cl, total_size, i;
+
+	/* Check input parameters */
+	if ((check_params_create_ext(p) != 0) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		((sizeof(struct rte_bucket_4_8) % CACHE_LINE_SIZE) != 0))
+		return NULL;
+
+	n_entries_per_bucket = 4;
+	key_size = 8;
+
+	/* Memory allocation */
+	n_buckets = rte_align32pow2((p->n_entries + n_entries_per_bucket - 1) /
+		n_entries_per_bucket);
+	n_buckets_ext = (p->n_entries_ext + n_entries_per_bucket - 1) /
+		n_entries_per_bucket;
+	bucket_size_cl = (sizeof(struct rte_bucket_4_8) + n_entries_per_bucket *
+		entry_size + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
+	stack_size_cl = (n_buckets_ext * sizeof(uint32_t) + CACHE_LINE_SIZE - 1)
+		/ CACHE_LINE_SIZE;
+	total_size = sizeof(struct rte_table_hash) + ((n_buckets +
+		n_buckets_ext) * bucket_size_cl + stack_size_cl) *
+		CACHE_LINE_SIZE;
+
+	f = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE,
+		"%s: Hash table memory footprint is %u bytes\n",
+		__func__, total_size);
+
+	/* Memory initialization */
+	f->n_buckets = n_buckets;
+	f->n_entries_per_bucket = n_entries_per_bucket;
+	f->key_size = key_size;
+	f->entry_size = entry_size;
+	f->bucket_size = bucket_size_cl * CACHE_LINE_SIZE;
+	f->signature_offset = p->signature_offset;
+	f->key_offset = p->key_offset;
+	f->f_hash = p->f_hash;
+	f->seed = p->seed;
+
+	f->n_buckets_ext = n_buckets_ext;
+	f->stack_pos = n_buckets_ext;
+	f->stack = (uint32_t *)
+		&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+
+	for (i = 0; i < n_buckets_ext; i++)
+		f->stack[i] = i;
+
+	return f;
+}
+
+static int
+rte_table_hash_free_key8_ext(void *table)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (f == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	rte_free(f);
+	return 0;
+}
+
+static int
+rte_table_hash_entry_add_key8_ext(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_8 *)
+		&f->memory[bucket_index * f->bucket_size];
+
+	/* Key is present in the bucket */
+	for (bucket = bucket0; bucket != NULL; bucket = bucket->next) {
+		uint64_t mask;
+
+		for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+			uint64_t bucket_signature = bucket->signature;
+			uint64_t bucket_key = bucket->key[i];
+
+			if ((bucket_signature & mask) &&
+					(*((uint64_t *) key) == bucket_key)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 1;
+				*entry_ptr = (void *) bucket_data;
+				return 0;
+			}
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0;
+		bucket != NULL; bucket_prev = bucket, bucket = bucket->next) {
+		uint64_t mask;
+
+		for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+			uint64_t bucket_signature = bucket->signature;
+
+			if ((bucket_signature & mask) == 0) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature |= mask;
+				bucket->key[i] = *((uint64_t *) key);
+				memcpy(bucket_data, entry, f->entry_size);
+				*key_found = 0;
+				*entry_ptr = (void *) bucket_data;
+
+				return 0;
+			}
+		}
+	}
+
+	/* Bucket full: extend bucket */
+	if (f->stack_pos > 0) {
+		bucket_index = f->stack[--f->stack_pos];
+
+		bucket = (struct rte_bucket_4_8 *) &f->memory[(f->n_buckets +
+			bucket_index) * f->bucket_size];
+		bucket_prev->next = bucket;
+		bucket_prev->next_valid = 1;
+
+		bucket->signature = 1;
+		bucket->key[0] = *((uint64_t *) key);
+		memcpy(&bucket->data[0], entry, f->entry_size);
+		*key_found = 0;
+		*entry_ptr = (void *) &bucket->data[0];
+		return 0;
+	}
+
+	return -ENOSPC;
+}
+
+static int
+rte_table_hash_entry_delete_key8_ext(
+	void *table,
+	void *key,
+	int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket0, *bucket, *bucket_prev;
+	uint64_t signature;
+	uint32_t bucket_index, i;
+
+	signature = f->f_hash(key, f->key_size, f->seed);
+	bucket_index = signature & (f->n_buckets - 1);
+	bucket0 = (struct rte_bucket_4_8 *)
+		&f->memory[bucket_index * f->bucket_size];
+
+	/* Key is present in the bucket */
+	for (bucket_prev = NULL, bucket = bucket0; bucket != NULL;
+		bucket_prev = bucket, bucket = bucket->next) {
+		uint64_t mask;
+
+		for (i = 0, mask = 1LLU; i < 4; i++, mask <<= 1) {
+			uint64_t bucket_signature = bucket->signature;
+			uint64_t bucket_key = bucket->key[i];
+
+			if ((bucket_signature & mask) &&
+				(*((uint64_t *) key) == bucket_key)) {
+				uint8_t *bucket_data = &bucket->data[i *
+					f->entry_size];
+
+				bucket->signature &= ~mask;
+				*key_found = 1;
+				if (entry)
+					memcpy(entry, bucket_data,
+						f->entry_size);
+
+				if ((bucket->signature == 0) &&
+				    (bucket_prev != NULL)) {
+					bucket_prev->next = bucket->next;
+					bucket_prev->next_valid =
+						bucket->next_valid;
+
+					memset(bucket, 0,
+						sizeof(struct rte_bucket_4_8));
+					/* buckets are bucket_size bytes
+					 * apart, not sizeof(*bucket) */
+					bucket_index = (((uint8_t *)bucket
+						- f->memory) / f->bucket_size)
+						- f->n_buckets;
+					f->stack[f->stack_pos++] = bucket_index;
+				}
+
+				return 0;
+			}
+		}
+	}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+#define lookup_key8_cmp(key_in, bucket, pos)			\
+{								\
+	uint64_t xor[4], signature;				\
+								\
+	signature = ~bucket->signature;				\
+								\
+	xor[0] = (key_in[0] ^ bucket->key[0]) | (signature & 1);\
+	xor[1] = (key_in[0] ^ bucket->key[1]) | (signature & 2);\
+	xor[2] = (key_in[0] ^ bucket->key[2]) | (signature & 4);\
+	xor[3] = (key_in[0] ^ bucket->key[3]) | (signature & 8);\
+								\
+	pos = 4;						\
+	if (xor[0] == 0)					\
+		pos = 0;					\
+	if (xor[1] == 0)					\
+		pos = 1;					\
+	if (xor[2] == 0)					\
+		pos = 2;					\
+	if (xor[3] == 0)					\
+		pos = 3;					\
+}
+
+#define lookup1_stage0(pkt0_index, mbuf0, pkts, pkts_mask)	\
+{								\
+	uint64_t pkt_mask;					\
+								\
+	pkt0_index = __builtin_ctzll(pkts_mask);		\
+	pkt_mask = 1LLU << pkt0_index;				\
+	pkts_mask &= ~pkt_mask;					\
+								\
+	mbuf0 = pkts[pkt0_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf0, 0));	\
+}
+
+#define lookup1_stage1(mbuf1, bucket1, f)			\
+{								\
+	uint64_t signature;					\
+	uint32_t bucket_index;					\
+								\
+	signature = RTE_MBUF_METADATA_UINT32(mbuf1, f->signature_offset);\
+	bucket_index = signature & (f->n_buckets - 1);		\
+	bucket1 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket_index * f->bucket_size];	\
+	rte_prefetch0(bucket1);					\
+}
+
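+/*
+ * "dosig" variant of stage 1: instead of reading a precomputed
+ * signature from the mbuf meta-data, the 8-byte key is hashed in
+ * place; used by the _dosig lookup flavor for callers that do not
+ * pre-compute packet signatures.
+ */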
+#define lookup1_stage1_dosig(mbuf1, bucket1, f)			\
+{								\
+	uint64_t *key;						\
+	uint64_t signature;					\
+	uint32_t bucket_index;					\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf1, f->key_offset);\
+	signature = f->f_hash(key, RTE_TABLE_HASH_KEY_SIZE, f->seed);\
+	bucket_index = signature & (f->n_buckets - 1);		\
+	bucket1 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket_index * f->bucket_size];	\
+	rte_prefetch0(bucket1);					\
+}
+
+#define lookup1_stage2_lru(pkt2_index, mbuf2, bucket2,		\
+	pkts_mask_out, entries, f)				\
+{								\
+	void *a;						\
+	uint64_t pkt_mask;					\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key8_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+	lru_update(bucket2, pos);				\
+}
+
+#define lookup1_stage2_ext(pkt2_index, mbuf2, bucket2, pkts_mask_out,\
+	entries, buckets_mask, buckets, keys, f)		\
+{								\
+	struct rte_bucket_4_8 *bucket_next;			\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+								\
+	lookup_key8_cmp(key, bucket2, pos);			\
+								\
+	pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket2->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt2_index] = a;				\
+								\
+	bucket_mask = (~pkt_mask) & (bucket2->next_valid << pkt2_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket2->next;				\
+	buckets[pkt2_index] = bucket_next;			\
+	keys[pkt2_index] = key;					\
+}
+
+#define lookup_grinder(pkt_index, buckets, keys, pkts_mask_out, entries,\
+	buckets_mask, f)					\
+{								\
+	struct rte_bucket_4_8 *bucket, *bucket_next;		\
+	void *a;						\
+	uint64_t pkt_mask, bucket_mask;				\
+	uint64_t *key;						\
+	uint32_t pos;						\
+								\
+	bucket = buckets[pkt_index];				\
+	key = keys[pkt_index];					\
+								\
+	lookup_key8_cmp(key, bucket, pos);			\
+								\
+	pkt_mask = ((bucket->signature >> pos) & 1LLU) << pkt_index;\
+	pkts_mask_out |= pkt_mask;				\
+								\
+	a = (void *) &bucket->data[pos * f->entry_size];	\
+	rte_prefetch0(a);					\
+	entries[pkt_index] = a;					\
+								\
+	bucket_mask = (~pkt_mask) & (bucket->next_valid << pkt_index);\
+	buckets_mask |= bucket_mask;				\
+	bucket_next = bucket->next;				\
+	rte_prefetch0(bucket_next);				\
+	buckets[pkt_index] = bucket_next;			\
+	keys[pkt_index] = key;					\
+}
+
+#define lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01,\
+	pkts, pkts_mask)					\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,\
+	mbuf00, mbuf01, pkts, pkts_mask)			\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+								\
+	mbuf00 = pkts[pkt00_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	if (pkts_mask == 0)					\
+		pkt01_index = pkt00_index;			\
+								\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+								\
+	mbuf01 = pkts[pkt01_index];				\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f)	\
+{								\
+	uint64_t signature10, signature11;			\
+	uint32_t bucket10_index, bucket11_index;		\
+								\
+	signature10 = RTE_MBUF_METADATA_UINT32(mbuf10, f->signature_offset);\
+	bucket10_index = signature10 & (f->n_buckets - 1);	\
+	bucket10 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket10_index * f->bucket_size];	\
+	rte_prefetch0(bucket10);				\
+								\
+	signature11 = RTE_MBUF_METADATA_UINT32(mbuf11, f->signature_offset);\
+	bucket11_index = signature11 & (f->n_buckets - 1);	\
+	bucket11 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket11_index * f->bucket_size];	\
+	rte_prefetch0(bucket11);				\
+}
+
+#define lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f)\
+{								\
+	uint64_t *key10, *key11;				\
+	uint64_t signature10, signature11;			\
+	uint32_t bucket10_index, bucket11_index;		\
+	rte_table_hash_op_hash f_hash = f->f_hash;		\
+	uint64_t seed = f->seed;				\
+	uint32_t key_offset = f->key_offset;			\
+								\
+	key10 = RTE_MBUF_METADATA_UINT64_PTR(mbuf10, key_offset);\
+	key11 = RTE_MBUF_METADATA_UINT64_PTR(mbuf11, key_offset);\
+								\
+	signature10 = f_hash(key10, RTE_TABLE_HASH_KEY_SIZE, seed);\
+	bucket10_index = signature10 & (f->n_buckets - 1);	\
+	bucket10 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket10_index * f->bucket_size];	\
+	rte_prefetch0(bucket10);				\
+								\
+	signature11 = f_hash(key11, RTE_TABLE_HASH_KEY_SIZE, seed);\
+	bucket11_index = signature11 & (f->n_buckets - 1);	\
+	bucket11 = (struct rte_bucket_4_8 *)			\
+		&f->memory[bucket11_index * f->bucket_size];	\
+	rte_prefetch0(bucket11);				\
+}
+
+#define lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,\
+	bucket20, bucket21, pkts_mask_out, entries, f)		\
+{								\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask;			\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key8_cmp(key20, bucket20, pos20);		\
+	lookup_key8_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
+	pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;		\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+	lru_update(bucket20, pos20);				\
+	lru_update(bucket21, pos21);				\
+}
+
+#define lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21, bucket20, \
+	bucket21, pkts_mask_out, entries, buckets_mask, buckets, keys, f)\
+{								\
+	struct rte_bucket_4_8 *bucket20_next, *bucket21_next;	\
+	void *a20, *a21;					\
+	uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
+	uint64_t *key20, *key21;				\
+	uint32_t pos20, pos21;					\
+								\
+	key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
+	key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+								\
+	lookup_key8_cmp(key20, bucket20, pos20);		\
+	lookup_key8_cmp(key21, bucket21, pos21);		\
+								\
+	pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
+	pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
+	pkts_mask_out |= pkt20_mask | pkt21_mask;		\
+								\
+	a20 = (void *) &bucket20->data[pos20 * f->entry_size];	\
+	a21 = (void *) &bucket21->data[pos21 * f->entry_size];	\
+	rte_prefetch0(a20);					\
+	rte_prefetch0(a21);					\
+	entries[pkt20_index] = a20;				\
+	entries[pkt21_index] = a21;				\
+								\
+	bucket20_mask = (~pkt20_mask) & (bucket20->next_valid << pkt20_index);\
+	bucket21_mask = (~pkt21_mask) & (bucket21->next_valid << pkt21_index);\
+	buckets_mask |= bucket20_mask | bucket21_mask;		\
+	bucket20_next = bucket20->next;				\
+	bucket21_next = bucket21->next;				\
+	buckets[pkt20_index] = bucket20_next;			\
+	buckets[pkt21_index] = bucket21_next;			\
+	keys[pkt20_index] = key20;				\
+	keys[pkt21_index] = key21;				\
+}
+
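+/*
+ * The lookup functions below implement a 3-stage pipeline, with each stage
+ * processing two packets: stage 0 selects the next packets from pkts_mask
+ * and prefetches their meta-data, stage 1 reads (or computes, in the dosig
+ * variants) the signature and prefetches the bucket, and stage 2 compares
+ * the keys, records the hits and updates the bucket state (LRU order or
+ * extendible chain).
+ */
+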
+static int
+rte_table_hash_lookup_key8_lru(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index,
+			pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0;
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_8 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_lru(pkt_index, mbuf, bucket,
+					pkts_mask_out, entries, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key8_lru() */
+
+static int
+rte_table_hash_lookup_key8_lru_dosig(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0;
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_8 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1_dosig(mbuf, bucket, f);
+			lookup1_stage2_lru(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_lru(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries, f);
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key8_lru_dosig() */
+
+static int
+rte_table_hash_lookup_key8_ext(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0, buckets_mask = 0;
+	struct rte_bucket_4_8 *buckets[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t *keys[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_8 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1(mbuf, bucket, f);
+			lookup1_stage2_ext(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, buckets_mask, buckets,
+				keys, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries,
+			buckets_mask, buckets, keys, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Grind next buckets */
+	for ( ; buckets_mask; ) {
+		uint64_t buckets_mask_next = 0;
+
+		for ( ; buckets_mask; ) {
+			uint64_t pkt_mask;
+			uint32_t pkt_index;
+
+			pkt_index = __builtin_ctzll(buckets_mask);
+			pkt_mask = 1LLU << pkt_index;
+			buckets_mask &= ~pkt_mask;
+
+			lookup_grinder(pkt_index, buckets, keys, pkts_mask_out,
+				entries, buckets_mask_next, f);
+		}
+
+		buckets_mask = buckets_mask_next;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key8_ext() */
+
+static int
+rte_table_hash_lookup_key8_ext_dosig(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *f = (struct rte_table_hash *) table;
+	struct rte_bucket_4_8 *bucket10, *bucket11, *bucket20, *bucket21;
+	struct rte_mbuf *mbuf00, *mbuf01, *mbuf10, *mbuf11, *mbuf20, *mbuf21;
+	uint32_t pkt00_index, pkt01_index, pkt10_index;
+	uint32_t pkt11_index, pkt20_index, pkt21_index;
+	uint64_t pkts_mask_out = 0, buckets_mask = 0;
+	struct rte_bucket_4_8 *buckets[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t *keys[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Cannot run the pipeline with less than 5 packets */
+	if (__builtin_popcountll(pkts_mask) < 5) {
+		for ( ; pkts_mask; ) {
+			struct rte_bucket_4_8 *bucket;
+			struct rte_mbuf *mbuf;
+			uint32_t pkt_index;
+
+			lookup1_stage0(pkt_index, mbuf, pkts, pkts_mask);
+			lookup1_stage1_dosig(mbuf, bucket, f);
+			lookup1_stage2_ext(pkt_index, mbuf, bucket,
+				pkts_mask_out, entries, buckets_mask,
+				buckets, keys, f);
+		}
+
+		*lookup_hit_mask = pkts_mask_out;
+		return 0;
+	}
+
+	/*
+	 * Pipeline fill
+	 *
+	 */
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline feed */
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(pkt00_index, pkt01_index, mbuf00, mbuf01, pkts,
+		pkts_mask);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/*
+	 * Pipeline run
+	 *
+	 */
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		bucket20 = bucket10;
+		bucket21 = bucket11;
+		mbuf20 = mbuf10;
+		mbuf21 = mbuf11;
+		mbuf10 = mbuf00;
+		mbuf11 = mbuf01;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(pkt00_index, pkt01_index,
+			mbuf00, mbuf01, pkts, pkts_mask);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+			bucket20, bucket21, pkts_mask_out, entries,
+			buckets_mask, buckets, keys, f);
+	}
+
+	/*
+	 * Pipeline flush
+	 *
+	 */
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	mbuf10 = mbuf00;
+	mbuf11 = mbuf01;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Pipeline feed */
+	bucket20 = bucket10;
+	bucket21 = bucket11;
+	mbuf20 = mbuf10;
+	mbuf21 = mbuf11;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2_ext(pkt20_index, pkt21_index, mbuf20, mbuf21,
+		bucket20, bucket21, pkts_mask_out, entries,
+		buckets_mask, buckets, keys, f);
+
+	/* Grind next buckets */
+	for ( ; buckets_mask; ) {
+		uint64_t buckets_mask_next = 0;
+
+		for ( ; buckets_mask; ) {
+			uint64_t pkt_mask;
+			uint32_t pkt_index;
+
+			pkt_index = __builtin_ctzll(buckets_mask);
+			pkt_mask = 1LLU << pkt_index;
+			buckets_mask &= ~pkt_mask;
+
+			lookup_grinder(pkt_index, buckets, keys, pkts_mask_out,
+				entries, buckets_mask_next, f);
+		}
+
+		buckets_mask = buckets_mask_next;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+} /* rte_table_hash_lookup_key8_ext_dosig() */
+
+struct rte_table_ops rte_table_hash_key8_lru_ops = {
+	.f_create = rte_table_hash_create_key8_lru,
+	.f_free = rte_table_hash_free_key8_lru,
+	.f_add = rte_table_hash_entry_add_key8_lru,
+	.f_delete = rte_table_hash_entry_delete_key8_lru,
+	.f_lookup = rte_table_hash_lookup_key8_lru,
+};
+
+struct rte_table_ops rte_table_hash_key8_lru_dosig_ops = {
+	.f_create = rte_table_hash_create_key8_lru,
+	.f_free = rte_table_hash_free_key8_lru,
+	.f_add = rte_table_hash_entry_add_key8_lru,
+	.f_delete = rte_table_hash_entry_delete_key8_lru,
+	.f_lookup = rte_table_hash_lookup_key8_lru_dosig,
+};
+
+struct rte_table_ops rte_table_hash_key8_ext_ops = {
+	.f_create = rte_table_hash_create_key8_ext,
+	.f_free = rte_table_hash_free_key8_ext,
+	.f_add = rte_table_hash_entry_add_key8_ext,
+	.f_delete = rte_table_hash_entry_delete_key8_ext,
+	.f_lookup = rte_table_hash_lookup_key8_ext,
+};
+
+struct rte_table_ops rte_table_hash_key8_ext_dosig_ops = {
+	.f_create = rte_table_hash_create_key8_ext,
+	.f_free = rte_table_hash_free_key8_ext,
+	.f_add = rte_table_hash_entry_add_key8_ext,
+	.f_delete = rte_table_hash_entry_delete_key8_ext,
+	.f_lookup = rte_table_hash_lookup_key8_ext_dosig,
+};
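
A minimal sketch of how one of these ops tables is consumed through the
generic rte_table_ops interface (the wrapper function, creation parameters
and entry size below are illustrative placeholders):

	#include <rte_mbuf.h>
	#include "rte_table_hash.h"

	static void
	key8_lru_example(void *create_params, int socket_id,
		struct rte_mbuf **pkts, uint64_t pkts_mask)
	{
		struct rte_table_ops *ops = &rte_table_hash_key8_lru_ops;
		void *entries[RTE_PORT_IN_BURST_SIZE_MAX];
		uint64_t hit_mask = 0;
		void *table;

		/* entry_size is application-defined (8 bytes here) */
		table = ops->f_create(create_params, socket_id, 8);
		if (table == NULL)
			return;

		/* On return, bit i of hit_mask flags a lookup hit for
		 * pkts[i] and entries[i] points to its table entry. */
		ops->f_lookup(table, pkts, pkts_mask, &hit_mask, entries);

		ops->f_free(table);
	}
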
diff --git a/lib/librte_table/rte_table_hash_lru.c b/lib/librte_table/rte_table_hash_lru.c
new file mode 100644
index 0000000..d1a4984
--- /dev/null
+++ b/lib/librte_table/rte_table_hash_lru.c
@@ -0,0 +1,1065 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_hash.h"
+#include "rte_lru.h"
+
+#define KEYS_PER_BUCKET	4
+
+struct bucket {
+	union {
+		struct bucket *next;
+		uint64_t lru_list;
+	};
+	uint16_t sig[KEYS_PER_BUCKET];
+	uint32_t key_pos[KEYS_PER_BUCKET];
+};
+
+struct grinder {
+	struct bucket *bkt;
+	uint64_t sig;
+	uint64_t match;
+	uint64_t match_pos;
+	uint32_t key_index;
+};
+
+struct rte_table_hash {
+	/* Input parameters */
+	uint32_t key_size;
+	uint32_t entry_size;
+	uint32_t n_keys;
+	uint32_t n_buckets;
+	rte_table_hash_op_hash f_hash;
+	uint64_t seed;
+	uint32_t signature_offset;
+	uint32_t key_offset;
+
+	/* Internal */
+	uint64_t bucket_mask;
+	uint32_t key_size_shl;
+	uint32_t data_size_shl;
+	uint32_t key_stack_tos;
+
+	/* Grinder */
+	struct grinder grinders[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Tables */
+	struct bucket *buckets;
+	uint8_t *key_mem;
+	uint8_t *data_mem;
+	uint32_t *key_stack;
+
+	/* Table memory */
+	uint8_t memory[0] __rte_cache_aligned;
+};
+
+static int
+check_params_create(struct rte_table_hash_lru_params *params)
+{
+	uint32_t n_buckets_min;
+
+	/* key_size */
+	if ((params->key_size == 0) ||
+		(!rte_is_power_of_2(params->key_size))) {
+		RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_keys */
+	if ((params->n_keys == 0) ||
+		(!rte_is_power_of_2(params->n_keys))) {
+		RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* n_buckets */
+	n_buckets_min = (params->n_keys + KEYS_PER_BUCKET - 1) / KEYS_PER_BUCKET;
+	if ((params->n_buckets == 0) ||
+		(!rte_is_power_of_2(params->n_buckets)) ||
+		(params->n_buckets < n_buckets_min)) {
+		RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* f_hash */
+	if (params->f_hash == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	/* signature offset */
+	if ((params->signature_offset & 0x3) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: signature_offset invalid value\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* key offset */
+	if ((params->key_offset & 0x7) != 0) {
+		RTE_LOG(ERR, TABLE, "%s: key_offset invalid value\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *
+rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_hash_lru_params *p =
+		(struct rte_table_hash_lru_params *) params;
+	struct rte_table_hash *t;
+	uint32_t total_size, table_meta_sz, table_meta_offset;
+	uint32_t bucket_sz, key_sz, key_stack_sz, data_sz;
+	uint32_t bucket_offset, key_offset, key_stack_offset, data_offset;
+	uint32_t i;
+
+	/* Check input parameters */
+	if ((check_params_create(p) != 0) ||
+		(!rte_is_power_of_2(entry_size)) ||
+		((sizeof(struct rte_table_hash) % CACHE_LINE_SIZE) != 0) ||
+		(sizeof(struct bucket) != (CACHE_LINE_SIZE / 2))) {
+		return NULL;
+	}
+
+	/* Memory allocation */
+	table_meta_sz = CACHE_LINE_ROUNDUP(sizeof(struct rte_table_hash));
+	bucket_sz = CACHE_LINE_ROUNDUP(p->n_buckets * sizeof(struct bucket));
+	key_sz = CACHE_LINE_ROUNDUP(p->n_keys * p->key_size);
+	key_stack_sz = CACHE_LINE_ROUNDUP(p->n_keys * sizeof(uint32_t));
+	data_sz = CACHE_LINE_ROUNDUP(p->n_keys * entry_size);
+	total_size = table_meta_sz + bucket_sz + key_sz + key_stack_sz +
+		data_sz;
+
+	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (t == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for hash table\n",
+			__func__, total_size);
+		return NULL;
+	}
+	RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table memory footprint is "
+		"%u bytes\n", __func__, p->key_size, total_size);
+
+	/* Memory initialization */
+	t->key_size = p->key_size;
+	t->entry_size = entry_size;
+	t->n_keys = p->n_keys;
+	t->n_buckets = p->n_buckets;
+	t->f_hash = p->f_hash;
+	t->seed = p->seed;
+	t->signature_offset = p->signature_offset;
+	t->key_offset = p->key_offset;
+
+	/* Internal */
+	t->bucket_mask = t->n_buckets - 1;
+	t->key_size_shl = __builtin_ctzl(p->key_size);
+	t->data_size_shl = __builtin_ctzl(entry_size);
+
+	/* Tables */
+	table_meta_offset = 0;
+	bucket_offset = table_meta_offset + table_meta_sz;
+	key_offset = bucket_offset + bucket_sz;
+	key_stack_offset = key_offset + key_sz;
+	data_offset = key_stack_offset + key_stack_sz;
+
+	t->buckets = (struct bucket *) &t->memory[bucket_offset];
+	t->key_mem = &t->memory[key_offset];
+	t->key_stack = (uint32_t *) &t->memory[key_stack_offset];
+	t->data_mem = &t->memory[data_offset];
+
+	/* Key stack */
+	for (i = 0; i < t->n_keys; i++)
+		t->key_stack[i] = t->n_keys - 1 - i;
+	t->key_stack_tos = t->n_keys;
+
+	/* LRU */
+	for (i = 0; i < t->n_buckets; i++) {
+		struct bucket *bkt = &t->buckets[i];
+
+		lru_init(bkt);
+	}
+
+	return t;
+}
+
+static int
+rte_table_hash_lru_free(void *table)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+
+	/* Check input parameters */
+	if (t == NULL)
+		return -EINVAL;
+
+	rte_free(t);
+	return 0;
+}
+
+static int
+rte_table_hash_lru_entry_add(void *table, void *key, void *entry,
+	int *key_found, void **entry_ptr)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct bucket *bkt;
+	uint64_t sig;
+	uint32_t bkt_index, i;
+
+	sig = t->f_hash(key, t->key_size, t->seed);
+	bkt_index = sig & t->bucket_mask;
+	bkt = &t->buckets[bkt_index];
+	sig = (sig >> 16) | 1LLU;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < KEYS_PER_BUCKET; i++) {
+		uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+		uint32_t bkt_key_index = bkt->key_pos[i];
+		uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+			t->key_size_shl];
+
+		if ((sig == bkt_sig) && (memcmp(key, bkt_key, t->key_size)
+			== 0)) {
+			uint8_t *data = &t->data_mem[bkt_key_index <<
+				t->data_size_shl];
+
+			memcpy(data, entry, t->entry_size);
+			lru_update(bkt, i);
+			*key_found = 1;
+			*entry_ptr = (void *) data;
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	for (i = 0; i < KEYS_PER_BUCKET; i++) {
+		uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+
+		if (bkt_sig == 0) {
+			uint32_t bkt_key_index;
+			uint8_t *bkt_key, *data;
+
+			/* Allocate new key */
+			if (t->key_stack_tos == 0) {
+				/* No keys available */
+				return -ENOSPC;
+			}
+			bkt_key_index = t->key_stack[--t->key_stack_tos];
+
+			/* Install new key */
+			bkt_key = &t->key_mem[bkt_key_index << t->key_size_shl];
+			data = &t->data_mem[bkt_key_index << t->data_size_shl];
+
+			bkt->sig[i] = (uint16_t) sig;
+			bkt->key_pos[i] = bkt_key_index;
+			memcpy(bkt_key, key, t->key_size);
+			memcpy(data, entry, t->entry_size);
+			lru_update(bkt, i);
+
+			*key_found = 0;
+			*entry_ptr = (void *) data;
+			return 0;
+		}
+	}
+
+	/* Bucket full */
+	{
+		uint64_t pos = lru_pos(bkt);
+		uint32_t bkt_key_index = bkt->key_pos[pos];
+		uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+			t->key_size_shl];
+		uint8_t *data = &t->data_mem[bkt_key_index << t->data_size_shl];
+
+		bkt->sig[pos] = (uint16_t) sig;
+		memcpy(bkt_key, key, t->key_size);
+		memcpy(data, entry, t->entry_size);
+		lru_update(bkt, pos);
+
+		*key_found = 0;
+		*entry_ptr = (void *) data;
+		return 0;
+	}
+}
+
+static int
+rte_table_hash_lru_entry_delete(void *table, void *key, int *key_found,
+	void *entry)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct bucket *bkt;
+	uint64_t sig;
+	uint32_t bkt_index, i;
+
+	sig = t->f_hash(key, t->key_size, t->seed);
+	bkt_index = sig & t->bucket_mask;
+	bkt = &t->buckets[bkt_index];
+	sig = (sig >> 16) | 1LLU;
+
+	/* Key is present in the bucket */
+	for (i = 0; i < KEYS_PER_BUCKET; i++) {
+		uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+		uint32_t bkt_key_index = bkt->key_pos[i];
+		uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+			t->key_size_shl];
+
+		if ((sig == bkt_sig) &&
+			(memcmp(key, bkt_key, t->key_size) == 0)) {
+			uint8_t *data = &t->data_mem[bkt_key_index <<
+				t->data_size_shl];
+
+			bkt->sig[i] = 0;
+			t->key_stack[t->key_stack_tos++] = bkt_key_index;
+			*key_found = 1;
+			memcpy(entry, data, t->entry_size);
+			return 0;
+		}
+	}
+
+	/* Key is not present in the bucket */
+	*key_found = 0;
+	return 0;
+}
+
+static int rte_table_hash_lru_lookup_unoptimized(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries,
+	int dosig)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	uint64_t pkts_mask_out = 0;
+
+	for ( ; pkts_mask; ) {
+		struct bucket *bkt;
+		struct rte_mbuf *pkt;
+		uint8_t *key;
+		uint64_t pkt_mask, sig;
+		uint32_t pkt_index, bkt_index, i;
+
+		pkt_index = __builtin_ctzll(pkts_mask);
+		pkt_mask = 1LLU << pkt_index;
+		pkts_mask &= ~pkt_mask;
+
+		pkt = pkts[pkt_index];
+		key = RTE_MBUF_METADATA_UINT8_PTR(pkt, t->key_offset);
+		if (dosig)
+			sig = (uint64_t) t->f_hash(key, t->key_size, t->seed);
+		else
+			sig = RTE_MBUF_METADATA_UINT32(pkt,
+				t->signature_offset);
+
+		bkt_index = sig & t->bucket_mask;
+		bkt = &t->buckets[bkt_index];
+		sig = (sig >> 16) | 1LLU;
+
+		/* Key is present in the bucket */
+		for (i = 0; i < KEYS_PER_BUCKET; i++) {
+			uint64_t bkt_sig = (uint64_t) bkt->sig[i];
+			uint32_t bkt_key_index = bkt->key_pos[i];
+			uint8_t *bkt_key = &t->key_mem[bkt_key_index <<
+				t->key_size_shl];
+
+			if ((sig == bkt_sig) && (memcmp(key, bkt_key,
+				t->key_size) == 0)) {
+				uint8_t *data = &t->data_mem[bkt_key_index <<
+					t->data_size_shl];
+
+				lru_update(bkt, i);
+				pkts_mask_out |= pkt_mask;
+				entries[pkt_index] = (void *) data;
+				break;
+			}
+		}
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return 0;
+}
+
+/***
+*
+* mask = match bitmask
+* match = at least one match
+* match_many = more than one match
+* match_pos = position of first match
+*
+* -----------------------------------------
+* mask    match    match_many    match_pos
+* -----------------------------------------
+* 0000    0        0             00
+* 0001    1        0             00
+* 0010    1        0             01
+* 0011    1        1             00
+* -----------------------------------------
+* 0100    1        0             10
+* 0101    1        1             00
+* 0110    1        1             01
+* 0111    1        1             00
+* -----------------------------------------
+* 1000    1        0             11
+* 1001    1        1             00
+* 1010    1        1             01
+* 1011    1        1             00
+* -----------------------------------------
+* 1100    1        1             10
+* 1101    1        1             00
+* 1110    1        1             01
+* 1111    1        1             00
+* -----------------------------------------
+*
+* match = 1111_1111_1111_1110
+* match_many = 1111_1110_1110_1000
+* match_pos = 0001_0010_0001_0011__0001_0010_0001_0000
+*
+* match = 0xFFFELLU
+* match_many = 0xFEE8LLU
+* match_pos = 0x12131210LLU
+*
+***/
+
+#define LUT_MATCH						0xFFFELLU
+#define LUT_MATCH_MANY						0xFEE8LLU
+#define LUT_MATCH_POS						0x12131210LLU
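+
+/*
+ * Illustrative decode of the LUT constants above: for mask = 0110 (matches
+ * in slots 1 and 2), mask_all = 6, so:
+ *   match      = (0xFFFELLU >> 6) & 1 = 1
+ *   match_many = (0xFEE8LLU >> 6) & 1 = 1
+ *   match_pos  = (0x12131210LLU >> (6 << 1)) & 3 = 1 (first match: slot 1)
+ */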
+
+#define lookup_cmp_sig(mbuf_sig, bucket, match, match_many, match_pos)\
+{								\
+	uint64_t bucket_sig[4], mask[4], mask_all;		\
+								\
+	bucket_sig[0] = bucket->sig[0];				\
+	bucket_sig[1] = bucket->sig[1];				\
+	bucket_sig[2] = bucket->sig[2];				\
+	bucket_sig[3] = bucket->sig[3];				\
+								\
+	bucket_sig[0] ^= mbuf_sig;				\
+	bucket_sig[1] ^= mbuf_sig;				\
+	bucket_sig[2] ^= mbuf_sig;				\
+	bucket_sig[3] ^= mbuf_sig;				\
+								\
+	mask[0] = 0;						\
+	mask[1] = 0;						\
+	mask[2] = 0;						\
+	mask[3] = 0;						\
+								\
+	if (bucket_sig[0] == 0)					\
+		mask[0] = 1;					\
+	if (bucket_sig[1] == 0)					\
+		mask[1] = 2;					\
+	if (bucket_sig[2] == 0)					\
+		mask[2] = 4;					\
+	if (bucket_sig[3] == 0)					\
+		mask[3] = 8;					\
+								\
+	mask_all = (mask[0] | mask[1]) | (mask[2] | mask[3]);	\
+								\
+	match = (LUT_MATCH >> mask_all) & 1;			\
+	match_many = (LUT_MATCH_MANY >> mask_all) & 1;		\
+	match_pos = (LUT_MATCH_POS >> (mask_all << 1)) & 3;	\
+}
+
+#define lookup_cmp_key(mbuf, key, match_key, f)			\
+{								\
+	uint64_t *pkt_key = RTE_MBUF_METADATA_UINT64_PTR(mbuf, f->key_offset);\
+	uint64_t *bkt_key = (uint64_t *) key;			\
+								\
+	switch (f->key_size) {					\
+	case 8:							\
+	{							\
+		uint64_t xor = pkt_key[0] ^ bkt_key[0];		\
+		match_key = 0;					\
+		if (xor == 0)					\
+			match_key = 1;				\
+	}							\
+	break;							\
+								\
+	case 16:						\
+	{							\
+		uint64_t xor[2], or;				\
+								\
+		xor[0] = pkt_key[0] ^ bkt_key[0];		\
+		xor[1] = pkt_key[1] ^ bkt_key[1];		\
+		or = xor[0] | xor[1];				\
+		match_key = 0;					\
+		if (or == 0)					\
+			match_key = 1;				\
+	}							\
+	break;							\
+								\
+	case 32:						\
+	{							\
+		uint64_t xor[4], or;				\
+								\
+		xor[0] = pkt_key[0] ^ bkt_key[0];		\
+		xor[1] = pkt_key[1] ^ bkt_key[1];		\
+		xor[2] = pkt_key[2] ^ bkt_key[2];		\
+		xor[3] = pkt_key[3] ^ bkt_key[3];		\
+		or = xor[0] | xor[1] | xor[2] | xor[3];		\
+		match_key = 0;					\
+		if (or == 0)					\
+			match_key = 1;				\
+	}							\
+	break;							\
+								\
+	case 64:						\
+	{							\
+		uint64_t xor[8], or;				\
+								\
+		xor[0] = pkt_key[0] ^ bkt_key[0];		\
+		xor[1] = pkt_key[1] ^ bkt_key[1];		\
+		xor[2] = pkt_key[2] ^ bkt_key[2];		\
+		xor[3] = pkt_key[3] ^ bkt_key[3];		\
+		xor[4] = pkt_key[4] ^ bkt_key[4];		\
+		xor[5] = pkt_key[5] ^ bkt_key[5];		\
+		xor[6] = pkt_key[6] ^ bkt_key[6];		\
+		xor[7] = pkt_key[7] ^ bkt_key[7];		\
+		or = xor[0] | xor[1] | xor[2] | xor[3] |	\
+			xor[4] | xor[5] | xor[6] | xor[7];	\
+		match_key = 0;					\
+		if (or == 0)					\
+			match_key = 1;				\
+	}							\
+	break;							\
+								\
+	default:						\
+		match_key = 0;					\
+		if (memcmp(pkt_key, bkt_key, f->key_size) == 0)	\
+			match_key = 1;				\
+	}							\
+}
+
+#define lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index)\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+	struct rte_mbuf *mbuf00, *mbuf01;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+	mbuf00 = pkts[pkt00_index];				\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+	mbuf01 = pkts[pkt01_index];				\
+								\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask, pkt00_index, \
+	pkt01_index)						\
+{								\
+	uint64_t pkt00_mask, pkt01_mask;			\
+	struct rte_mbuf *mbuf00, *mbuf01;			\
+								\
+	pkt00_index = __builtin_ctzll(pkts_mask);		\
+	pkt00_mask = 1LLU << pkt00_index;			\
+	pkts_mask &= ~pkt00_mask;				\
+	mbuf00 = pkts[pkt00_index];				\
+								\
+	pkt01_index = __builtin_ctzll(pkts_mask);		\
+	if (pkts_mask == 0)					\
+		pkt01_index = pkt00_index;			\
+								\
+	pkt01_mask = 1LLU << pkt01_index;			\
+	pkts_mask &= ~pkt01_mask;				\
+	mbuf01 = pkts[pkt01_index];				\
+								\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf00, 0));	\
+	rte_prefetch0(RTE_MBUF_METADATA_UINT8_PTR(mbuf01, 0));	\
+}
+
+#define lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index)	\
+{								\
+	struct grinder *g10, *g11;				\
+	uint64_t sig10, sig11, bkt10_index, bkt11_index;	\
+	struct rte_mbuf *mbuf10, *mbuf11;			\
+	struct bucket *bkt10, *bkt11, *buckets = t->buckets;	\
+	uint64_t bucket_mask = t->bucket_mask;			\
+	uint32_t signature_offset = t->signature_offset;	\
+								\
+	mbuf10 = pkts[pkt10_index];				\
+	sig10 = (uint64_t) RTE_MBUF_METADATA_UINT32(mbuf10, signature_offset);\
+	bkt10_index = sig10 & bucket_mask;			\
+	bkt10 = &buckets[bkt10_index];				\
+								\
+	mbuf11 = pkts[pkt11_index];				\
+	sig11 = (uint64_t) RTE_MBUF_METADATA_UINT32(mbuf11, signature_offset);\
+	bkt11_index = sig11 & bucket_mask;			\
+	bkt11 = &buckets[bkt11_index];				\
+								\
+	rte_prefetch0(bkt10);					\
+	rte_prefetch0(bkt11);					\
+								\
+	g10 = &g[pkt10_index];					\
+	g10->sig = sig10;					\
+	g10->bkt = bkt10;					\
+								\
+	g11 = &g[pkt11_index];					\
+	g11->sig = sig11;					\
+	g11->bkt = bkt11;					\
+}
+
+#define lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index)\
+{								\
+	struct grinder *g10, *g11;				\
+	uint64_t sig10, sig11, bkt10_index, bkt11_index;	\
+	struct rte_mbuf *mbuf10, *mbuf11;			\
+	struct bucket *bkt10, *bkt11, *buckets = t->buckets;	\
+	uint8_t *key10, *key11;					\
+	uint64_t bucket_mask = t->bucket_mask;			\
+	rte_table_hash_op_hash f_hash = t->f_hash;		\
+	uint64_t seed = t->seed;				\
+	uint32_t key_size = t->key_size;			\
+	uint32_t key_offset = t->key_offset;			\
+								\
+	mbuf10 = pkts[pkt10_index];				\
+	key10 = RTE_MBUF_METADATA_UINT8_PTR(mbuf10, key_offset);\
+	sig10 = (uint64_t) f_hash(key10, key_size, seed);	\
+	bkt10_index = sig10 & bucket_mask;			\
+	bkt10 = &buckets[bkt10_index];				\
+								\
+	mbuf11 = pkts[pkt11_index];				\
+	key11 = RTE_MBUF_METADATA_UINT8_PTR(mbuf11, key_offset);\
+	sig11 = (uint64_t) f_hash(key11, key_size, seed);	\
+	bkt11_index = sig11 & bucket_mask;			\
+	bkt11 = &buckets[bkt11_index];				\
+								\
+	rte_prefetch0(bkt10);					\
+	rte_prefetch0(bkt11);					\
+								\
+	g10 = &g[pkt10_index];					\
+	g10->sig = sig10;					\
+	g10->bkt = bkt10;					\
+								\
+	g11 = &g[pkt11_index];					\
+	g11->sig = sig11;					\
+	g11->bkt = bkt11;					\
+}
+
+#define lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many)\
+{								\
+	struct grinder *g20, *g21;				\
+	uint64_t sig20, sig21;					\
+	struct bucket *bkt20, *bkt21;				\
+	uint8_t *key20, *key21, *key_mem = t->key_mem;		\
+	uint64_t match20, match21, match_many20, match_many21;	\
+	uint64_t match_pos20, match_pos21;			\
+	uint32_t key20_index, key21_index, key_size_shl = t->key_size_shl;\
+								\
+	g20 = &g[pkt20_index];					\
+	sig20 = g20->sig;					\
+	bkt20 = g20->bkt;					\
+	sig20 = (sig20 >> 16) | 1LLU;				\
+	lookup_cmp_sig(sig20, bkt20, match20, match_many20, match_pos20);\
+	match20 <<= pkt20_index;				\
+	match_many20 <<= pkt20_index;				\
+	key20_index = bkt20->key_pos[match_pos20];		\
+	key20 = &key_mem[key20_index << key_size_shl];		\
+								\
+	g21 = &g[pkt21_index];					\
+	sig21 = g21->sig;					\
+	bkt21 = g21->bkt;					\
+	sig21 = (sig21 >> 16) | 1LLU;				\
+	lookup_cmp_sig(sig21, bkt21, match21, match_many21, match_pos21);\
+	match21 <<= pkt21_index;				\
+	match_many21 <<= pkt21_index;				\
+	key21_index = bkt21->key_pos[match_pos21];		\
+	key21 = &key_mem[key21_index << key_size_shl];		\
+								\
+	rte_prefetch0(key20);					\
+	rte_prefetch0(key21);					\
+								\
+	pkts_mask_match_many |= match_many20 | match_many21;	\
+								\
+	g20->match = match20;					\
+	g20->match_pos = match_pos20;				\
+	g20->key_index = key20_index;				\
+								\
+	g21->match = match21;					\
+	g21->match_pos = match_pos21;				\
+	g21->key_index = key21_index;				\
+}
+
+#define lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out, \
+	entries)						\
+{								\
+	struct grinder *g30, *g31;				\
+	struct rte_mbuf *mbuf30, *mbuf31;			\
+	struct bucket *bkt30, *bkt31;				\
+	uint8_t *key30, *key31, *key_mem = t->key_mem;		\
+	uint8_t *data30, *data31, *data_mem = t->data_mem;	\
+	uint64_t match30, match31, match_pos30, match_pos31;	\
+	uint64_t match_key30, match_key31, match_keys;		\
+	uint32_t key30_index, key31_index;			\
+	uint32_t key_size_shl = t->key_size_shl;		\
+	uint32_t data_size_shl = t->data_size_shl;		\
+								\
+	mbuf30 = pkts[pkt30_index];				\
+	g30 = &g[pkt30_index];					\
+	bkt30 = g30->bkt;					\
+	match30 = g30->match;					\
+	match_pos30 = g30->match_pos;				\
+	key30_index = g30->key_index;				\
+	key30 = &key_mem[key30_index << key_size_shl];		\
+	lookup_cmp_key(mbuf30, key30, match_key30, t);		\
+	match_key30 <<= pkt30_index;				\
+	match_key30 &= match30;					\
+	data30 = &data_mem[key30_index << data_size_shl];	\
+	entries[pkt30_index] = data30;				\
+								\
+	mbuf31 = pkts[pkt31_index];				\
+	g31 = &g[pkt31_index];					\
+	bkt31 = g31->bkt;					\
+	match31 = g31->match;					\
+	match_pos31 = g31->match_pos;				\
+	key31_index = g31->key_index;				\
+	key31 = &key_mem[key31_index << key_size_shl];		\
+	lookup_cmp_key(mbuf31, key31, match_key31, t);		\
+	match_key31 <<= pkt31_index;				\
+	match_key31 &= match31;					\
+	data31 = &data_mem[key31_index << data_size_shl];	\
+	entries[pkt31_index] = data31;				\
+								\
+	rte_prefetch0(data30);					\
+	rte_prefetch0(data31);					\
+								\
+	match_keys = match_key30 | match_key31;			\
+	pkts_mask_out |= match_keys;				\
+								\
+	if (match_key30 == 0)					\
+		match_pos30 = 4;				\
+	lru_update(bkt30, match_pos30);				\
+								\
+	if (match_key31 == 0)					\
+		match_pos31 = 4;				\
+	lru_update(bkt31, match_pos31);				\
+}
+
+/***
+* The lookup function implements a 4-stage pipeline, with each stage processing
+* two different packets. The purpose of pipelined implementation is to hide the
+* latency of prefetching the data structures and loosen the data dependency
+* between instructions.
+*
+*   p00  _______   p10  _______   p20  _______   p30  _______
+* ----->|       |----->|       |----->|       |----->|       |----->
+*       |   0   |      |   1   |      |   2   |      |   3   |
+* ----->|_______|----->|_______|----->|_______|----->|_______|----->
+*   p01            p11            p21            p31
+*
+* The naming convention is:
+*	  pXY = packet Y of stage X, X = 0 .. 3, Y = 0 .. 1
+*
+***/
+static int rte_table_hash_lru_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct grinder *g = t->grinders;
+	uint64_t pkt00_index, pkt01_index, pkt10_index, pkt11_index;
+	uint64_t pkt20_index, pkt21_index, pkt30_index, pkt31_index;
+	uint64_t pkts_mask_out = 0, pkts_mask_match_many = 0;
+	int status = 0;
+
+	/* Cannot run the pipeline with less than 7 packets */
+	if (__builtin_popcountll(pkts_mask) < 7)
+		return rte_table_hash_lru_lookup_unoptimized(table, pkts,
+			pkts_mask, lookup_hit_mask, entries, 0);
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline feed */
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline feed */
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/*
+	* Pipeline run
+	*
+	*/
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		pkt30_index = pkt20_index;
+		pkt31_index = pkt21_index;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask,
+			pkt00_index, pkt01_index);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2(t, g, pkt20_index, pkt21_index,
+			pkts_mask_match_many);
+
+		/* Pipeline stage 3 */
+		lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index,
+			pkts_mask_out, entries);
+	}
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Slow path */
+	pkts_mask_match_many &= ~pkts_mask_out;
+	if (pkts_mask_match_many) {
+		uint64_t pkts_mask_out_slow = 0;
+
+		status = rte_table_hash_lru_lookup_unoptimized(table, pkts,
+			pkts_mask_match_many, &pkts_mask_out_slow, entries, 0);
+		pkts_mask_out |= pkts_mask_out_slow;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return status;
+}
+
+static int rte_table_hash_lru_lookup_dosig(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_hash *t = (struct rte_table_hash *) table;
+	struct grinder *g = t->grinders;
+	uint64_t pkt00_index, pkt01_index, pkt10_index, pkt11_index;
+	uint64_t pkt20_index, pkt21_index, pkt30_index, pkt31_index;
+	uint64_t pkts_mask_out = 0, pkts_mask_match_many = 0;
+	int status = 0;
+
+	/* Cannot run the pipeline with less than 7 packets */
+	if (__builtin_popcountll(pkts_mask) < 7)
+		return rte_table_hash_lru_lookup_unoptimized(table, pkts,
+			pkts_mask, lookup_hit_mask, entries, 1);
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline feed */
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline feed */
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 0 */
+	lookup2_stage0(t, g, pkts, pkts_mask, pkt00_index, pkt01_index);
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/*
+	* Pipeline run
+	*
+	*/
+	for ( ; pkts_mask; ) {
+		/* Pipeline feed */
+		pkt30_index = pkt20_index;
+		pkt31_index = pkt21_index;
+		pkt20_index = pkt10_index;
+		pkt21_index = pkt11_index;
+		pkt10_index = pkt00_index;
+		pkt11_index = pkt01_index;
+
+		/* Pipeline stage 0 */
+		lookup2_stage0_with_odd_support(t, g, pkts, pkts_mask,
+			pkt00_index, pkt01_index);
+
+		/* Pipeline stage 1 */
+		lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+		/* Pipeline stage 2 */
+		lookup2_stage2(t, g, pkt20_index, pkt21_index,
+			pkts_mask_match_many);
+
+		/* Pipeline stage 3 */
+		lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index,
+			pkts_mask_out, entries);
+	}
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+	pkt10_index = pkt00_index;
+	pkt11_index = pkt01_index;
+
+	/* Pipeline stage 1 */
+	lookup2_stage1_dosig(t, g, pkts, pkt10_index, pkt11_index);
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+	pkt20_index = pkt10_index;
+	pkt21_index = pkt11_index;
+
+	/* Pipeline stage 2 */
+	lookup2_stage2(t, g, pkt20_index, pkt21_index, pkts_mask_match_many);
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Pipeline feed */
+	pkt30_index = pkt20_index;
+	pkt31_index = pkt21_index;
+
+	/* Pipeline stage 3 */
+	lookup2_stage3(t, g, pkts, pkt30_index, pkt31_index, pkts_mask_out,
+		entries);
+
+	/* Slow path */
+	pkts_mask_match_many &= ~pkts_mask_out;
+	if (pkts_mask_match_many) {
+		uint64_t pkts_mask_out_slow = 0;
+
+		status = rte_table_hash_lru_lookup_unoptimized(table, pkts,
+			pkts_mask_match_many, &pkts_mask_out_slow, entries, 1);
+		pkts_mask_out |= pkts_mask_out_slow;
+	}
+
+	*lookup_hit_mask = pkts_mask_out;
+	return status;
+}
+
+struct rte_table_ops rte_table_hash_lru_ops = {
+	.f_create = rte_table_hash_lru_create,
+	.f_free = rte_table_hash_lru_free,
+	.f_add = rte_table_hash_lru_entry_add,
+	.f_delete = rte_table_hash_lru_entry_delete,
+	.f_lookup = rte_table_hash_lru_lookup,
+};
+
+struct rte_table_ops rte_table_hash_lru_dosig_ops = {
+	.f_create = rte_table_hash_lru_create,
+	.f_free = rte_table_hash_lru_free,
+	.f_add = rte_table_hash_lru_entry_add,
+	.f_delete = rte_table_hash_lru_entry_delete,
+	.f_lookup = rte_table_hash_lru_lookup_dosig,
+};
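
As a sketch of how this table is configured (field names and constraints
follow check_params_create() above; my_hash and the sizes below are
illustrative):

	#include "rte_table_hash.h"

	static void *
	lru_create_example(rte_table_hash_op_hash my_hash, int socket_id)
	{
		struct rte_table_hash_lru_params params = {
			.key_size = 16,        /* power of 2 */
			.n_keys = 1 << 16,     /* power of 2 */
			.n_buckets = 1 << 14,  /* >= n_keys / 4, power of 2 */
			.f_hash = my_hash,
			.seed = 0,
			.signature_offset = 0, /* 4-byte aligned */
			.key_offset = 32,      /* 8-byte aligned */
		};

		/* entry_size (16 here) must be a power of 2 */
		return rte_table_hash_lru_ops.f_create(&params, socket_id, 16);
	}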
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 16/23] Packet Framework librte_table: array table
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (14 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 15/23] Packet Framework librte_table: Hash tables Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 17/23] Packet Framework librte_table: Stub table Cristian Dumitrescu
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Packet Framework array tables.
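
A minimal setup sketch (types are defined in rte_table_array.h below; the
wrapper name and sizes are illustrative):

	#include "rte_table_array.h"

	static void *
	array_setup_example(int socket_id, void *entry, uint32_t entry_size)
	{
		struct rte_table_array_params params = {
			.n_entries = 1 << 10, /* must be a power of 2 */
			.offset = 0,          /* 4-byte aligned meta-data offset
					       * of the 32-bit entry index */
		};
		struct rte_table_array_key key = { .pos = 5 };
		void *table, *entry_ptr;
		int key_found;

		table = rte_table_array_ops.f_create(&params, socket_id,
			entry_size);
		if (table == NULL)
			return NULL;

		/* Write entry 5 in place; the array table always reports
		 * key_found = 1 on add. */
		rte_table_array_ops.f_add(table, &key, entry, &key_found,
			&entry_ptr);

		return table;
	}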

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_table_array.c |  204 ++++++++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_array.h |   76 +++++++++++++
 2 files changed, 280 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table_array.c
 create mode 100644 lib/librte_table/rte_table_array.h

diff --git a/lib/librte_table/rte_table_array.c b/lib/librte_table/rte_table_array.c
new file mode 100644
index 0000000..f0f5e1e
--- /dev/null
+++ b/lib/librte_table/rte_table_array.c
@@ -0,0 +1,204 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include "rte_table_array.h"
+
+struct rte_table_array {
+	/* Input parameters */
+	uint32_t entry_size;
+	uint32_t n_entries;
+	uint32_t offset;
+
+	/* Internal fields */
+	uint32_t entry_pos_mask;
+
+	/* Internal table */
+	uint8_t array[0] __rte_cache_aligned;
+} __rte_cache_aligned;
+
+static void *
+rte_table_array_create(void *params, int socket_id, uint32_t entry_size)
+{
+	struct rte_table_array_params *p =
+		(struct rte_table_array_params *) params;
+	struct rte_table_array *t;
+	uint32_t total_cl_size, total_size;
+
+	/* Check input parameters */
+	if ((p == NULL) ||
+	    (p->n_entries == 0) ||
+		(!rte_is_power_of_2(p->n_entries)) ||
+		((p->offset & 0x3) != 0)) {
+		return NULL;
+	}
+
+	/* Memory allocation */
+	total_cl_size = (sizeof(struct rte_table_array) +
+			CACHE_LINE_SIZE) / CACHE_LINE_SIZE;
+	total_cl_size += (p->n_entries * entry_size +
+			CACHE_LINE_SIZE) / CACHE_LINE_SIZE;
+	total_size = total_cl_size * CACHE_LINE_SIZE;
+	t = rte_zmalloc_socket("TABLE", total_size, CACHE_LINE_SIZE, socket_id);
+	if (t == NULL) {
+		RTE_LOG(ERR, TABLE,
+			"%s: Cannot allocate %u bytes for array table\n",
+			__func__, total_size);
+		return NULL;
+	}
+
+	/* Memory initialization */
+	t->entry_size = entry_size;
+	t->n_entries = p->n_entries;
+	t->offset = p->offset;
+	t->entry_pos_mask = t->n_entries - 1;
+
+	return t;
+}
+
+static int
+rte_table_array_free(void *table)
+{
+	struct rte_table_array *t = (struct rte_table_array *) table;
+
+	/* Check input parameters */
+	if (t == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Free previously allocated resources */
+	rte_free(t);
+
+	return 0;
+}
+
+static int
+rte_table_array_entry_add(
+	void *table,
+	void *key,
+	void *entry,
+	int *key_found,
+	void **entry_ptr)
+{
+	struct rte_table_array *t = (struct rte_table_array *) table;
+	struct rte_table_array_key *k = (struct rte_table_array_key *) key;
+	uint8_t *table_entry;
+
+	/* Check input parameters */
+	if (table == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (entry == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (key_found == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (entry_ptr == NULL) {
+		RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	table_entry = &t->array[k->pos * t->entry_size];
+	memcpy(table_entry, entry, t->entry_size);
+	*key_found = 1;
+	*entry_ptr = (void *) table_entry;
+
+	return 0;
+}
+
+static int
+rte_table_array_lookup(
+	void *table,
+	struct rte_mbuf **pkts,
+	uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	void **entries)
+{
+	struct rte_table_array *t = (struct rte_table_array *) table;
+
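+	/* Fast path: if pkts_mask + 1 is a power of two, the mask is a
+	 * contiguous run of ones starting at bit 0, so the packets can be
+	 * walked sequentially; otherwise scan the sparse mask bit by bit.
+	 */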
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = pkts[i];
+			uint32_t entry_pos = RTE_MBUF_METADATA_UINT32(pkt,
+				t->offset) & t->entry_pos_mask;
+
+			entries[i] = (void *) &t->array[entry_pos *
+				t->entry_size];
+		}
+	} else {
+		for ( ; pkts_mask; ) {
+			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
+			uint64_t pkt_mask = 1LLU << pkt_index;
+			struct rte_mbuf *pkt = pkts[pkt_index];
+			uint32_t entry_pos = RTE_MBUF_METADATA_UINT32(pkt,
+				t->offset) & t->entry_pos_mask;
+
+			entries[pkt_index] = (void *) &t->array[entry_pos *
+				t->entry_size];
+			pkts_mask &= ~pkt_mask;
+		}
+	}
+
+	*lookup_hit_mask = pkts_mask;
+
+	return 0;
+}
+
+struct rte_table_ops rte_table_array_ops = {
+	.f_create = rte_table_array_create,
+	.f_free = rte_table_array_free,
+	.f_add = rte_table_array_entry_add,
+	.f_delete = NULL,
+	.f_lookup = rte_table_array_lookup,
+};
diff --git a/lib/librte_table/rte_table_array.h b/lib/librte_table/rte_table_array.h
new file mode 100644
index 0000000..9521119
--- /dev/null
+++ b/lib/librte_table/rte_table_array.h
@@ -0,0 +1,76 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_ARRAY_H__
+#define __INCLUDE_RTE_TABLE_ARRAY_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table Array
+ *
+ * Simple array indexing. Lookup key is the array entry index.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_table.h"
+
+/** Array table parameters */
+struct rte_table_array_params {
+	/** Number of array entries. Has to be a power of two. */
+	uint32_t n_entries;
+
+	/** Byte offset within input packet meta-data where lookup key (i.e. the
+	    array entry index) is located. */
+	uint32_t offset;
+};
+
+/** Array table key format */
+struct rte_table_array_key {
+	/** Array entry index */
+	uint32_t pos;
+};
+
+/** Array table operations */
+extern struct rte_table_ops rte_table_array_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 17/23] Packet Framework librte_table: Stub table
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (15 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 16/23] Packet Framework librte_table: array table Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 18/23] Packet Framework librte_table: Build infrastructure Cristian Dumitrescu
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The stub table is a simple implementation of the Packet Framework table API that produces a lookup miss for every input packet.

It is used as a simple cable-type forwarder by the Packet Framework pipeline library.
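
A minimal sketch of driving it (the wrapper name is illustrative): lookup
always reports zero hits, so every packet takes the default (lookup miss)
path.

	#include <rte_mbuf.h>
	#include "rte_table_stub.h"

	static uint64_t
	stub_lookup_example(struct rte_mbuf **pkts, uint64_t pkts_mask,
		void **entries)
	{
		/* params, socket_id and entry_size are all ignored */
		void *table = rte_table_stub_ops.f_create(NULL, 0, 0);
		uint64_t hit_mask;

		rte_table_stub_ops.f_lookup(table, pkts, pkts_mask,
			&hit_mask, entries);

		return hit_mask; /* always 0 */
	}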

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_table/rte_table_stub.c |   65 +++++++++++++++++++++++++++++++++++++
 lib/librte_table/rte_table_stub.h |   62 +++++++++++++++++++++++++++++++++++
 2 files changed, 127 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_table/rte_table_stub.c
 create mode 100644 lib/librte_table/rte_table_stub.h

diff --git a/lib/librte_table/rte_table_stub.c b/lib/librte_table/rte_table_stub.c
new file mode 100644
index 0000000..876b7e4
--- /dev/null
+++ b/lib/librte_table/rte_table_stub.c
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_mbuf.h>
+
+#include "rte_table_stub.h"
+
+static void *
+rte_table_stub_create(__rte_unused void *params,
+		__rte_unused int socket_id,
+		__rte_unused uint32_t entry_size)
+{
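+	/* No table state to allocate, return a non-NULL dummy handle so that
+	creation reports success */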
+	return (void *) 1;
+}
+
+static int
+rte_table_stub_lookup(
+	__rte_unused void *table,
+	__rte_unused struct rte_mbuf **pkts,
+	__rte_unused uint64_t pkts_mask,
+	uint64_t *lookup_hit_mask,
+	__rte_unused void **entries)
+{
+	*lookup_hit_mask = 0;
+
+	return 0;
+}
+
+struct rte_table_ops rte_table_stub_ops = {
+	.f_create = rte_table_stub_create,
+	.f_free = NULL,
+	.f_add = NULL,
+	.f_delete = NULL,
+	.f_lookup = rte_table_stub_lookup,
+};
diff --git a/lib/librte_table/rte_table_stub.h b/lib/librte_table/rte_table_stub.h
new file mode 100644
index 0000000..e75340b
--- /dev/null
+++ b/lib/librte_table/rte_table_stub.h
@@ -0,0 +1,62 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TABLE_STUB_H__
+#define __INCLUDE_RTE_TABLE_STUB_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Table Stub
+ *
+ * The stub table lookup operation produces a lookup miss for all input
+ * packets.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include "rte_table.h"
+
+/** Stub table parameters: NONE */
+
+/** Stub table operations */
+extern struct rte_table_ops rte_table_stub_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 18/23] Packet Framework librte_table: Build infrastructure
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (16 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 17/23] Packet Framework librte_table: Stub table Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 19/23] Packet Framework librte_pipeline: Pipeline Cristian Dumitrescu
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Makefile and build infrastructure for the Packet Framework table library.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 config/common_bsdapp      |    5 +++
 config/common_linuxapp    |    5 +++
 doc/doxy-api-index.md     |    9 ++++-
 doc/doxy-api.conf         |    1 +
 lib/Makefile              |    1 +
 lib/librte_table/Makefile |   85 +++++++++++++++++++++++++++++++++++++++++++++
 mk/rte.app.mk             |    4 ++
 7 files changed, 109 insertions(+), 1 deletions(-)
 create mode 100644 lib/librte_table/Makefile

diff --git a/config/common_bsdapp b/config/common_bsdapp
index e1cc356..c86b03c 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -305,3 +305,8 @@ CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=n
 # Compile librte_port
 #
 CONFIG_RTE_LIBRTE_PORT=y
+
+#
+# Compile librte_table
+#
+CONFIG_RTE_LIBRTE_TABLE=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index ef0f65e..a3a5761 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -341,3 +341,8 @@ CONFIG_RTE_NIC_BYPASS=n
 # Compile librte_port
 #
 CONFIG_RTE_LIBRTE_PORT=y
+
+#
+# Compile librte_table
+#
+CONFIG_RTE_LIBRTE_TABLE=y
diff --git a/doc/doxy-api-index.md b/doc/doxy-api-index.md
index 3e74ea6..a49ee77 100644
--- a/doc/doxy-api-index.md
+++ b/doc/doxy-api-index.md
@@ -92,7 +92,14 @@ There are many libraries, so their headers may be grouped by topics:
   [port IPv4 fragmentation] (@ref rte_port_frag.h),
   [port IPv4 reassembly]    (@ref rte_port_ras.h),
   [port scheduler]          (@ref rte_port_sched.h),
-  [port source/sink]        (@ref rte_port_source_sink.h)
+  [port source/sink]        (@ref rte_port_source_sink.h),
+  [table]                   (@ref rte_table.h),
+  [table ACL]               (@ref rte_table_acl.h),
+  [table array]             (@ref rte_table_array.h),
+  [table hash]              (@ref rte_table_hash.h),
+  [table lpm IPv4]          (@ref rte_table_lpm.h),
+  [table lpm IPv6]          (@ref rte_table_lpm_ipv6.h),
+  [table stub]              (@ref rte_table_stub.h)
 
 - **hashes**:
   [hash]               (@ref rte_hash.h),
diff --git a/doc/doxy-api.conf b/doc/doxy-api.conf
index 4f280bf..5a456b7 100644
--- a/doc/doxy-api.conf
+++ b/doc/doxy-api.conf
@@ -45,6 +45,7 @@ INPUT                   = doc/doxy-api-index.md \
                           lib/librte_power \
                           lib/librte_ring \
                           lib/librte_sched \
+                          lib/librte_table \
                           lib/librte_timer
 FILE_PATTERNS           = rte_*.h \
                           cmdline.h
diff --git a/lib/Makefile b/lib/Makefile
index 654968e..4246e2f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -56,6 +56,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
 DIRS-$(CONFIG_RTE_LIBRTE_ACL) += librte_acl
 DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
+DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table
 
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
new file mode 100644
index 0000000..ca38e75
--- /dev/null
+++ b/lib/librte_table/Makefile
@@ -0,0 +1,85 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_table.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_stub.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_array.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_hash_ext.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_hash_lru.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_hash_key8.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_hash_key16.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_hash_key32.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_lpm.c
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_lpm_ipv6.c
+
+ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
+SRCS-$(CONFIG_RTE_LIBRTE_TABLE) += rte_table_acl.c
+endif
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_stub.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_array.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_hash.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_lpm.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_lpm_ipv6.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_lru.h
+
+ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
+SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_acl.h
+endif
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) := lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_hash
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_lpm
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_port
+
+ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_acl
+endif
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index e67326b..a11812b 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -73,6 +73,10 @@ LDLIBS += -lrte_ivshmem
 endif
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_TABLE),y)
+LDLIBS += -lrte_table
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_PORT),y)
 LDLIBS += -lrte_port
 endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 19/23] Packet Framework librte_pipeline: Pipeline
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (17 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 18/23] Packet Framework librte_table: Build infrastructure Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files Cristian Dumitrescu
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

The Packet Framework pipeline library provides a standard methodology (logically similar to OpenFlow) for rapid development of complex packet processing pipelines out of ports, tables and actions.

A pipeline is constructed by connecting its input ports to its output ports through a chain of lookup tables. As a result of the lookup operation into the current table, one of the table entries (or the default table entry, in case of lookup miss) is identified to provide the actions to be executed on the current packet and the associated action meta-data.

The behavior of user actions is defined through the configurable table action handler, while the reserved actions define the next hop for the current packet (either another table, an output port or packet drop) and are handled transparently by the framework.

Please check the Intel DPDK Programmer's Guide for more details.
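
As a sketch of the intended call flow (illustrative only; the port and table creation parameters and error handling are elided):

	struct rte_pipeline_params params = {
		.name = "example",
		.socket_id = rte_socket_id(),
		.offset_port_id = 0,
	};
	struct rte_pipeline *p = rte_pipeline_create(&params);

	/* Create ports and tables, parameters elided */
	rte_pipeline_port_in_create(p, &port_in_params, &port_in_id);
	rte_pipeline_port_out_create(p, &port_out_params, &port_out_id);
	rte_pipeline_table_create(p, &table_params, &table_id);

	/* Connect each input port to its first table, then enable it */
	rte_pipeline_port_in_connect_to_table(p, port_in_id, table_id);
	rte_pipeline_port_in_enable(p, port_in_id);

	/* Consistency check, then run from the core main loop */
	if (rte_pipeline_check(p) == 0)
		while (1)
			rte_pipeline_run(p);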

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 config/common_bsdapp                    |    5 +
 config/common_linuxapp                  |    5 +
 doc/doxy-api-index.md                   |    3 +-
 doc/doxy-api.conf                       |    1 +
 lib/Makefile                            |    1 +
 lib/librte_eal/common/include/rte_log.h |    1 +
 lib/librte_pipeline/Makefile            |   54 ++
 lib/librte_pipeline/rte_pipeline.c      | 1373 +++++++++++++++++++++++++++++++
 lib/librte_pipeline/rte_pipeline.h      |  664 +++++++++++++++
 mk/rte.app.mk                           |    4 +
 10 files changed, 2110 insertions(+), 1 deletions(-)
 create mode 100644 lib/librte_pipeline/Makefile
 create mode 100644 lib/librte_pipeline/rte_pipeline.c
 create mode 100644 lib/librte_pipeline/rte_pipeline.h

diff --git a/config/common_bsdapp b/config/common_bsdapp
index c86b03c..565fcb6 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -310,3 +310,8 @@ CONFIG_RTE_LIBRTE_PORT=y
 # Compile librte_table
 #
 CONFIG_RTE_LIBRTE_TABLE=y
+
+#
+# Compile librte_pipeline
+#
+CONFIG_RTE_LIBRTE_PIPELINE=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index a3a5761..e52f163 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -346,3 +346,8 @@ CONFIG_RTE_LIBRTE_PORT=y
 # Compile librte_table
 #
 CONFIG_RTE_LIBRTE_TABLE=y
+
+#
+# Compile librte_pipeline
+#
+CONFIG_RTE_LIBRTE_PIPELINE=y
diff --git a/doc/doxy-api-index.md b/doc/doxy-api-index.md
index a49ee77..a873d3a 100644
--- a/doc/doxy-api-index.md
+++ b/doc/doxy-api-index.md
@@ -99,7 +99,8 @@ There are many libraries, so their headers may be grouped by topics:
   [table hash]              (@ref rte_table_hash.h),
   [table lpm IPv4]          (@ref rte_table_lpm.h),
   [table lpm IPv6]          (@ref rte_table_lpm_ipv6.h),
-  [table stub]              (@ref rte_table_stub.h)
+  [table stub]              (@ref rte_table_stub.h),
+  [pipeline]                (@ref rte_pipeline.h)
 
 - **hashes**:
   [hash]               (@ref rte_hash.h),
diff --git a/doc/doxy-api.conf b/doc/doxy-api.conf
index 5a456b7..eb9745f 100644
--- a/doc/doxy-api.conf
+++ b/doc/doxy-api.conf
@@ -41,6 +41,7 @@ INPUT                   = doc/doxy-api-index.md \
                           lib/librte_mempool \
                           lib/librte_meter \
                           lib/librte_net \
+                          lib/librte_pipeline \
                           lib/librte_port \
                           lib/librte_power \
                           lib/librte_ring \
diff --git a/lib/Makefile b/lib/Makefile
index 4246e2f..f29d66e 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -57,6 +57,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ACL) += librte_acl
 DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
 DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table
+DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += librte_pipeline
 
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index d119815..1a22326 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -76,6 +76,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_SCHED   0x00001000 /**< Log related to QoS port scheduler. */
 #define RTE_LOGTYPE_PORT    0x00002000 /**< Log related to port. */
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
+#define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_pipeline/Makefile b/lib/librte_pipeline/Makefile
new file mode 100644
index 0000000..cf8fde8
--- /dev/null
+++ b/lib/librte_pipeline/Makefile
@@ -0,0 +1,54 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pipeline.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) := rte_pipeline.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_PIPELINE)-include += rte_pipeline.h
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) := lib/librte_table
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += lib/librte_port
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pipeline/rte_pipeline.c b/lib/librte_pipeline/rte_pipeline.c
new file mode 100644
index 0000000..1de4c09
--- /dev/null
+++ b/lib/librte_pipeline/rte_pipeline.c
@@ -0,0 +1,1373 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+#include "rte_pipeline.h"
+
+#define RTE_TABLE_INVALID                                 UINT32_MAX
+
+struct rte_port_in {
+	/* Input parameters */
+	struct rte_port_in_ops ops;
+	rte_pipeline_port_in_action_handler f_action;
+	void *arg_ah;
+	uint32_t burst_size;
+
+	/* The table to which this port is connected */
+	uint32_t table_id;
+
+	/* Handle to low-level port */
+	void *h_port;
+
+	/* Next port in the list of enabled input ports */
+	struct rte_port_in *next;
+};
+
+struct rte_port_out {
+	/* Input parameters */
+	struct rte_port_out_ops ops;
+	rte_pipeline_port_out_action_handler f_action;
+	rte_pipeline_port_out_action_handler_bulk f_action_bulk;
+	void *arg_ah;
+
+	/* Handle to low-level port */
+	void *h_port;
+};
+
+struct rte_table {
+	/* Input parameters */
+	struct rte_table_ops ops;
+	rte_pipeline_table_action_handler_hit f_action_hit;
+	rte_pipeline_table_action_handler_miss f_action_miss;
+	void *arg_ah;
+	struct rte_pipeline_table_entry *default_entry;
+	uint32_t entry_size;
+
+	uint32_t table_next_id;
+	uint32_t table_next_id_valid;
+
+	/* Handle to the low-level table object */
+	void *h_table;
+};
+
+#define RTE_PIPELINE_MAX_NAME_SZ                           124
+
+struct rte_pipeline {
+	/* Input parameters */
+	char name[RTE_PIPELINE_MAX_NAME_SZ];
+	int socket_id;
+	uint32_t offset_port_id;
+
+	/* Internal tables */
+	struct rte_port_in ports_in[RTE_PIPELINE_PORT_IN_MAX];
+	struct rte_port_out ports_out[RTE_PIPELINE_PORT_OUT_MAX];
+	struct rte_table tables[RTE_PIPELINE_TABLE_MAX];
+
+	/* Occupancy of internal tables */
+	uint32_t num_ports_in;
+	uint32_t num_ports_out;
+	uint32_t num_tables;
+
+	/* List of enabled ports */
+	uint64_t enabled_port_in_mask;
+	struct rte_port_in *port_in_first;
+
+	/* Pipeline run structures */
+	struct rte_mbuf *pkts[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_pipeline_table_entry *entries[RTE_PORT_IN_BURST_SIZE_MAX];
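+
+	/* Per reserved action packet masks: action_mask0 accumulates packets
+	across the table chain of the current input port, action_mask1 is
+	scratch space recomputed for each lookup hit set */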
+	uint64_t action_mask0[RTE_PIPELINE_ACTIONS];
+	uint64_t action_mask1[RTE_PIPELINE_ACTIONS];
+} __rte_cache_aligned;
+
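+/*
+ * Circular search for the next/previous set bit of a 64-bit mask: the mask
+ * is rotated so that the search starts right after (respectively right
+ * before) position pos, then a single count-trailing-zeros (respectively
+ * count-leading-zeros) locates the closest enabled bit, with wrap-around.
+ * For example, with mask = 0x5 (bits 0 and 2 set) and pos = 2, both the
+ * next and the previous set bit are bit 0.
+ */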
+static inline uint32_t
+rte_mask_get_next(uint64_t mask, uint32_t pos)
+{
+	uint64_t mask_rot = (mask << ((63 - pos) & 0x3F)) |
+			(mask >> ((pos + 1) & 0x3F));
+	return (__builtin_ctzll(mask_rot) - (63 - pos)) & 0x3F;
+}
+
+static inline uint32_t
+rte_mask_get_prev(uint64_t mask, uint32_t pos)
+{
+	uint64_t mask_rot = (mask >> (pos & 0x3F)) |
+			(mask << ((64 - pos) & 0x3F));
+	return ((63 - __builtin_clzll(mask_rot)) + pos) & 0x3F;
+}
+
+static void
+rte_pipeline_table_free(struct rte_table *table);
+
+static void
+rte_pipeline_port_in_free(struct rte_port_in *port);
+
+static void
+rte_pipeline_port_out_free(struct rte_port_out *port);
+
+/*
+ * Pipeline
+ *
+ */
+static int
+rte_pipeline_check_params(struct rte_pipeline_params *params)
+{
+	if (params == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Incorrect value for parameter params\n", __func__);
+		return -EINVAL;
+	}
+
+	/* name */
+	if (params->name == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Incorrect value for parameter name\n", __func__);
+		return -EINVAL;
+	}
+
+	/* socket */
+	if ((params->socket_id < 0) ||
+	    (params->socket_id >= RTE_MAX_NUMA_NODES)) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Incorrect value for parameter socket_id\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* offset_port_id */
+	if (params->offset_port_id & 0x3) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Incorrect value for parameter offset_port_id\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+struct rte_pipeline *
+rte_pipeline_create(struct rte_pipeline_params *params)
+{
+	struct rte_pipeline *p;
+	int status;
+
+	/* Check input parameters */
+	status = rte_pipeline_check_params(params);
+	if (status != 0) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Pipeline params check failed (%d)\n",
+			__func__, status);
+		return NULL;
+	}
+
+	/* Allocate memory for the pipeline on requested socket */
+	p = rte_zmalloc_socket("PIPELINE", sizeof(struct rte_pipeline),
+			CACHE_LINE_SIZE, params->socket_id);
+
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Pipeline memory allocation failed\n", __func__);
+		return NULL;
+	}
+
+	/* Save input parameters */
+	rte_snprintf(p->name, RTE_PIPELINE_MAX_NAME_SZ, "%s", params->name);
+	p->socket_id = params->socket_id;
+	p->offset_port_id = params->offset_port_id;
+
+	/* Initialize pipeline internal data structure */
+	p->num_ports_in = 0;
+	p->num_ports_out = 0;
+	p->num_tables = 0;
+	p->enabled_port_in_mask = 0;
+	p->port_in_first = NULL;
+
+	return p;
+}
+
+int
+rte_pipeline_free(struct rte_pipeline *p)
+{
+	uint32_t i;
+
+	/* Check input parameters */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: rte_pipeline parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Free input ports */
+	for (i = 0; i < p->num_ports_in; i++) {
+		struct rte_port_in *port = &p->ports_in[i];
+
+		rte_pipeline_port_in_free(port);
+	}
+
+	/* Free tables */
+	for (i = 0; i < p->num_tables; i++) {
+		struct rte_table *table = &p->tables[i];
+
+		rte_pipeline_table_free(table);
+	}
+
+	/* Free output ports */
+	for (i = 0; i < p->num_ports_out; i++) {
+		struct rte_port_out *port = &p->ports_out[i];
+
+		rte_pipeline_port_out_free(port);
+	}
+
+	/* Free pipeline memory */
+	rte_free(p);
+
+	return 0;
+}
+
+/*
+ * Table
+ *
+ */
+static int
+rte_table_check_params(struct rte_pipeline *p,
+		struct rte_pipeline_table_params *params,
+		uint32_t *table_id)
+{
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (params == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (table_id == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* ops */
+	if (params->ops == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_create == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_create function pointer is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_lookup == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_lookup function pointer is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Do we have room for one more table? */
+	if (p->num_tables == RTE_PIPELINE_TABLE_MAX) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Incorrect value for num_tables parameter\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+rte_pipeline_table_create(struct rte_pipeline *p,
+		struct rte_pipeline_table_params *params,
+		uint32_t *table_id)
+{
+	struct rte_table *table;
+	struct rte_pipeline_table_entry *default_entry;
+	void *h_table;
+	uint32_t entry_size, id;
+	int status;
+
+	/* Check input arguments */
+	status = rte_table_check_params(p, params, table_id);
+	if (status != 0)
+		return status;
+
+	id = p->num_tables;
+	table = &p->tables[id];
+
+	/* Allocate space for the default table entry */
+	entry_size = sizeof(struct rte_pipeline_table_entry) +
+		params->action_data_size;
+	default_entry = (struct rte_pipeline_table_entry *) rte_zmalloc_socket(
+		"PIPELINE", entry_size, CACHE_LINE_SIZE, p->socket_id);
+	if (default_entry == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Failed to allocate default entry\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Create the table */
+	h_table = params->ops->f_create(params->arg_create, p->socket_id,
+		entry_size);
+	if (h_table == NULL) {
+		rte_free(default_entry);
+		RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Commit current table to the pipeline */
+	p->num_tables++;
+	*table_id = id;
+
+	/* Save input parameters */
+	memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops));
+	table->f_action_hit = params->f_action_hit;
+	table->f_action_miss = params->f_action_miss;
+	table->arg_ah = params->arg_ah;
+	table->entry_size = entry_size;
+
+	/* Clear the lookup miss actions (to be set later through API) */
+	table->default_entry = default_entry;
+	table->default_entry->action = RTE_PIPELINE_ACTION_DROP;
+
+	/* Initialize table internal data structure */
+	table->h_table = h_table;
+	table->table_next_id = 0;
+	table->table_next_id_valid = 0;
+
+	return 0;
+}
+
+void
+rte_pipeline_table_free(struct rte_table *table)
+{
+	if (table->ops.f_free != NULL)
+		table->ops.f_free(table->h_table);
+
+	rte_free(table->default_entry);
+}
+
+int
+rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
+	uint32_t table_id,
+	struct rte_pipeline_table_entry *default_entry,
+	struct rte_pipeline_table_entry **default_entry_ptr)
+{
+	struct rte_table *table;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (default_entry == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: default_entry parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (table_id >= p->num_tables) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: table_id %d out of range\n", __func__, table_id);
+		return -EINVAL;
+	}
+
+	table = &p->tables[table_id];
+
+	if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) &&
+		table->table_next_id_valid &&
+		(default_entry->table_id != table->table_next_id)) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Tree-like topologies not allowed\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Set the lookup miss actions */
+	if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) &&
+		(table->table_next_id_valid == 0)) {
+		table->table_next_id = default_entry->table_id;
+		table->table_next_id_valid = 1;
+	}
+
+	memcpy(table->default_entry, default_entry, table->entry_size);
+
+	*default_entry_ptr = table->default_entry;
+	return 0;
+}
+
+int
+rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
+		uint32_t table_id,
+		struct rte_pipeline_table_entry *entry)
+{
+	struct rte_table *table;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: pipeline parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (table_id >= p->num_tables) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: table_id %d out of range\n", __func__, table_id);
+		return -EINVAL;
+	}
+
+	table = &p->tables[table_id];
+
+	/* Save the current contents of the default entry */
+	if (entry)
+		memcpy(entry, table->default_entry, table->entry_size);
+
+	/* Clear the lookup miss actions */
+	memset(table->default_entry, 0, table->entry_size);
+	table->default_entry->action = RTE_PIPELINE_ACTION_DROP;
+
+	return 0;
+}
+
+int
+rte_pipeline_table_entry_add(struct rte_pipeline *p,
+		uint32_t table_id,
+		void *key,
+		struct rte_pipeline_table_entry *entry,
+		int *key_found,
+		struct rte_pipeline_table_entry **entry_ptr)
+{
+	struct rte_table *table;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (key == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (entry == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (table_id >= p->num_tables) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: table_id %d out of range\n", __func__, table_id);
+		return -EINVAL;
+	}
+
+	table = &p->tables[table_id];
+
+	if (table->ops.f_add == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if ((entry->action == RTE_PIPELINE_ACTION_TABLE) &&
+		table->table_next_id_valid &&
+		(entry->table_id != table->table_next_id)) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Tree-like topologies not allowed\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Add entry */
+	if ((entry->action == RTE_PIPELINE_ACTION_TABLE) &&
+		(table->table_next_id_valid == 0)) {
+		table->table_next_id = entry->table_id;
+		table->table_next_id_valid = 1;
+	}
+
+	return (table->ops.f_add)(table->h_table, key, (void *) entry,
+		key_found, (void **) entry_ptr);
+}
+
+int
+rte_pipeline_table_entry_delete(struct rte_pipeline *p,
+		uint32_t table_id,
+		void *key,
+		int *key_found,
+		struct rte_pipeline_table_entry *entry)
+{
+	struct rte_table *table;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (key == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (table_id >= p->num_tables) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: table_id %d out of range\n", __func__, table_id);
+		return -EINVAL;
+	}
+
+	table = &p->tables[table_id];
+
+	if (table->ops.f_delete == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_delete function pointer NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	return (table->ops.f_delete)(table->h_table, key, key_found, entry);
+}
+
+/*
+ * Port
+ *
+ */
+static int
+rte_pipeline_port_in_check_params(struct rte_pipeline *p,
+		struct rte_pipeline_port_in_params *params,
+		uint32_t *port_id)
+{
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (params == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__);
+		return -EINVAL;
+	}
+	if (port_id == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* ops */
+	if (params->ops == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_create == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_create function pointer NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_rx == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* burst_size */
+	if ((params->burst_size == 0) ||
+		(params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) {
+		RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* Do we have room for one more port? */
+	if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: invalid value for num_ports_in\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+rte_pipeline_port_out_check_params(struct rte_pipeline *p,
+		struct rte_pipeline_port_out_params *params,
+		uint32_t *port_id)
+{
+	rte_pipeline_port_out_action_handler f_ah;
+	rte_pipeline_port_out_action_handler_bulk f_ah_bulk;
+
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (params == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (port_id == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* ops */
+	if (params->ops == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_create == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_create function pointer NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_tx == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+				"%s: f_tx function pointer NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	if (params->ops->f_tx_bulk == NULL) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: f_tx_bulk function pointer NULL\n", __func__);
+		return -EINVAL;
+	}
+
+	f_ah = params->f_action;
+	f_ah_bulk = params->f_action_bulk;
+	if (((f_ah != NULL) && (f_ah_bulk == NULL)) ||
+	    ((f_ah == NULL) && (f_ah_bulk != NULL))) {
+		RTE_LOG(ERR, PIPELINE, "%s: Action handlers have to be either "
+			"both enabled or both disabled\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Do we have room for one more port? */
+	if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: invalid value for num_ports_out\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int
+rte_pipeline_port_in_create(struct rte_pipeline *p,
+		struct rte_pipeline_port_in_params *params,
+		uint32_t *port_id)
+{
+	struct rte_port_in *port;
+	void *h_port;
+	uint32_t id;
+	int status;
+
+	/* Check input arguments */
+	status = rte_pipeline_port_in_check_params(p, params, port_id);
+	if (status != 0)
+		return status;
+
+	id = p->num_ports_in;
+	port = &p->ports_in[id];
+
+	/* Create the port */
+	h_port = params->ops->f_create(params->arg_create, p->socket_id);
+	if (h_port == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Commit current port to the pipeline */
+	p->num_ports_in++;
+	*port_id = id;
+
+	/* Save input parameters */
+	memcpy(&port->ops, params->ops, sizeof(struct rte_port_in_ops));
+	port->f_action = params->f_action;
+	port->arg_ah = params->arg_ah;
+	port->burst_size = params->burst_size;
+
+	/* Initialize port internal data structure */
+	port->table_id = RTE_TABLE_INVALID;
+	port->h_port = h_port;
+	port->next = NULL;
+
+	return 0;
+}
+
+void
+rte_pipeline_port_in_free(struct rte_port_in *port)
+{
+	if (port->ops.f_free != NULL)
+		port->ops.f_free(port->h_port);
+}
+
+int
+rte_pipeline_port_out_create(struct rte_pipeline *p,
+		struct rte_pipeline_port_out_params *params,
+		uint32_t *port_id)
+{
+	struct rte_port_out *port;
+	void *h_port;
+	uint32_t id;
+	int status;
+
+	/* Check input arguments */
+	status = rte_pipeline_port_out_check_params(p, params, port_id);
+	if (status != 0)
+		return status;
+
+	id = p->num_ports_out;
+	port = &p->ports_out[id];
+
+	/* Create the port */
+	h_port = params->ops->f_create(params->arg_create, p->socket_id);
+	if (h_port == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Commit current port to the pipeline */
+	p->num_ports_out++;
+	*port_id = id;
+
+	/* Save input parameters */
+	memcpy(&port->ops, params->ops, sizeof(struct rte_port_out_ops));
+	port->f_action = params->f_action;
+	port->f_action_bulk = params->f_action_bulk;
+	port->arg_ah = params->arg_ah;
+
+	/* Initialize port internal data structure */
+	port->h_port = h_port;
+
+	return 0;
+}
+
+void
+rte_pipeline_port_out_free(struct rte_port_out *port)
+{
+	if (port->ops.f_free != NULL)
+		port->ops.f_free(port->h_port);
+}
+
+int
+rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
+		uint32_t port_id,
+		uint32_t table_id)
+{
+	struct rte_port_in *port;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (port_id >= p->num_ports_in) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: port IN ID %u is out of range\n",
+			__func__, port_id);
+		return -EINVAL;
+	}
+
+	if (table_id >= p->num_tables) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: Table ID %u is out of range\n",
+			__func__, table_id);
+		return -EINVAL;
+	}
+
+	port = &p->ports_in[port_id];
+	port->table_id = table_id;
+
+	return 0;
+}
+
+int
+rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
+{
+	struct rte_port_in *port, *port_prev, *port_next;
+	struct rte_port_in *port_first, *port_last;
+	uint64_t port_mask;
+	uint32_t port_prev_id, port_next_id, port_first_id, port_last_id;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (port_id >= p->num_ports_in) {
+		RTE_LOG(ERR, PIPELINE,
+			"%s: port IN ID %u is out of range\n",
+			__func__, port_id);
+		return -EINVAL;
+	}
+
+	/* Return if current input port is already enabled */
+	port_mask = 1LLU << port_id;
+	if (p->enabled_port_in_mask & port_mask)
+		return 0;
+
+	p->enabled_port_in_mask |= port_mask;
+
+	/* Add current input port to the pipeline chain of enabled ports */
+	port_prev_id = rte_mask_get_prev(p->enabled_port_in_mask, port_id);
+	port_next_id = rte_mask_get_next(p->enabled_port_in_mask, port_id);
+
+	port_prev = &p->ports_in[port_prev_id];
+	port_next = &p->ports_in[port_next_id];
+	port = &p->ports_in[port_id];
+
+	port_prev->next = port;
+	port->next = port_next;
+
+	/* Update the first and last input ports in the chain */
+	port_first_id = __builtin_ctzll(p->enabled_port_in_mask);
+	port_last_id = 63 - __builtin_clzll(p->enabled_port_in_mask);
+
+	port_first = &p->ports_in[port_first_id];
+	port_last = &p->ports_in[port_last_id];
+
+	p->port_in_first = port_first;
+	port_last->next = NULL;
+
+	return 0;
+}
+
+int
+rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
+{
+	struct rte_port_in *port_prev, *port_next, *port_first, *port_last;
+	uint64_t port_mask;
+	uint32_t port_prev_id, port_next_id, port_first_id, port_last_id;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+		__func__);
+		return -EINVAL;
+	}
+
+	if (port_id >= p->num_ports_in) {
+		RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n",
+			__func__, port_id);
+		return -EINVAL;
+	}
+
+	/* Return if current input port is already disabled */
+	port_mask = 1LLU << port_id;
+	if ((p->enabled_port_in_mask & port_mask) == 0)
+		return 0;
+
+	/* Return if no other enabled ports */
+	if (__builtin_popcountll(p->enabled_port_in_mask) == 1) {
+		p->enabled_port_in_mask &= ~port_mask;
+		p->port_in_first = NULL;
+
+		return 0;
+	}
+
+	/* Remove current input port from the pipeline chain of enabled ports */
+	port_prev_id = rte_mask_get_prev(p->enabled_port_in_mask, port_id);
+	port_next_id = rte_mask_get_next(p->enabled_port_in_mask, port_id);
+
+	port_prev = &p->ports_in[port_prev_id];
+	port_next = &p->ports_in[port_next_id];
+
+	port_prev->next = port_next;
+	p->enabled_port_in_mask &= ~port_mask;
+
+	/* Update the first and last input ports in the chain */
+	port_first_id = __builtin_ctzll(p->enabled_port_in_mask);
+	port_last_id = 63 - __builtin_clzll(p->enabled_port_in_mask);
+
+	port_first = &p->ports_in[port_first_id];
+	port_last = &p->ports_in[port_last_id];
+
+	p->port_in_first = port_first;
+	port_last->next = NULL;
+
+	return 0;
+}
+
+/*
+ * Pipeline run-time
+ *
+ */
+int
+rte_pipeline_check(struct rte_pipeline *p)
+{
+	uint32_t port_in_id;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* Check that pipeline has at least one input port, one table and
+	 * one output port */
+	if (p->num_ports_in == 0) {
+		RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (p->num_tables == 0) {
+		RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n",
+			__func__);
+		return -EINVAL;
+	}
+	if (p->num_ports_out == 0) {
+		RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	/* Check that all input ports are connected */
+	for (port_in_id = 0; port_in_id < p->num_ports_in; port_in_id++) {
+		struct rte_port_in *port_in = &p->ports_in[port_in_id];
+
+		if (port_in->table_id == RTE_TABLE_INVALID) {
+			RTE_LOG(ERR, PIPELINE,
+				"%s: Port IN ID %u is not connected\n",
+				__func__, port_in_id);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static inline void
+rte_pipeline_compute_masks(struct rte_pipeline *p, uint64_t pkts_mask)
+{
+	p->action_mask1[RTE_PIPELINE_ACTION_DROP] = 0;
+	p->action_mask1[RTE_PIPELINE_ACTION_PORT] = 0;
+	p->action_mask1[RTE_PIPELINE_ACTION_TABLE] = 0;
+
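+	/*
+	 * Fast path: (m & (m + 1)) == 0 holds when m is of the form 2^n - 1,
+	 * i.e. the valid packets occupy contiguous burst slots starting at
+	 * slot 0, so iterating over popcount(m) slots is enough; otherwise
+	 * scan all burst slots and test each bit.
+	 */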
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			uint64_t pkt_mask = 1LLU << i;
+			uint32_t pos = p->entries[i]->action;
+
+			p->action_mask1[pos] |= pkt_mask;
+		}
+	} else {
+		uint32_t i;
+
+		for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++) {
+			uint64_t pkt_mask = 1LLU << i;
+			uint32_t pos;
+
+			if ((pkt_mask & pkts_mask) == 0)
+				continue;
+
+			pos = p->entries[i]->action;
+			p->action_mask1[pos] |= pkt_mask;
+		}
+	}
+}
+
+static inline void
+rte_pipeline_action_handler_port_bulk(struct rte_pipeline *p,
+		uint64_t pkts_mask, uint32_t port_id)
+{
+	struct rte_port_out *port_out = &p->ports_out[port_id];
+
+	/* Output port user actions */
+	if (port_out->f_action_bulk != NULL) {
+		uint64_t mask = pkts_mask;
+
+		port_out->f_action_bulk(p->pkts, &pkts_mask, port_out->arg_ah);
+		p->action_mask0[RTE_PIPELINE_ACTION_DROP] |= pkts_mask ^ mask;
+	}
+
+	/* Output port TX */
+	if (pkts_mask != 0)
+		port_out->ops.f_tx_bulk(port_out->h_port, p->pkts, pkts_mask);
+}
+
+static inline void
+rte_pipeline_action_handler_port(struct rte_pipeline *p, uint64_t pkts_mask)
+{
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = p->pkts[i];
+			uint32_t port_out_id = p->entries[i]->port_id;
+			struct rte_port_out *port_out =
+				&p->ports_out[port_out_id];
+
+			/* Output port user actions */
+			if (port_out->f_action == NULL) /* Output port TX */
+				port_out->ops.f_tx(port_out->h_port, pkt);
+			else {
+				uint64_t pkt_mask = 1LLU;
+
+				port_out->f_action(pkt, &pkt_mask,
+					port_out->arg_ah);
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+					(pkt_mask ^ 1LLU) << i;
+
+				/* Output port TX */
+				if (pkt_mask != 0)
+					port_out->ops.f_tx(port_out->h_port,
+						pkt);
+			}
+		}
+	} else {
+		uint32_t i;
+
+		for (i = 0;  i < RTE_PORT_IN_BURST_SIZE_MAX; i++) {
+			uint64_t pkt_mask = 1LLU << i;
+			struct rte_mbuf *pkt;
+			struct rte_port_out *port_out;
+			uint32_t port_out_id;
+
+			if ((pkt_mask & pkts_mask) == 0)
+				continue;
+
+			pkt = p->pkts[i];
+			port_out_id = p->entries[i]->port_id;
+			port_out = &p->ports_out[port_out_id];
+
+			/* Output port user actions */
+			if (port_out->f_action == NULL) /* Output port TX */
+				port_out->ops.f_tx(port_out->h_port, pkt);
+			else {
+				pkt_mask = 1LLU;
+
+				port_out->f_action(pkt, &pkt_mask,
+					port_out->arg_ah);
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+					(pkt_mask ^ 1LLU) << i;
+
+				/* Output port TX */
+				if (pkt_mask != 0)
+					port_out->ops.f_tx(port_out->h_port,
+						pkt);
+			}
+		}
+	}
+}
+
+static inline void
+rte_pipeline_action_handler_port_meta(struct rte_pipeline *p,
+	uint64_t pkts_mask)
+{
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *pkt = p->pkts[i];
+			uint32_t port_out_id =
+				RTE_MBUF_METADATA_UINT32(pkt,
+					p->offset_port_id);
+			struct rte_port_out *port_out = &p->ports_out[
+				port_out_id];
+
+			/* Output port user actions */
+			if (port_out->f_action == NULL) /* Output port TX */
+				port_out->ops.f_tx(port_out->h_port, pkt);
+			else {
+				uint64_t pkt_mask = 1LLU;
+
+				port_out->f_action(pkt, &pkt_mask,
+					port_out->arg_ah);
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+					(pkt_mask ^ 1LLU) << i;
+
+				/* Output port TX */
+				if (pkt_mask != 0)
+					port_out->ops.f_tx(port_out->h_port,
+						pkt);
+			}
+		}
+	} else {
+		uint32_t i;
+
+		for (i = 0;  i < RTE_PORT_IN_BURST_SIZE_MAX; i++) {
+			uint64_t pkt_mask = 1LLU << i;
+			struct rte_mbuf *pkt;
+			struct rte_port_out *port_out;
+			uint32_t port_out_id;
+
+			if ((pkt_mask & pkts_mask) == 0)
+				continue;
+
+			pkt = p->pkts[i];
+			port_out_id = RTE_MBUF_METADATA_UINT32(pkt,
+				p->offset_port_id);
+			port_out = &p->ports_out[port_out_id];
+
+			/* Output port user actions */
+			if (port_out->f_action == NULL) /* Output port TX */
+				port_out->ops.f_tx(port_out->h_port, pkt);
+			else {
+				pkt_mask = 1LLU;
+
+				port_out->f_action(pkt, &pkt_mask,
+					port_out->arg_ah);
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+					(pkt_mask ^ 1LLU) << i;
+
+				/* Output port TX */
+				if (pkt_mask != 0)
+					port_out->ops.f_tx(port_out->h_port,
+						pkt);
+			}
+		}
+	}
+}
+
+static inline void
+rte_pipeline_action_handler_drop(struct rte_pipeline *p, uint64_t pkts_mask)
+{
+	if ((pkts_mask & (pkts_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++)
+			rte_pktmbuf_free(p->pkts[i]);
+	} else {
+		uint32_t i;
+
+		for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++) {
+			uint64_t pkt_mask = 1LLU << i;
+
+			if ((pkt_mask & pkts_mask) == 0)
+				continue;
+
+			rte_pktmbuf_free(p->pkts[i]);
+		}
+	}
+}
+
+int
+rte_pipeline_run(struct rte_pipeline *p)
+{
+	struct rte_port_in *port_in;
+
+	for (port_in = p->port_in_first; port_in != NULL;
+		port_in = port_in->next) {
+		uint64_t pkts_mask;
+		uint32_t n_pkts, table_id;
+
+		/* Input port RX */
+		n_pkts = port_in->ops.f_rx(port_in->h_port, p->pkts,
+			port_in->burst_size);
+		if (n_pkts == 0)
+			continue;
+
+		pkts_mask = RTE_LEN2MASK(n_pkts, uint64_t);
+		p->action_mask0[RTE_PIPELINE_ACTION_DROP] = 0;
+		p->action_mask0[RTE_PIPELINE_ACTION_PORT] = 0;
+		p->action_mask0[RTE_PIPELINE_ACTION_TABLE] = 0;
+
+		/* Input port user actions */
+		if (port_in->f_action != NULL) {
+			uint64_t mask = pkts_mask;
+
+			port_in->f_action(p->pkts, n_pkts, &pkts_mask,
+				port_in->arg_ah);
+			p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+				pkts_mask ^ mask;
+		}
+
+		/* Table */
+		for (table_id = port_in->table_id; pkts_mask != 0; ) {
+			struct rte_table *table;
+			uint64_t lookup_hit_mask, lookup_miss_mask;
+
+			/* Lookup */
+			table = &p->tables[table_id];
+			table->ops.f_lookup(table->h_table, p->pkts, pkts_mask,
+					&lookup_hit_mask, (void **) p->entries);
+			lookup_miss_mask = pkts_mask & (~lookup_hit_mask);
+
+			/* Lookup miss */
+			if (lookup_miss_mask != 0) {
+				struct rte_pipeline_table_entry *default_entry =
+					table->default_entry;
+
+				/* Table user actions */
+				if (table->f_action_miss != NULL) {
+					uint64_t mask = lookup_miss_mask;
+
+					table->f_action_miss(p->pkts,
+						&lookup_miss_mask,
+						default_entry, table->arg_ah);
+					p->action_mask0[
+						RTE_PIPELINE_ACTION_DROP] |=
+						lookup_miss_mask ^ mask;
+				}
+
+				/* Table reserved actions */
+				if ((default_entry->action ==
+					RTE_PIPELINE_ACTION_PORT) &&
+					(lookup_miss_mask != 0))
+					rte_pipeline_action_handler_port_bulk(p,
+						lookup_miss_mask,
+						default_entry->port_id);
+				else {
+					uint32_t pos = default_entry->action;
+
+					p->action_mask0[pos] = lookup_miss_mask;
+				}
+			}
+
+			/* Lookup hit */
+			if (lookup_hit_mask != 0) {
+				/* Table user actions */
+				if (table->f_action_hit != NULL) {
+					uint64_t mask = lookup_hit_mask;
+
+					table->f_action_hit(p->pkts,
+						&lookup_hit_mask,
+						p->entries, table->arg_ah);
+					p->action_mask0[
+						RTE_PIPELINE_ACTION_DROP] |=
+						lookup_hit_mask ^ mask;
+				}
+
+				/* Table reserved actions */
+				rte_pipeline_compute_masks(p, lookup_hit_mask);
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP] |=
+					p->action_mask1[
+						RTE_PIPELINE_ACTION_DROP];
+				p->action_mask0[RTE_PIPELINE_ACTION_PORT] |=
+					p->action_mask1[
+						RTE_PIPELINE_ACTION_PORT];
+				p->action_mask0[RTE_PIPELINE_ACTION_TABLE] |=
+					p->action_mask1[
+						RTE_PIPELINE_ACTION_TABLE];
+			}
+
+			/* Prepare for next iteration */
+			pkts_mask = p->action_mask0[RTE_PIPELINE_ACTION_TABLE];
+			table_id = table->table_next_id;
+			p->action_mask0[RTE_PIPELINE_ACTION_TABLE] = 0;
+		}
+
+		/* Table reserved action PORT */
+		rte_pipeline_action_handler_port(p,
+				p->action_mask0[RTE_PIPELINE_ACTION_PORT]);
+
+		/* Table reserved action PORT META */
+		rte_pipeline_action_handler_port_meta(p,
+				p->action_mask0[RTE_PIPELINE_ACTION_PORT_META]);
+
+		/* Table reserved action DROP */
+		rte_pipeline_action_handler_drop(p,
+				p->action_mask0[RTE_PIPELINE_ACTION_DROP]);
+	}
+
+	return 0;
+}
+
+int
+rte_pipeline_flush(struct rte_pipeline *p)
+{
+	uint32_t port_id;
+
+	/* Check input arguments */
+	if (p == NULL) {
+		RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	for (port_id = 0; port_id < p->num_ports_out; port_id++) {
+		struct rte_port_out *port = &p->ports_out[port_id];
+
+		if (port->ops.f_flush != NULL)
+			port->ops.f_flush(port->h_port);
+	}
+
+	return 0;
+}
+
+int
+rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
+		uint32_t port_id, struct rte_mbuf *pkt)
+{
+	struct rte_port_out *port_out = &p->ports_out[port_id];
+
+	/* Output port user actions */
+	if (port_out->f_action == NULL)
+		port_out->ops.f_tx(port_out->h_port, pkt); /* Output port TX */
+	else {
+		uint64_t pkt_mask = 1LLU;
+
+		port_out->f_action(pkt, &pkt_mask, port_out->arg_ah);
+
+		if (pkt_mask != 0) /* Output port TX */
+			port_out->ops.f_tx(port_out->h_port, pkt);
+		else
+			rte_pktmbuf_free(pkt);
+	}
+
+	return 0;
+}
diff --git a/lib/librte_pipeline/rte_pipeline.h b/lib/librte_pipeline/rte_pipeline.h
new file mode 100644
index 0000000..fb1014a
--- /dev/null
+++ b/lib/librte_pipeline/rte_pipeline.h
@@ -0,0 +1,664 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_PIPELINE_H__
+#define __INCLUDE_RTE_PIPELINE_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Pipeline
+ *
+ * This tool is part of the Intel DPDK Packet Framework tool suite and provides
+ * a standard methodology (logically similar to OpenFlow) for rapid development
+ * of complex packet processing pipelines out of ports, tables and actions.
+ *
+ * <B>Basic operation.</B> A pipeline is constructed by connecting its input
+ * ports to its output ports through a chain of lookup tables. As result of
+ * lookup operation into the current table, one of the table entries (or the
+ * default table entry, in case of lookup miss) is identified to provide the
+ * actions to be executed on the current packet and the associated action
+ * meta-data. The behavior of user actions is defined through the configurable
+ * table action handler, while the reserved actions define the next hop for the
+ * current packet (either another table, an output port or packet drop) and are
+ * handled transparently by the framework.
+ *
+ * <B>Initialization and run-time flows.</B> Once all the pipeline elements
+ * (input ports, tables, output ports) have been created, input ports connected
+ * to tables, table action handlers configured, tables populated with the
+ * initial set of entries (actions and action meta-data) and input ports
+ * enabled, the pipeline runs automatically, pushing packets from input ports
+ * to tables and output ports. At each table, the identified user actions are
+ * being executed, resulting in action meta-data (stored in the table entry)
+ * and packet meta-data (stored with the packet descriptor) being updated. The
+ * pipeline tables can have further updates and input ports can be disabled or
+ * enabled later on as required.
+ *
+ * <B>Multi-core scaling.</B> Typically, each CPU core will run its own
+ * pipeline instance. Complex application-level pipelines can be implemented by
+ * interconnecting multiple CPU core-level pipelines in tree-like topologies,
+ * as the same port devices (e.g. SW rings) can serve as output ports for the
+ * pipeline running on CPU core A, as well as input ports for the pipeline
+ * running on CPU core B. This approach enables application development using
+ * the pipeline (CPU cores connected serially), cluster/run-to-completion
+ * (CPU cores connected in parallel) or mixed (pipeline of CPU core clusters)
+ * programming models.
+ *
+ * <B>Thread safety.</B> It is possible to have multiple pipelines running on
+ * the same CPU core, but it is not allowed (for thread safety reasons) to have
+ * multiple CPU cores running the same pipeline instance.
+ *
+ ***/
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+#include <rte_port.h>
+#include <rte_table.h>
+
+/*
+ * Pipeline
+ *
+ */
+/** Opaque data type for pipeline */
+struct rte_pipeline;
+
+/** Parameters for pipeline creation  */
+struct rte_pipeline_params {
+	/** Pipeline name */
+	const char *name;
+
+	/** CPU socket ID where memory for the pipeline and its elements (ports
+	and tables) should be allocated */
+	int socket_id;
+
+	/** Offset within packet meta-data to port_id to be used by action
+	"Send packet to output port read from packet meta-data". Has to be
+	4-byte aligned. */
+	uint32_t offset_port_id;
+};
+
+/**
+ * Pipeline create
+ *
+ * @param params
+ *   Parameters for pipeline creation
+ * @return
+ *   Handle to pipeline instance on success or NULL otherwise
+ */
+struct rte_pipeline *rte_pipeline_create(struct rte_pipeline_params *params);
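+
+/*
+ * Illustrative sketch (not part of this patch): creating a pipeline
+ * instance. The name and socket ID below are placeholder values.
+ *
+ *	struct rte_pipeline_params params = {
+ *		.name = "PIPELINE0",
+ *		.socket_id = 0,
+ *		.offset_port_id = 0, /* used by action "port meta" only */
+ *	};
+ *	struct rte_pipeline *p = rte_pipeline_create(&params);
+ *
+ *	if (p == NULL)
+ *		rte_panic("Unable to create pipeline\n");
+ */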
+
+/**
+ * Pipeline free
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_free(struct rte_pipeline *p);
+
+/**
+ * Pipeline consistency check
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_check(struct rte_pipeline *p);
+
+/**
+ * Pipeline run
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_run(struct rte_pipeline *p);
+
+/**
+ * Pipeline flush
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_flush(struct rte_pipeline *p);
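+
+/*
+ * Illustrative sketch (not part of this patch): typical run-time loop on
+ * the CPU core that owns the pipeline. Flushing periodically pushes out
+ * any packets still buffered inside the output ports.
+ *
+ *	uint32_t i;
+ *
+ *	for (i = 0; ; i++) {
+ *		rte_pipeline_run(p);
+ *
+ *		if ((i & 0x3F) == 0)
+ *			rte_pipeline_flush(p);
+ *	}
+ */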
+
+/*
+ * Actions
+ *
+ */
+/** Reserved actions */
+enum rte_pipeline_action {
+	/** Drop the packet */
+	RTE_PIPELINE_ACTION_DROP = 0,
+
+	/** Send packet to output port */
+	RTE_PIPELINE_ACTION_PORT,
+
+	/** Send packet to output port read from packet meta-data */
+	RTE_PIPELINE_ACTION_PORT_META,
+
+	/** Send packet to table */
+	RTE_PIPELINE_ACTION_TABLE,
+
+	/** Number of reserved actions */
+	RTE_PIPELINE_ACTIONS
+};
+
+/*
+ * Table
+ *
+ */
+/** Maximum number of tables allowed for any given pipeline instance. The
+	value of this parameter cannot be changed. */
+#define RTE_PIPELINE_TABLE_MAX                                     64
+
+/**
+ * Head format for the table entry of any pipeline table. For any given
+ * pipeline table, all table entries should have the same size and format. For
+ * any given pipeline table, the table entry has to start with a head of this
+ * structure, which contains the reserved actions and their associated
+ * meta-data, and then optionally continues with user actions and their
+ * associated meta-data. As all the currently defined reserved actions are
+ * mutually exclusive, only one reserved action can be set per table entry.
+ */
+struct rte_pipeline_table_entry {
+	/** Reserved action */
+	enum rte_pipeline_action action;
+
+	union {
+		/** Output port ID (meta-data for "Send packet to output port"
+		action) */
+		uint32_t port_id;
+		/** Table ID (meta-data for "Send packet to table" action) */
+		uint32_t table_id;
+	};
+	/** Start of table entry area for user defined actions and meta-data */
+	uint8_t action_data[0];
+};
+
+/**
+ * Pipeline table action handler on lookup hit
+ *
+ * The action handler can decide to drop packets by resetting the associated
+ * packet bit in the pkts_mask parameter. In this case, the action handler is
+ * required not to free the packet buffer, which will be freed eventually by
+ * the pipeline.
+ *
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are valid. When
+ *   pkts_mask bit n is set, then element n of pkts array is pointing to a
+ *   valid packet and element n of entries array is pointing to a valid table
+ *   entry associated with the packet, with the association typically done by
+ *   the table lookup operation. Otherwise, element n of pkts array and element
+ *   n of entries array will not be accessed.
+ * @param entries
+ *   Set of table entries specified as array of up to 64 pointers to struct
+ *   rte_pipeline_table_entry
+ * @param arg
+ *   Opaque parameter registered by the user at the pipeline table creation
+ *   time
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_pipeline_table_action_handler_hit)(
+	struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry **entries,
+	void *arg);
+
+/**
+ * Pipeline table action handler on lookup miss
+ *
+ * The action handler can decide to drop packets by resetting the associated
+ * packet bit in the pkts_mask parameter. In this case, the action handler is
+ * required not to free the packet buffer, which will be freed eventually by
+ * the pipeline.
+ *
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are valid. When
+ *   pkts_mask bit n is set, then element n of pkts array is pointing to a
+ *   valid packet. Otherwise, element n of pkts array will not be accessed.
+ * @param entry
+ *   Single table entry associated with all the valid packets from the input
+ *   burst, specified as pointer to struct rte_pipeline_table_entry.
+ *   This entry is the pipeline table default entry that is associated by the
+ *   table lookup operation with the input packets that have resulted in lookup
+ *   miss.
+ * @param arg
+ *   Opaque parameter registered by the user at the pipeline table creation
+ *   time
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_pipeline_table_action_handler_miss)(
+	struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry *entry,
+	void *arg);
+
+/** Parameters for pipeline table creation. Action handlers have to be either
+    both enabled or both disabled (they can be disabled by setting them to
+    NULL). */
+struct rte_pipeline_table_params {
+	/** Table operations (specific to each table type) */
+	struct rte_table_ops *ops;
+	/** Opaque param to be passed to the table create operation when
+	invoked */
+	void *arg_create;
+	/** Callback function to execute the user actions on input packets in
+	case of lookup hit */
+	rte_pipeline_table_action_handler_hit f_action_hit;
+	/** Callback function to execute the user actions on input packets in
+	case of lookup miss */
+	rte_pipeline_table_action_handler_miss f_action_miss;
+
+	/** Opaque parameter to be passed to lookup hit and/or lookup miss
+	action handlers when invoked */
+	void *arg_ah;
+	/** Memory size to be reserved per table entry for storing the user
+	actions and their meta-data */
+	uint32_t action_data_size;
+};
+
+/**
+ * Pipeline table create
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param params
+ *   Parameters for pipeline table creation
+ * @param table_id
+ *   Table ID. Valid only within the scope of table IDs of the current
+ *   pipeline. Only returned after a successful invocation.
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_table_create(struct rte_pipeline *p,
+	struct rte_pipeline_table_params *params,
+	uint32_t *table_id);
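+
+/*
+ * Illustrative sketch (not part of this patch): creating a pass-through
+ * table, assuming the stub table type exported by librte_table as
+ * rte_table_stub_ops (no creation arguments, no user action handlers).
+ *
+ *	struct rte_pipeline_table_params table_params = {
+ *		.ops = &rte_table_stub_ops,
+ *		.arg_create = NULL,
+ *		.f_action_hit = NULL,
+ *		.f_action_miss = NULL,
+ *		.arg_ah = NULL,
+ *		.action_data_size = 0,
+ *	};
+ *	uint32_t table_id;
+ *
+ *	if (rte_pipeline_table_create(p, &table_params, &table_id) != 0)
+ *		rte_panic("Unable to create table\n");
+ */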
+
+/**
+ * Pipeline table default entry add
+ *
+ * The contents of the table default entry are updated with the provided
+ * actions and meta-data. Until the default entry is configured through this
+ * function, the built-in default entry has the action "Drop" and its meta-data
+ * set to all-zeros.
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param table_id
+ *   Table ID (returned by previous invocation of pipeline table create)
+ * @param default_entry
+ *   New contents for the table default entry
+ * @param default_entry_ptr
+ *   On successful invocation, pointer to the default table entry which can be
+ *   used for further read-write accesses to this table entry. This pointer
+ *   is valid until the default entry is deleted or re-added.
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
+	uint32_t table_id,
+	struct rte_pipeline_table_entry *default_entry,
+	struct rte_pipeline_table_entry **default_entry_ptr);
+
+/**
+ * Pipeline table default entry delete
+ *
+ * The table default entry is reset to the reserved action "Drop the packet",
+ * with its meta-data cleared (i.e. set to all-zeros).
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param table_id
+ *   Table ID (returned by previous invocation of pipeline table create)
+ * @param entry
+ *   On successful invocation, when entry points to a valid buffer, the
+ *   previous contents of the table default entry (as they were just before the
+ *   delete operation) are copied to this buffer
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
+	uint32_t table_id,
+	struct rte_pipeline_table_entry *entry);
+
+/**
+ * Pipeline table entry add
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param table_id
+ *   Table ID (returned by previous invocation of pipeline table create)
+ * @param key
+ *   Table entry key
+ * @param entry
+ *   New contents for the table entry identified by key
+ * @param key_found
+ *   On successful invocation, set to TRUE (value different than 0) if key was
+ *   already present in the table before the add operation and to FALSE (value
+ *   0) if not
+ * @param entry_ptr
+ *   On successful invocation, pointer to the table entry associated with key.
+ *   This can be used for further read-write accesses to this table entry and
+ *   is valid until the key is deleted from the table or re-added (usually for
+ *   associating different actions and/or action meta-data to the current key)
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_table_entry_add(struct rte_pipeline *p,
+	uint32_t table_id,
+	void *key,
+	struct rte_pipeline_table_entry *entry,
+	int *key_found,
+	struct rte_pipeline_table_entry **entry_ptr);
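+
+/*
+ * Illustrative sketch (not part of this patch): adding an entry that sends
+ * the matching packets to output port 0. The key layout is specific to the
+ * table type in use and is assumed to be set up elsewhere.
+ *
+ *	struct rte_pipeline_table_entry entry = {
+ *		.action = RTE_PIPELINE_ACTION_PORT,
+ *		.port_id = 0,
+ *	};
+ *	struct rte_pipeline_table_entry *entry_ptr;
+ *	int key_found;
+ *
+ *	if (rte_pipeline_table_entry_add(p, table_id, &key, &entry,
+ *		&key_found, &entry_ptr) != 0)
+ *		rte_panic("Unable to add table entry\n");
+ */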
+
+/**
+ * Pipeline table entry delete
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param table_id
+ *   Table ID (returned by previous invocation of pipeline table create)
+ * @param key
+ *   Table entry key
+ * @param key_found
+ *   On successful invocation, set to TRUE (value different than 0) if key was
+ *   found in the table before the delete operation and to FALSE (value 0) if
+ *   not
+ * @param entry
+ *   On successful invocation, when key is found in the table and entry points
+ *   to a valid buffer, the table entry contents (as they were before the
+ *   delete was performed) are copied to this buffer
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_table_entry_delete(struct rte_pipeline *p,
+	uint32_t table_id,
+	void *key,
+	int *key_found,
+	struct rte_pipeline_table_entry *entry);
+
+/*
+ * Port IN
+ *
+ */
+/** Maximum number of input ports allowed for any given pipeline instance. The
+	value of this parameter cannot be changed. */
+#define RTE_PIPELINE_PORT_IN_MAX                                    64
+
+/**
+ * Pipeline input port action handler
+ *
+ * The action handler can decide to drop packets by resetting the associated
+ * packet bit in the pkts_mask parameter. In this case, the action handler is
+ * required not to free the packet buffer, which will be freed eventually by
+ * the pipeline.
+ *
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param n
+ *   Number of packets in the input burst. This parameter specifies that
+ *   elements 0 to (n-1) of pkts array are valid.
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are still valid
+ *   after the action handler is executed. When pkts_mask bit n is set, then
+ *   element n of pkts array is pointing to a valid packet.
+ * @param arg
+ *   Opaque parameter registered by the user at the pipeline input port
+ *   creation time
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_pipeline_port_in_action_handler)(
+	struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	void *arg);
+
+/** Parameters for pipeline input port creation */
+struct rte_pipeline_port_in_params {
+	/** Input port operations (specific to each port type) */
+	struct rte_port_in_ops *ops;
+	/** Opaque parameter to be passed to create operation when invoked */
+	void *arg_create;
+
+	/** Callback function to execute the user actions on input packets.
+		Disabled if set to NULL. */
+	rte_pipeline_port_in_action_handler f_action;
+	/** Opaque parameter to be passed to the action handler when invoked */
+	void *arg_ah;
+
+	/** Recommended burst size for the RX operation (in number of
+	packets) */
+	uint32_t burst_size;
+};
+
+/**
+ * Pipeline input port create
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param params
+ *   Parameters for pipeline input port creation
+ * @param port_id
+ *   Input port ID. Valid only within the scope of input port IDs of the
+ *   current pipeline. Only returned after a successful invocation.
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_in_create(struct rte_pipeline *p,
+	struct rte_pipeline_port_in_params *params,
+	uint32_t *port_id);
+
+/**
+ * Pipeline input port connect to table
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param port_id
+ *   Port ID (returned by previous invocation of pipeline input port create)
+ * @param table_id
+ *   Table ID (returned by previous invocation of pipeline table create)
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
+	uint32_t port_id,
+	uint32_t table_id);
+
+/**
+ * Pipeline input port enable
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param port_id
+ *   Port ID (returned by previous invocation of pipeline input port create)
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_in_enable(struct rte_pipeline *p,
+	uint32_t port_id);
+
+/**
+ * Pipeline input port disable
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param port_id
+ *   Port ID (returned by previous invocation of pipeline input port create)
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_in_disable(struct rte_pipeline *p,
+	uint32_t port_id);
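+
+/*
+ * Illustrative sketch (not part of this patch): creating a ring reader
+ * input port, connecting it to a table and enabling it. The ring reader
+ * port type and its creation parameters are assumed to be the ones
+ * exported by librte_port (rte_port_ring_reader_ops).
+ *
+ *	struct rte_port_ring_reader_params ring_params = {
+ *		.ring = rx_ring, /* SW ring created beforehand */
+ *	};
+ *	struct rte_pipeline_port_in_params port_params = {
+ *		.ops = &rte_port_ring_reader_ops,
+ *		.arg_create = &ring_params,
+ *		.f_action = NULL,
+ *		.arg_ah = NULL,
+ *		.burst_size = 64,
+ *	};
+ *	uint32_t port_id;
+ *
+ *	if (rte_pipeline_port_in_create(p, &port_params, &port_id) ||
+ *		rte_pipeline_port_in_connect_to_table(p, port_id, table_id) ||
+ *		rte_pipeline_port_in_enable(p, port_id))
+ *		rte_panic("Unable to set up input port\n");
+ */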
+
+/*
+ * Port OUT
+ *
+ */
+/** Maximum number of output ports allowed for any given pipeline instance. The
+	value of this parameter cannot be changed. */
+#define RTE_PIPELINE_PORT_OUT_MAX                                   64
+
+/**
+ * Pipeline output port action handler for single packet
+ *
+ * The action handler can decide to drop packets by resetting the pkt_mask
+ * argument. In this case, the action handler is required not to free the
+ * packet buffer, which will be freed eventually by the pipeline.
+ *
+ * @param pkt
+ *   Input packet
+ * @param pkt_mask
+ *   Output argument set to 0 when the action handler decides to drop the input
+ *   packet and to 1LLU otherwise
+ * @param arg
+ *   Opaque parameter registered by the user at the pipeline output port
+ *   creation time
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_pipeline_port_out_action_handler)(
+	struct rte_mbuf *pkt,
+	uint64_t *pkt_mask,
+	void *arg);
+
+/**
+ * Pipeline output port action handler bulk
+ *
+ * The action handler can decide to drop packets by resetting the associated
+ * packet bit in the pkts_mask parameter. In this case, the action handler is
+ * required not to free the packet buffer, which will be freed eventually by
+ * the pipeline.
+ *
+ * @param pkts
+ *   Burst of input packets specified as array of up to 64 pointers to struct
+ *   rte_mbuf
+ * @param pkts_mask
+ *   64-bit bitmask specifying which packets in the input burst are valid. When
+ *   pkts_mask bit n is set, then element n of pkts array is pointing to a
+ *   valid packet. Otherwise, element n of pkts array will not be accessed.
+ * @param arg
+ *   Opaque parameter registered by the user at the pipeline output port
+ *   creation time
+ * @return
+ *   0 on success, error code otherwise
+ */
+typedef int (*rte_pipeline_port_out_action_handler_bulk)(
+	struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	void *arg);
+
+/** Parameters for pipeline output port creation. The action handlers have to
+be either both enabled or both disabled (by setting them to NULL). When
+enabled, the pipeline selects between them at different moments, based on the
+number of packets that have to be sent to the same output port. */
+struct rte_pipeline_port_out_params {
+	/** Output port operations (specific to each port type) */
+	struct rte_port_out_ops *ops;
+	/** Opaque parameter to be passed to create operation when invoked */
+	void *arg_create;
+
+	/** Callback function executing the user actions on single input
+	packet */
+	rte_pipeline_port_out_action_handler f_action;
+	/** Callback function executing the user actions on a burst of input
+	packets */
+	rte_pipeline_port_out_action_handler_bulk f_action_bulk;
+	/** Opaque parameter to be passed to the action handler when invoked */
+	void *arg_ah;
+};
+
+/**
+ * Pipeline output port create
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param params
+ *   Parameters for pipeline output port creation
+ * @param port_id
+ *   Output port ID. Valid only within the scope of output port IDs of the
+ *   current pipeline. Only returned after a successful invocation.
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_out_create(struct rte_pipeline *p,
+	struct rte_pipeline_port_out_params *params,
+	uint32_t *port_id);
+
+/**
+ * Pipeline output port packet insert
+ *
+ * This function is called by the table action handler whenever it generates a
+ * new packet to be sent out through one of the pipeline output ports. This
+ * packet is not part of the burst of input packets read from any of the
+ * pipeline input ports, so it is not an element of the pkts array input
+ * parameter of the table action handler. This packet can be dropped by the
+ * output port action handler.
+ *
+ * @param p
+ *   Handle to pipeline instance
+ * @param port_id
+ *   Output port ID (returned by previous invocation of pipeline output port
+ *   create) to send the packet specified by pkt
+ * @param pkt
+ *   New packet generated by the table action handler
+ * @return
+ *   0 on success, error code otherwise
+ */
+int rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
+	uint32_t port_id,
+	struct rte_mbuf *pkt);
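+
+/*
+ * Illustrative sketch (not part of this patch): a table action handler
+ * injecting a newly generated packet, assuming the pipeline handle "p" and
+ * the output port ID "out_port_id" are made available to the handler
+ * through its arg parameter, and build_packet() is a hypothetical helper.
+ *
+ *	struct rte_mbuf *new_pkt = build_packet();
+ *
+ *	if (new_pkt != NULL)
+ *		rte_pipeline_port_out_packet_insert(p, out_port_id, new_pkt);
+ */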
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a11812b..69a99a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -73,6 +73,10 @@ LDLIBS += -lrte_ivshmem
 endif
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_PIPELINE),y)
+LDLIBS += -lrte_pipeline
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_TABLE),y)
 LDLIBS += -lrte_table
 endif
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (18 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 19/23] Packet Framework librte_pipeline: Pipeline Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-10-16 16:46   ` Thomas Monjalon
  2014-06-04 18:08 ` [dpdk-dev] [v2 21/23] Packet Framework performance application Cristian Dumitrescu
                   ` (5 subsequent siblings)
  25 siblings, 1 reply; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This library provides a tool for interpreting config files that have a standard structure.

It is used by the Packet Framework examples/ip_pipeline sample application.

It originates from the examples/qos_sched sample application, making this code available as a library for other sample applications to use. The resulting code duplication with the qos_sched sample app is to be addressed later.
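
As an illustration (the section and entry names below are made up for this example), the expected file structure is INI-like: named sections in square brackets, key=value entries and ';' starting a comment:

[PORT0]
speed = 10000 ; port speed in Mbps
promisc = yes

[PORT1]
speed = 10000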

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 config/common_bsdapp             |    5 +
 config/common_linuxapp           |    5 +
 lib/Makefile                     |    1 +
 lib/librte_cfgfile/Makefile      |   53 ++++++
 lib/librte_cfgfile/rte_cfgfile.c |  354 ++++++++++++++++++++++++++++++++++++++
 lib/librte_cfgfile/rte_cfgfile.h |  195 +++++++++++++++++++++
 mk/rte.app.mk                    |    4 +
 7 files changed, 617 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_cfgfile/Makefile
 create mode 100644 lib/librte_cfgfile/rte_cfgfile.c
 create mode 100644 lib/librte_cfgfile/rte_cfgfile.h

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 565fcb6..55a1a26 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -228,6 +228,11 @@ CONFIG_RTE_LIBRTE_MALLOC_DEBUG=n
 CONFIG_RTE_MALLOC_MEMZONE_SIZE=11M
 
 #
+# Compile librte_cfgfile
+#
+CONFIG_RTE_LIBRTE_CFGFILE=y
+
+#
 # Compile librte_cmdline
 #
 CONFIG_RTE_LIBRTE_CMDLINE=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index e52f163..445a594 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -255,6 +255,11 @@ CONFIG_RTE_LIBRTE_MALLOC_DEBUG=n
 CONFIG_RTE_MALLOC_MEMZONE_SIZE=11M
 
 #
+# Compile librte_cfgfile
+#
+CONFIG_RTE_LIBRTE_CFGFILE=y
+
+#
 # Compile librte_cmdline
 #
 CONFIG_RTE_LIBRTE_CMDLINE=y
diff --git a/lib/Makefile b/lib/Makefile
index f29d66e..166bfe6 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
+DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
new file mode 100644
index 0000000..55e8701
--- /dev/null
+++ b/lib/librte_cfgfile/Makefile
@@ -0,0 +1,53 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_cfgfile.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_CFGFILE) += rte_cfgfile.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_CFGFILE)-include += rte_cfgfile.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += lib/librte_eal
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cfgfile/rte_cfgfile.c b/lib/librte_cfgfile/rte_cfgfile.c
new file mode 100644
index 0000000..26052d0
--- /dev/null
+++ b/lib/librte_cfgfile/rte_cfgfile.c
@@ -0,0 +1,354 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <rte_string_fns.h>
+
+#include "rte_cfgfile.h"
+
+struct rte_cfgfile_section {
+	char name[CFG_NAME_LEN];
+	int num_entries;
+	struct rte_cfgfile_entry *entries[0];
+};
+
+struct rte_cfgfile {
+	int flags;
+	int num_sections;
+	struct rte_cfgfile_section *sections[0];
+};
+
+/** when we resize a file structure, how many extra entries
+ * for new sections do we add in */
+#define CFG_ALLOC_SECTION_BATCH 8
+/** when we resize a section structure, how many extra entries
+ * for new entries do we add in */
+#define CFG_ALLOC_ENTRY_BATCH 16
+
+static unsigned
+_strip(char *str, unsigned len)
+{
+	int newlen = len;
+	if (len == 0)
+		return 0;
+
+	if (isspace(str[len-1])) {
+		/* strip trailing whitespace */
+		while (newlen > 0 && isspace(str[newlen - 1]))
+			str[--newlen] = '\0';
+	}
+
+	if (isspace(str[0])) {
+		/* strip leading whitespace */
+		int i, start = 1;
+		while (start < newlen && isspace(str[start]))
+			start++;
+		newlen -= start;
+		for (i = 0; i < newlen; i++)
+			str[i] = str[i+start];
+		str[i] = '\0';
+	}
+	return newlen;
+}
+
+struct rte_cfgfile *
+rte_cfgfile_load(const char *filename, int flags)
+{
+	int allocated_sections = CFG_ALLOC_SECTION_BATCH;
+	int allocated_entries = 0;
+	int curr_section = -1;
+	int curr_entry = -1;
+	char buffer[256];
+	int lineno = 0;
+	struct rte_cfgfile *cfg = NULL;
+
+	FILE *f = fopen(filename, "r");
+	if (f == NULL)
+		return NULL;
+
+	cfg = malloc(sizeof(*cfg) + sizeof(cfg->sections[0]) *
+		allocated_sections);
+	if (cfg == NULL)
+		goto error2;
+
+	memset(cfg->sections, 0, sizeof(cfg->sections[0]) * allocated_sections);
+
+	while (fgets(buffer, sizeof(buffer), f) != NULL) {
+		char *pos = NULL;
+		size_t len = strnlen(buffer, sizeof(buffer));
+		lineno++;
+		if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) {
+			printf("Error line %d - no \\n found on string. "
+					"Check if line is too long\n", lineno);
+			goto error1;
+		}
+		pos = memchr(buffer, ';', sizeof(buffer));
+		if (pos != NULL) {
+			*pos = '\0';
+			len = pos -  buffer;
+		}
+
+		len = _strip(buffer, len);
+		if (buffer[0] != '[' && memchr(buffer, '=', len) == NULL)
+			continue;
+
+		if (buffer[0] == '[') {
+			/* section heading line */
+			char *end = memchr(buffer, ']', len);
+			if (end == NULL) {
+				printf("Error line %d - no terminating ']'"
+					" character found\n", lineno);
+				goto error1;
+			}
+			*end = '\0';
+			_strip(&buffer[1], end - &buffer[1]);
+
+			/* close off old section and start a new one */
+			if (curr_section >= 0)
+				cfg->sections[curr_section]->num_entries =
+					curr_entry + 1;
+			curr_section++;
+
+			/* resize overall struct if we don't have room for more
+			sections */
+			if (curr_section == allocated_sections) {
+				allocated_sections += CFG_ALLOC_SECTION_BATCH;
+				struct rte_cfgfile *n_cfg = realloc(cfg,
+					sizeof(*cfg) + sizeof(cfg->sections[0])
+					* allocated_sections);
+				if (n_cfg == NULL) {
+					printf("Error - no more memory\n");
+					goto error1;
+				}
+				cfg = n_cfg;
+			}
+
+			/* allocate space for new section */
+			allocated_entries = CFG_ALLOC_ENTRY_BATCH;
+			curr_entry = -1;
+			cfg->sections[curr_section] = malloc(
+				sizeof(*cfg->sections[0]) +
+				sizeof(cfg->sections[0]->entries[0]) *
+				allocated_entries);
+			if (cfg->sections[curr_section] == NULL) {
+				printf("Error - no more memory\n");
+				goto error1;
+			}
+
+			rte_snprintf(cfg->sections[curr_section]->name,
+					sizeof(cfg->sections[0]->name),
+					"%s", &buffer[1]);
+		} else {
+			/* value line */
+			if (curr_section < 0) {
+				printf("Error line %d - value outside of"
+					" section\n", lineno);
+				goto error1;
+			}
+
+			struct rte_cfgfile_section *sect =
+				cfg->sections[curr_section];
+			char *split[2];
+			if (rte_strsplit(buffer, sizeof(buffer), split, 2, '=')
+				!= 2) {
+				printf("Error at line %d - cannot split "
+					"string\n", lineno);
+				goto error1;
+			}
+
+			curr_entry++;
+			if (curr_entry == allocated_entries) {
+				allocated_entries += CFG_ALLOC_ENTRY_BATCH;
+				struct rte_cfgfile_section *n_sect = realloc(
+					sect, sizeof(*sect) +
+					sizeof(sect->entries[0]) *
+					allocated_entries);
+				if (n_sect == NULL) {
+					printf("Error - no more memory\n");
+					goto error1;
+				}
+				sect = cfg->sections[curr_section] = n_sect;
+			}
+
+			sect->entries[curr_entry] = malloc(
+				sizeof(*sect->entries[0]));
+			if (sect->entries[curr_entry] == NULL) {
+				printf("Error - no more memory\n");
+				goto error1;
+			}
+
+			struct rte_cfgfile_entry *entry = sect->entries[
+				curr_entry];
+			rte_snprintf(entry->name, sizeof(entry->name), "%s",
+				split[0]);
+			rte_snprintf(entry->value, sizeof(entry->value), "%s",
+				split[1]);
+			_strip(entry->name, strnlen(entry->name,
+				sizeof(entry->name)));
+			_strip(entry->value, strnlen(entry->value,
+				sizeof(entry->value)));
+		}
+	}
+	fclose(f);
+	cfg->flags = flags;
+	if (curr_section >= 0) /* the file may contain no sections at all */
+		cfg->sections[curr_section]->num_entries = curr_entry + 1;
+	cfg->num_sections = curr_section + 1;
+	return cfg;
+
+error1:
+	/* make the partially built structure safe to pass to close() */
+	cfg->num_sections = curr_section + 1;
+	if ((curr_section >= 0) && (cfg->sections[curr_section] != NULL))
+		cfg->sections[curr_section]->num_entries = curr_entry + 1;
+	rte_cfgfile_close(cfg);
+error2:
+	fclose(f);
+	return NULL;
+}
+
+
+int rte_cfgfile_close(struct rte_cfgfile *cfg)
+{
+	int i, j;
+
+	if (cfg == NULL)
+		return -1;
+
+	for (i = 0; i < cfg->num_sections; i++) {
+		if (cfg->sections[i] != NULL) {
+			if (cfg->sections[i]->num_entries) {
+				for (j = 0; j < cfg->sections[i]->num_entries;
+					j++) {
+					if (cfg->sections[i]->entries[j] !=
+						NULL)
+						free(cfg->sections[i]->
+							entries[j]);
+				}
+			}
+			free(cfg->sections[i]);
+		}
+	}
+	free(cfg);
+
+	return 0;
+}
+
+int
+rte_cfgfile_num_sections(struct rte_cfgfile *cfg, const char *sectionname,
+size_t length)
+{
+	int i;
+	int num_sections = 0;
+	for (i = 0; i < cfg->num_sections; i++) {
+		if (strncmp(cfg->sections[i]->name, sectionname, length) == 0)
+			num_sections++;
+	}
+	return num_sections;
+}
+
+int
+rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
+	int max_sections)
+{
+	int i;
+
+	for (i = 0; i < cfg->num_sections && i < max_sections; i++)
+		rte_snprintf(sections[i], CFG_NAME_LEN, "%s",
+		cfg->sections[i]->name);
+
+	return i;
+}
+
+static const struct rte_cfgfile_section *
+_get_section(struct rte_cfgfile *cfg, const char *sectionname)
+{
+	int i;
+	for (i = 0; i < cfg->num_sections; i++) {
+		if (strncmp(cfg->sections[i]->name, sectionname,
+				sizeof(cfg->sections[0]->name)) == 0)
+			return cfg->sections[i];
+	}
+	return NULL;
+}
+
+int
+rte_cfgfile_has_section(struct rte_cfgfile *cfg, const char *sectionname)
+{
+	return (_get_section(cfg, sectionname) != NULL);
+}
+
+int
+rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
+	const char *sectionname)
+{
+	const struct rte_cfgfile_section *s = _get_section(cfg, sectionname);
+	if (s == NULL)
+		return -1;
+	return s->num_entries;
+}
+
+
+int
+rte_cfgfile_section_entries(struct rte_cfgfile *cfg, const char *sectionname,
+		struct rte_cfgfile_entry *entries, int max_entries)
+{
+	int i;
+	const struct rte_cfgfile_section *sect = _get_section(cfg, sectionname);
+	if (sect == NULL)
+		return -1;
+	for (i = 0; i < max_entries && i < sect->num_entries; i++)
+		entries[i] = *sect->entries[i];
+	return i;
+}
+
+const char *
+rte_cfgfile_get_entry(struct rte_cfgfile *cfg, const char *sectionname,
+		const char *entryname)
+{
+	int i;
+	const struct rte_cfgfile_section *sect = _get_section(cfg, sectionname);
+	if (sect == NULL)
+		return NULL;
+	for (i = 0; i < sect->num_entries; i++)
+		if (strncmp(sect->entries[i]->name, entryname, CFG_NAME_LEN)
+			== 0)
+			return sect->entries[i]->value;
+	return NULL;
+}
+
+int
+rte_cfgfile_has_entry(struct rte_cfgfile *cfg, const char *sectionname,
+		const char *entryname)
+{
+	return (rte_cfgfile_get_entry(cfg, sectionname, entryname) != NULL);
+}
diff --git a/lib/librte_cfgfile/rte_cfgfile.h b/lib/librte_cfgfile/rte_cfgfile.h
new file mode 100644
index 0000000..7c9fc91
--- /dev/null
+++ b/lib/librte_cfgfile/rte_cfgfile.h
@@ -0,0 +1,195 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_CFGFILE_H__
+#define __INCLUDE_RTE_CFGFILE_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+* @file
+* RTE Configuration File
+*
+* This library allows reading application-defined parameters from a standard
+* format configuration file.
+*
+***/
+
+#define CFG_NAME_LEN 32
+#define CFG_VALUE_LEN 64
+
+/** Configuration file */
+struct rte_cfgfile;
+
+/** Configuration file entry */
+struct rte_cfgfile_entry {
+	char name[CFG_NAME_LEN]; /**< Name */
+	char value[CFG_VALUE_LEN]; /**< Value */
+};
+
+/**
+* Open config file
+*
+* @param filename
+*   Config file name
+* @param flags
+*   Config file flags. Reserved for future use; must be set to 0.
+* @return
+*   Handle to configuration file on success, NULL otherwise
+*/
+struct rte_cfgfile *rte_cfgfile_load(const char *filename, int flags);
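+
+/*
+ * Illustrative sketch (not part of this patch; file, section and entry
+ * names are placeholders): loading a config file and reading one entry.
+ *
+ *	struct rte_cfgfile *cfg = rte_cfgfile_load("app.cfg", 0);
+ *
+ *	if (cfg != NULL) {
+ *		if (rte_cfgfile_has_entry(cfg, "PORT0", "speed")) {
+ *			const char *speed = rte_cfgfile_get_entry(cfg,
+ *				"PORT0", "speed");
+ *			/* interpret the value of "speed" here */
+ *		}
+ *		rte_cfgfile_close(cfg);
+ *	}
+ */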
+
+/**
+* Get the number of sections in the config file whose name matches a given
+* prefix
+*
+* @param cfg
+*   Config file
+* @param sec_name
+*   Section name prefix to match
+* @param length
+*   Number of characters of sec_name to compare
+* @return
+*   Number of sections whose name matches the sec_name prefix
+*/
+int rte_cfgfile_num_sections(struct rte_cfgfile *cfg, const char *sec_name,
+	size_t length);
+
+/**
+* Get name of all config file sections.
+*
+* Fills in the array sections with the name of all the sections in the file
+* (up to the number of max_sections sections).
+*
+* @param cfg
+*   Config file
+* @param sections
+*   Array containing section names after successful invocation. Each element
+*   of this array should be preallocated by the user with at least
+*   CFG_NAME_LEN characters.
+* @param max_sections
+*   Maximum number of section names to be stored in sections array
+* @return
+*   Number of section names written to the sections array
+*/
+int rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
+	int max_sections);
+
+/**
+* Check if given section exists in config file
+*
+* @param cfg
+*   Config file
+* @param sectionname
+*   Section name
+* @return
+*   TRUE (value different than 0) if section exists, FALSE (value 0) otherwise
+*/
+int rte_cfgfile_has_section(struct rte_cfgfile *cfg, const char *sectionname);
+
+/**
+* Get number of entries in given config file section
+*
+* @param cfg
+*   Config file
+* @param sectionname
+*   Section name
+* @return
+*   Number of entries in the section, or -1 if the section is not found
+*/
+int rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
+	const char *sectionname);
+
+/** Get section entries as key-value pairs
+*
+* @param cfg
+*   Config file
+* @param sectionname
+*   Section name
+* @param entries
+*   Pre-allocated array of at least max_entries entries where the section
+*   entries are stored as key-value pair after successful invocation
+* @param max_entries
+*   Maximum number of section entries to be stored in entries array
+* @return
+*   Number of entries stored in the entries array, or -1 if the section is
+*   not found
+*/
+int rte_cfgfile_section_entries(struct rte_cfgfile *cfg,
+	const char *sectionname,
+	struct rte_cfgfile_entry *entries,
+	int max_entries);
+
+/** Get value of the named entry in named config file section
+*
+* @param cfg
+*   Config file
+* @param sectionname
+*   Section name
+* @param entryname
+*   Entry name
+* @return
+*   Entry value, or NULL if the section or entry is not found
+*/
+const char *rte_cfgfile_get_entry(struct rte_cfgfile *cfg,
+	const char *sectionname,
+	const char *entryname);
+
+/** Check if given entry exists in named config file section
+*
+* @param cfg
+*   Config file
+* @param sectionname
+*   Section name
+* @param entryname
+*   Entry name
+* @return
+*   TRUE (value different than 0) if entry exists, FALSE (value 0) otherwise
+*/
+int rte_cfgfile_has_entry(struct rte_cfgfile *cfg, const char *sectionname,
+	const char *entryname);
+
+/** Close config file
+*
+* @param cfg
+*   Config file
+* @return
+*   0 on success, error code otherwise
+*/
+int rte_cfgfile_close(struct rte_cfgfile *cfg);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 69a99a7..22f7338 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -158,6 +158,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_CMDLINE),y)
 LDLIBS += -lrte_cmdline
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_CFGFILE),y)
+LDLIBS += -lrte_cfgfile
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_XENVIRT),y)
 LDLIBS += -lrte_pmd_xenvirt
 LDLIBS += -lxenstore
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 21/23] Packet Framework performance application
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (19 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-04 18:08 ` [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app Cristian Dumitrescu
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This application is purpose-built to benchmark the performance of the Intel DPDK Packet Framework toolbox.

It uses 3 CPU cores connected in a chain through SW rings (NICs --> Core A --> Core B --> Core C --> NICs)
1. Core A: reads packets from NIC ports and writes them to SW queues;
2. Core B: instantiates a Packet Framework pipeline that uses ring reader input ports, a table whose type is selected through command line arguments (--none, --stub, --lpm, --acl, --hash[-spec]-KEYSZ-TYPE, with KEYSZ as 8, 16 or 32 bytes and TYPE as ext (extendible bucket) or lru (LRU)) and ring writer output ports;
3. Core C: reads packets from SW rings and writes them to NIC ports.

Please check the Intel DPDK Sample App Guide for a full description.
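
For example (illustrative command line only; the EAL core mask and memory channel options depend on the target machine), the LPM test can be run on 3 CPU cores and 2 NIC ports as follows:

testpipeline -c 0x7 -n 4 -- -p 0x3 --lpm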

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 app/Makefile                          |    1 +
 app/test-pipeline/Makefile            |   66 +++++
 app/test-pipeline/config.c            |  248 +++++++++++++++++
 app/test-pipeline/init.c              |  295 ++++++++++++++++++++
 app/test-pipeline/main.c              |  180 ++++++++++++
 app/test-pipeline/main.h              |  148 ++++++++++
 app/test-pipeline/pipeline_acl.c      |  278 +++++++++++++++++++
 app/test-pipeline/pipeline_hash.c     |  487 +++++++++++++++++++++++++++++++++
 app/test-pipeline/pipeline_lpm.c      |  196 +++++++++++++
 app/test-pipeline/pipeline_lpm_ipv6.c |  200 ++++++++++++++
 app/test-pipeline/pipeline_stub.c     |  165 +++++++++++
 app/test-pipeline/runtime.c           |  185 +++++++++++++
 config/common_bsdapp                  |    5 +
 config/common_linuxapp                |    5 +
 14 files changed, 2459 insertions(+), 0 deletions(-)
 create mode 100644 app/test-pipeline/Makefile
 create mode 100644 app/test-pipeline/config.c
 create mode 100644 app/test-pipeline/init.c
 create mode 100644 app/test-pipeline/main.c
 create mode 100644 app/test-pipeline/main.h
 create mode 100644 app/test-pipeline/pipeline_acl.c
 create mode 100644 app/test-pipeline/pipeline_hash.c
 create mode 100644 app/test-pipeline/pipeline_lpm.c
 create mode 100644 app/test-pipeline/pipeline_lpm_ipv6.c
 create mode 100644 app/test-pipeline/pipeline_stub.c
 create mode 100644 app/test-pipeline/runtime.c

diff --git a/app/Makefile b/app/Makefile
index 6267d7b..0359cbb 100644
--- a/app/Makefile
+++ b/app/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_APP_TEST) += test
+DIRS-$(CONFIG_RTE_TEST_PIPELINE) += test-pipeline
 DIRS-$(CONFIG_RTE_TEST_PMD) += test-pmd
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += cmdline_test
 DIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += dump_cfg
diff --git a/app/test-pipeline/Makefile b/app/test-pipeline/Makefile
new file mode 100644
index 0000000..63401db
--- /dev/null
+++ b/app/test-pipeline/Makefile
@@ -0,0 +1,66 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+APP = testpipeline
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+ifeq ($(CONFIG_RTE_LIBRTE_PMD_PCAP),y)
+LDFLAGS += -lpcap
+endif
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) := main.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += config.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += init.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += runtime.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_stub.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_hash.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_lpm.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_lpm_ipv6.c
+
+# include ACL lib if available
+ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_acl.c
+endif
+
+# this application needs libraries first
+DEPDIRS-$(CONFIG_RTE_TEST_PIPELINE) += lib
+
+include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-pipeline/config.c b/app/test-pipeline/config.c
new file mode 100644
index 0000000..85b6996
--- /dev/null
+++ b/app/test-pipeline/config.c
@@ -0,0 +1,248 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+#include <rte_string_fns.h>
+
+#include "main.h"
+
+struct app_params app;
+
+static const char usage[] = "\n";
+
+void
+app_print_usage(void)
+{
+	printf("%s", usage);
+}
+
+static int
+app_parse_port_mask(const char *arg)
+{
+	char *end = NULL;
+	uint64_t port_mask;
+	uint32_t i;
+
+	if (arg[0] == '\0')
+		return -1;
+
+	port_mask = strtoul(arg, &end, 16);
+	if ((end == NULL) || (*end != '\0'))
+		return -2;
+
+	if (port_mask == 0)
+		return -3;
+
+	app.n_ports = 0;
+	for (i = 0; i < 64; i++) {
+		if ((port_mask & (1LLU << i)) == 0)
+			continue;
+
+		if (app.n_ports >= APP_MAX_PORTS)
+			return -4;
+
+		app.ports[app.n_ports] = i;
+		app.n_ports++;
+	}
+
+	if (!rte_is_power_of_2(app.n_ports))
+		return -5;
+
+	return 0;
+}
+
+struct {
+	const char *name;
+	uint32_t value;
+} app_args_table[] = {
+	{"none", e_APP_PIPELINE_NONE},
+	{"stub", e_APP_PIPELINE_STUB},
+	{"hash-8-ext", e_APP_PIPELINE_HASH_KEY8_EXT},
+	{"hash-8-lru", e_APP_PIPELINE_HASH_KEY8_LRU},
+	{"hash-16-ext", e_APP_PIPELINE_HASH_KEY16_EXT},
+	{"hash-16-lru", e_APP_PIPELINE_HASH_KEY16_LRU},
+	{"hash-32-ext", e_APP_PIPELINE_HASH_KEY32_EXT},
+	{"hash-32-lru", e_APP_PIPELINE_HASH_KEY32_LRU},
+	{"hash-spec-8-ext", e_APP_PIPELINE_HASH_SPEC_KEY8_EXT},
+	{"hash-spec-8-lru", e_APP_PIPELINE_HASH_SPEC_KEY8_LRU},
+	{"hash-spec-16-ext", e_APP_PIPELINE_HASH_SPEC_KEY16_EXT},
+	{"hash-spec-16-lru", e_APP_PIPELINE_HASH_SPEC_KEY16_LRU},
+	{"hash-spec-32-ext", e_APP_PIPELINE_HASH_SPEC_KEY32_EXT},
+	{"hash-spec-32-lru", e_APP_PIPELINE_HASH_SPEC_KEY32_LRU},
+	{"acl", e_APP_PIPELINE_ACL},
+	{"lpm", e_APP_PIPELINE_LPM},
+	{"lpm-ipv6", e_APP_PIPELINE_LPM_IPV6},
+};
+
+int
+app_parse_args(int argc, char **argv)
+{
+	int opt, ret;
+	char **argvopt;
+	int option_index;
+	char *prgname = argv[0];
+	static struct option lgopts[] = {
+		{"none", 0, 0, 0},
+		{"stub", 0, 0, 0},
+		{"hash-8-ext", 0, 0, 0},
+		{"hash-8-lru", 0, 0, 0},
+		{"hash-16-ext", 0, 0, 0},
+		{"hash-16-lru", 0, 0, 0},
+		{"hash-32-ext", 0, 0, 0},
+		{"hash-32-lru", 0, 0, 0},
+		{"hash-spec-8-ext", 0, 0, 0},
+		{"hash-spec-8-lru", 0, 0, 0},
+		{"hash-spec-16-ext", 0, 0, 0},
+		{"hash-spec-16-lru", 0, 0, 0},
+		{"hash-spec-32-ext", 0, 0, 0},
+		{"hash-spec-32-lru", 0, 0, 0},
+		{"acl", 0, 0, 0},
+		{"lpm", 0, 0, 0},
+		{"lpm-ipv6", 0, 0, 0},
+		{NULL, 0, 0, 0}
+	};
+	uint32_t lcores[3], n_lcores, lcore_id, pipeline_type_provided;
+
+	/* EAL args */
+	n_lcores = 0;
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		if (rte_lcore_is_enabled(lcore_id) == 0)
+			continue;
+
+		if (n_lcores >= 3) {
+			RTE_LOG(ERR, USER1, "Number of cores must be 3\n");
+			app_print_usage();
+			return -1;
+		}
+
+		lcores[n_lcores] = lcore_id;
+		n_lcores++;
+	}
+
+	if (n_lcores != 3) {
+		RTE_LOG(ERR, USER1, "Number of cores must be 3\n");
+		app_print_usage();
+		return -1;
+	}
+
+	app.core_rx = lcores[0];
+	app.core_worker = lcores[1];
+	app.core_tx = lcores[2];
+
+	/* Non-EAL args */
+	argvopt = argv;
+
+	app.pipeline_type = e_APP_PIPELINE_HASH_KEY16_LRU;
+	pipeline_type_provided = 0;
+
+	while ((opt = getopt_long(argc, argvopt, "p:",
+			lgopts, &option_index)) != EOF) {
+		switch (opt) {
+		case 'p':
+			if (app_parse_port_mask(optarg) < 0) {
+				app_print_usage();
+				return -1;
+			}
+			break;
+
+		case 0: /* long options */
+			if (!pipeline_type_provided) {
+				uint32_t i;
+
+				for (i = 0; i < e_APP_PIPELINES; i++) {
+					if (!strcmp(lgopts[option_index].name,
+						app_args_table[i].name)) {
+						app.pipeline_type =
+							app_args_table[i].value;
+						pipeline_type_provided = 1;
+						break;
+					}
+				}
+
+				break;
+			}
+
+			app_print_usage();
+			return -1;
+
+		default:
+			return -1;
+		}
+	}
+
+	if (optind >= 0)
+		argv[optind - 1] = prgname;
+
+	ret = optind - 1;
+	optind = 0; /* reset getopt lib */
+	return ret;
+}
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
new file mode 100644
index 0000000..12b104a
--- /dev/null
+++ b/app/test-pipeline/init.c
@@ -0,0 +1,295 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+
+#include "main.h"
+
+struct app_params app = {
+	/* Ports */
+	.n_ports = APP_MAX_PORTS,
+	.port_rx_ring_size = 128,
+	.port_tx_ring_size = 512,
+
+	/* Rings */
+	.ring_rx_size = 128,
+	.ring_tx_size = 128,
+
+	/* Buffer pool */
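+	/* Each buffer: mbuf meta-data + headroom + 2KB of packet data */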
+	.pool_buffer_size = 2048 + sizeof(struct rte_mbuf) +
+		RTE_PKTMBUF_HEADROOM,
+	.pool_size = 32 * 1024,
+	.pool_cache_size = 256,
+
+	/* Burst sizes */
+	.burst_size_rx_read = 64,
+	.burst_size_rx_write = 64,
+	.burst_size_worker_read = 64,
+	.burst_size_worker_write = 64,
+	.burst_size_tx_read = 64,
+	.burst_size_tx_write = 64,
+};
+
+static struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /* Header Split disabled */
+		.hw_ip_checksum = 1, /* IP checksum offload enabled */
+		.hw_vlan_filter = 0, /* VLAN filtering disabled */
+		.jumbo_frame    = 0, /* Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /* CRC stripping by hardware disabled */
+	},
+	.rx_adv_conf = {
+		.rss_conf = {
+			.rss_key = NULL,
+			.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
+		},
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+static struct rte_eth_rxconf rx_conf = {
+	.rx_thresh = {
+		.pthresh = 8,
+		.hthresh = 8,
+		.wthresh = 4,
+	},
+	.rx_free_thresh = 64,
+	.rx_drop_en = 0,
+};
+
+static struct rte_eth_txconf tx_conf = {
+	.tx_thresh = {
+		.pthresh = 36,
+		.hthresh = 0,
+		.wthresh = 0,
+	},
+	.tx_free_thresh = 0,
+	.tx_rs_thresh = 0,
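+	/* Zero thresholds: fall back to the PMD default values */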
+};
+
+static void
+app_init_mbuf_pools(void)
+{
+	/* Init the buffer pool */
+	RTE_LOG(INFO, USER1, "Creating the mbuf pool ...\n");
+	app.pool = rte_mempool_create(
+		"mempool",
+		app.pool_size,
+		app.pool_buffer_size,
+		app.pool_cache_size,
+		sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, NULL,
+		rte_pktmbuf_init, NULL,
+		rte_socket_id(),
+		0);
+	if (app.pool == NULL)
+		rte_panic("Cannot create mbuf pool\n");
+}
+
+static void
+app_init_rings(void)
+{
+	uint32_t i;
+
+	for (i = 0; i < app.n_ports; i++) {
+		char name[32];
+
+		rte_snprintf(name, sizeof(name), "app_ring_rx_%u", i);
+
+		app.rings_rx[i] = rte_ring_create(
+			name,
+			app.ring_rx_size,
+			rte_socket_id(),
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+		if (app.rings_rx[i] == NULL)
+			rte_panic("Cannot create RX ring %u\n", i);
+	}
+
+	for (i = 0; i < app.n_ports; i++) {
+		char name[32];
+
+		rte_snprintf(name, sizeof(name), "app_ring_tx_%u", i);
+
+		app.rings_tx[i] = rte_ring_create(
+			name,
+			app.ring_tx_size,
+			rte_socket_id(),
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+		if (app.rings_tx[i] == NULL)
+			rte_panic("Cannot create TX ring %u\n", i);
+	}
+}
+
+static void
+app_ports_check_link(void)
+{
+	uint32_t all_ports_up, i;
+
+	all_ports_up = 1;
+
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_eth_link link;
+		uint8_t port;
+
+		port = (uint8_t) app.ports[i];
+		memset(&link, 0, sizeof(link));
+		rte_eth_link_get_nowait(port, &link);
+		RTE_LOG(INFO, USER1, "Port %u (%u Gbps) %s\n",
+			port,
+			link.link_speed / 1000,
+			link.link_status ? "UP" : "DOWN");
+
+		if (link.link_status == 0)
+			all_ports_up = 0;
+	}
+
+	if (all_ports_up == 0)
+		rte_panic("Some NIC ports are DOWN\n");
+}
+
+static void
+app_init_ports(void)
+{
+	uint32_t i;
+
+	/* Init driver */
+	RTE_LOG(INFO, USER1, "Initializing the PMD driver ...\n");
+	if (rte_eal_pci_probe() < 0)
+		rte_panic("Cannot probe PCI\n");
+
+	/* Init NIC ports, then start the ports */
+	for (i = 0; i < app.n_ports; i++) {
+		uint8_t port;
+		int ret;
+
+		port = (uint8_t) app.ports[i];
+		RTE_LOG(INFO, USER1, "Initializing NIC port %u ...\n", port);
+
+		/* Init port */
+		ret = rte_eth_dev_configure(
+			port,
+			1, /* one RX queue per port */
+			1, /* one TX queue per port */
+			&port_conf);
+		if (ret < 0)
+			rte_panic("Cannot init NIC port %u (%d)\n", port, ret);
+
+		rte_eth_promiscuous_enable(port);
+
+		/* Init RX queues */
+		ret = rte_eth_rx_queue_setup(
+			port,
+			0,
+			app.port_rx_ring_size,
+			rte_eth_dev_socket_id(port),
+			&rx_conf,
+			app.pool);
+		if (ret < 0)
+			rte_panic("Cannot init RX for port %u (%d)\n",
+				(uint32_t) port, ret);
+
+		/* Init TX queues */
+		ret = rte_eth_tx_queue_setup(
+			port,
+			0,
+			app.port_tx_ring_size,
+			rte_eth_dev_socket_id(port),
+			&tx_conf);
+		if (ret < 0)
+			rte_panic("Cannot init TX for port %u (%d)\n",
+				(uint32_t) port, ret);
+
+		/* Start port */
+		ret = rte_eth_dev_start(port);
+		if (ret < 0)
+			rte_panic("Cannot start port %u (%d)\n", port, ret);
+	}
+
+	app_ports_check_link();
+}
+
+void
+app_init(void)
+{
+	app_init_mbuf_pools();
+	app_init_rings();
+	app_init_ports();
+
+	RTE_LOG(INFO, USER1, "Initialization completed\n");
+}
diff --git a/app/test-pipeline/main.c b/app/test-pipeline/main.c
new file mode 100644
index 0000000..0a2a597
--- /dev/null
+++ b/app/test-pipeline/main.c
@@ -0,0 +1,180 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+
+#include "main.h"
+
+int
+MAIN(int argc, char **argv)
+{
+	uint32_t lcore;
+	int ret;
+
+	/* Init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		return -1;
+	argc -= ret;
+	argv += ret;
+
+	/* Parse application arguments (after the EAL ones) */
+	ret = app_parse_args(argc, argv);
+	if (ret < 0) {
+		app_print_usage();
+		return -1;
+	}
+
+	/* Init */
+	app_init();
+
+	/* Launch the per-lcore main loop on every lcore */
+	rte_eal_mp_remote_launch(app_lcore_main_loop, NULL, CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore) {
+		if (rte_eal_wait_lcore(lcore) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
+int
+app_lcore_main_loop(__attribute__((unused)) void *arg)
+{
+	unsigned lcore;
+
+	lcore = rte_lcore_id();
+
+	if (lcore == app.core_rx) {
+		switch (app.pipeline_type) {
+		case e_APP_PIPELINE_ACL:
+			app_main_loop_rx();
+			return 0;
+
+		default:
+			app_main_loop_rx_metadata();
+			return 0;
+		}
+	}
+
+	if (lcore == app.core_worker) {
+		switch (app.pipeline_type) {
+		case e_APP_PIPELINE_STUB:
+			app_main_loop_worker_pipeline_stub();
+			return 0;
+
+		case e_APP_PIPELINE_HASH_KEY8_EXT:
+		case e_APP_PIPELINE_HASH_KEY8_LRU:
+		case e_APP_PIPELINE_HASH_KEY16_EXT:
+		case e_APP_PIPELINE_HASH_KEY16_LRU:
+		case e_APP_PIPELINE_HASH_KEY32_EXT:
+		case e_APP_PIPELINE_HASH_KEY32_LRU:
+		case e_APP_PIPELINE_HASH_SPEC_KEY8_EXT:
+		case e_APP_PIPELINE_HASH_SPEC_KEY8_LRU:
+		case e_APP_PIPELINE_HASH_SPEC_KEY16_EXT:
+		case e_APP_PIPELINE_HASH_SPEC_KEY16_LRU:
+		case e_APP_PIPELINE_HASH_SPEC_KEY32_EXT:
+		case e_APP_PIPELINE_HASH_SPEC_KEY32_LRU:
+			app_main_loop_worker_pipeline_hash();
+			return 0;
+
+		case e_APP_PIPELINE_ACL:
+#ifndef RTE_LIBRTE_ACL
+			rte_exit(EXIT_FAILURE, "ACL not present in build\n");
+#else
+			app_main_loop_worker_pipeline_acl();
+			return 0;
+#endif
+
+		case e_APP_PIPELINE_LPM:
+			app_main_loop_worker_pipeline_lpm();
+			return 0;
+
+		case e_APP_PIPELINE_LPM_IPV6:
+			app_main_loop_worker_pipeline_lpm_ipv6();
+			return 0;
+
+		case e_APP_PIPELINE_NONE:
+		default:
+			app_main_loop_worker();
+			return 0;
+		}
+	}
+
+	if (lcore == app.core_tx) {
+		app_main_loop_tx();
+		return 0;
+	}
+
+	return 0;
+}
diff --git a/app/test-pipeline/main.h b/app/test-pipeline/main.h
new file mode 100644
index 0000000..2ed2928
--- /dev/null
+++ b/app/test-pipeline/main.h
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _MAIN_H_
+#define _MAIN_H_
+
+#ifndef APP_MBUF_ARRAY_SIZE
+#define APP_MBUF_ARRAY_SIZE 256
+#endif
+
+struct app_mbuf_array {
+	struct rte_mbuf *array[APP_MBUF_ARRAY_SIZE];
+	uint16_t n_mbufs;
+};
+
+#ifndef APP_MAX_PORTS
+#define APP_MAX_PORTS 4
+#endif
+
+struct app_params {
+	/* CPU cores */
+	uint32_t core_rx;
+	uint32_t core_worker;
+	uint32_t core_tx;
+
+	/* Ports */
+	uint32_t ports[APP_MAX_PORTS];
+	uint32_t n_ports;
+	uint32_t port_rx_ring_size;
+	uint32_t port_tx_ring_size;
+
+	/* Rings */
+	struct rte_ring *rings_rx[APP_MAX_PORTS];
+	struct rte_ring *rings_tx[APP_MAX_PORTS];
+	uint32_t ring_rx_size;
+	uint32_t ring_tx_size;
+
+	/* Internal buffers */
+	struct app_mbuf_array mbuf_rx;
+	struct app_mbuf_array mbuf_tx[APP_MAX_PORTS];
+
+	/* Buffer pool */
+	struct rte_mempool *pool;
+	uint32_t pool_buffer_size;
+	uint32_t pool_size;
+	uint32_t pool_cache_size;
+
+	/* Burst sizes */
+	uint32_t burst_size_rx_read;
+	uint32_t burst_size_rx_write;
+	uint32_t burst_size_worker_read;
+	uint32_t burst_size_worker_write;
+	uint32_t burst_size_tx_read;
+	uint32_t burst_size_tx_write;
+
+	/* App behavior */
+	uint32_t pipeline_type;
+} __rte_cache_aligned;
+
+extern struct app_params app;
+
+int app_parse_args(int argc, char **argv);
+void app_print_usage(void);
+void app_init(void);
+int app_lcore_main_loop(void *arg);
+
+/* Pipeline */
+enum {
+	e_APP_PIPELINE_NONE = 0,
+	e_APP_PIPELINE_STUB,
+
+	e_APP_PIPELINE_HASH_KEY8_EXT,
+	e_APP_PIPELINE_HASH_KEY8_LRU,
+	e_APP_PIPELINE_HASH_KEY16_EXT,
+	e_APP_PIPELINE_HASH_KEY16_LRU,
+	e_APP_PIPELINE_HASH_KEY32_EXT,
+	e_APP_PIPELINE_HASH_KEY32_LRU,
+
+	e_APP_PIPELINE_HASH_SPEC_KEY8_EXT,
+	e_APP_PIPELINE_HASH_SPEC_KEY8_LRU,
+	e_APP_PIPELINE_HASH_SPEC_KEY16_EXT,
+	e_APP_PIPELINE_HASH_SPEC_KEY16_LRU,
+	e_APP_PIPELINE_HASH_SPEC_KEY32_EXT,
+	e_APP_PIPELINE_HASH_SPEC_KEY32_LRU,
+
+	e_APP_PIPELINE_ACL,
+	e_APP_PIPELINE_LPM,
+	e_APP_PIPELINE_LPM_IPV6,
+	e_APP_PIPELINES
+};
+
+void app_main_loop_rx(void);
+void app_main_loop_rx_metadata(void);
+uint64_t test_hash(void *key, uint32_t key_size, uint64_t seed);
+
+void app_main_loop_worker(void);
+void app_main_loop_worker_pipeline_stub(void);
+void app_main_loop_worker_pipeline_hash(void);
+void app_main_loop_worker_pipeline_acl(void);
+void app_main_loop_worker_pipeline_lpm(void);
+void app_main_loop_worker_pipeline_lpm_ipv6(void);
+
+void app_main_loop_tx(void);
+
+/* 0 = no periodic flush; otherwise a 2^N - 1 mask giving the flush period */
+#ifndef APP_FLUSH
+#define APP_FLUSH 0
+#endif
+
+#ifdef RTE_EXEC_ENV_BAREMETAL
+#define MAIN _main
+#else
+#define MAIN main
+#endif
+
+int MAIN(int argc, char **argv);
+
+#endif /* _MAIN_H_ */
diff --git a/app/test-pipeline/pipeline_acl.c b/app/test-pipeline/pipeline_acl.c
new file mode 100644
index 0000000..f163e55
--- /dev/null
+++ b/app/test-pipeline/pipeline_acl.c
@@ -0,0 +1,278 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_acl.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+enum {
+	PROTO_FIELD_IPV4,
+	SRC_FIELD_IPV4,
+	DST_FIELD_IPV4,
+	SRCP_FIELD_IPV4,
+	DSTP_FIELD_IPV4,
+	NUM_FIELDS_IPV4
+};
+
+/*
+ * Define the 'shape' of the data to match on, i.e. the meta-data
+ * of the ACL rules. In this case the rules are IPv4 5-tuples:
+ * protocol, source/destination addresses and source/destination
+ * ports.
+ */
+struct rte_acl_field_def ipv4_field_formats[NUM_FIELDS_IPV4] = {
+	{
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint8_t),
+		.field_index = PROTO_FIELD_IPV4,
+		.input_index = PROTO_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, next_proto_id),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = SRC_FIELD_IPV4,
+		.input_index = SRC_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, src_addr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = DST_FIELD_IPV4,
+		.input_index = DST_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, dst_addr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = SRCP_FIELD_IPV4,
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = DSTP_FIELD_IPV4,
+		/* The two 16-bit port fields share one 32-bit input word */
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr) +
+			sizeof(uint16_t),
+	},
+};
+
+
+void
+app_main_loop_worker_pipeline_acl(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1,
+		"Core %u is doing work (pipeline with ACL table)\n",
+		rte_lcore_id());
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.burst_size_worker_read,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings_tx[i],
+			.tx_burst_sz = app.burst_size_worker_write,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	{
+		struct rte_table_acl_params table_acl_params = {
+			.name = "test", /* unique identifier for acl contexts */
+			.n_rules = 1 << 5,
+			.n_rule_fields = DIM(ipv4_field_formats),
+		};
+
+		/* Copy in the rule meta-data defined above into the params */
+		memcpy(table_acl_params.field_format, ipv4_field_formats,
+			sizeof(ipv4_field_formats));
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_acl_ops,
+			.arg_create = &table_acl_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the ACL table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry table_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i & (app.n_ports - 1)]},
+		};
+		struct rte_table_acl_rule_add_params rule_params;
+		struct rte_pipeline_table_entry *entry_ptr;
+		int key_found, ret;
+
+		memset(&rule_params, 0, sizeof(rule_params));
+
+		/* Set the rule values */
+		rule_params.field_value[SRC_FIELD_IPV4].value.u32 = 0;
+		rule_params.field_value[SRC_FIELD_IPV4].mask_range.u32 = 0;
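+		/* Split the 0.0.0.0/8 destination space evenly across the
+		 * output ports (n_ports must be a power of 2) */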
+		rule_params.field_value[DST_FIELD_IPV4].value.u32 =
+			i << (24 - __builtin_popcount(app.n_ports - 1));
+		rule_params.field_value[DST_FIELD_IPV4].mask_range.u32 =
+			8 + __builtin_popcount(app.n_ports - 1);
+		rule_params.field_value[SRCP_FIELD_IPV4].value.u16 = 0;
+		rule_params.field_value[SRCP_FIELD_IPV4].mask_range.u16 =
+			UINT16_MAX;
+		rule_params.field_value[DSTP_FIELD_IPV4].value.u16 = 0;
+		rule_params.field_value[DSTP_FIELD_IPV4].mask_range.u16 =
+			UINT16_MAX;
+		rule_params.field_value[PROTO_FIELD_IPV4].value.u8 = 0;
+		rule_params.field_value[PROTO_FIELD_IPV4].mask_range.u8 = 0;
+
+		rule_params.priority = 0;
+
+		uint32_t dst_addr = rule_params.field_value[DST_FIELD_IPV4].
+			value.u32;
+		uint32_t dst_mask =
+			rule_params.field_value[DST_FIELD_IPV4].mask_range.u32;
+
+		printf("Adding rule to ACL table (IPv4 destination = "
+			"%u.%u.%u.%u/%u => port out = %u)\n",
+			(dst_addr & 0xFF000000) >> 24,
+			(dst_addr & 0x00FF0000) >> 16,
+			(dst_addr & 0x0000FF00) >> 8,
+			dst_addr & 0x000000FF,
+			dst_mask,
+			table_entry.port_id);
+
+		/* For ACL, add needs an rte_table_acl_rule_add_params struct */
+		ret = rte_pipeline_table_entry_add(p, table_id, &rule_params,
+			&table_entry, &key_found, &entry_ptr);
+		if (ret < 0)
+			rte_panic("Unable to add entry to table %u (%d)\n",
+				table_id, ret);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Run-time */
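+	/* APP_FLUSH is a 2^N - 1 mask: flush every APP_FLUSH + 1 iterations */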
+#if APP_FLUSH == 0
+	for ( ; ; )
+		rte_pipeline_run(p);
+#else
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+#endif
+}
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
new file mode 100644
index 0000000..4598ad4
--- /dev/null
+++ b/app/test-pipeline/pipeline_hash.c
@@ -0,0 +1,487 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_hash.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+static void
+translate_options(uint32_t *special, uint32_t *ext, uint32_t *key_size)
+{
+	switch (app.pipeline_type) {
+	case e_APP_PIPELINE_HASH_KEY8_EXT:
+		*special = 0; *ext = 1; *key_size = 8; return;
+	case e_APP_PIPELINE_HASH_KEY8_LRU:
+		*special = 0; *ext = 0; *key_size = 8; return;
+	case e_APP_PIPELINE_HASH_KEY16_EXT:
+		*special = 0; *ext = 1; *key_size = 16; return;
+	case e_APP_PIPELINE_HASH_KEY16_LRU:
+		*special = 0; *ext = 0; *key_size = 16; return;
+	case e_APP_PIPELINE_HASH_KEY32_EXT:
+		*special = 0; *ext = 1; *key_size = 32; return;
+	case e_APP_PIPELINE_HASH_KEY32_LRU:
+		*special = 0; *ext = 0; *key_size = 32; return;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY8_EXT:
+		*special = 1; *ext = 1; *key_size = 8; return;
+	case e_APP_PIPELINE_HASH_SPEC_KEY8_LRU:
+		*special = 1; *ext = 0; *key_size = 8; return;
+	case e_APP_PIPELINE_HASH_SPEC_KEY16_EXT:
+		*special = 1; *ext = 1; *key_size = 16; return;
+	case e_APP_PIPELINE_HASH_SPEC_KEY16_LRU:
+		*special = 1; *ext = 0; *key_size = 16; return;
+	case e_APP_PIPELINE_HASH_SPEC_KEY32_EXT:
+		*special = 1; *ext = 1; *key_size = 32; return;
+	case e_APP_PIPELINE_HASH_SPEC_KEY32_LRU:
+		*special = 1; *ext = 0; *key_size = 32; return;
+
+	default:
+		rte_panic("Invalid hash table type or key size\n");
+	}
+}
+
+void
+app_main_loop_worker_pipeline_hash(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+	uint32_t special, ext, key_size;
+
+	translate_options(&special, &ext, &key_size);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing work "
+		"(pipeline with hash table, %s, %s, %d-byte key)\n",
+		rte_lcore_id(),
+		special ? "specialized" : "non-specialized",
+		ext ? "extendible bucket" : "LRU",
+		key_size);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.burst_size_worker_read,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings_tx[i],
+			.tx_burst_sz = app.burst_size_worker_write,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	switch (app.pipeline_type) {
+	case e_APP_PIPELINE_HASH_KEY8_EXT:
+	case e_APP_PIPELINE_HASH_KEY16_EXT:
+	case e_APP_PIPELINE_HASH_KEY32_EXT:
+	{
+		struct rte_table_hash_ext_params table_hash_params = {
+			.key_size = key_size,
+			.n_keys = 1 << 24,
+			.n_buckets = 1 << 22,
+			.n_buckets_ext = 1 << 21,
+			.f_hash = test_hash,
+			.seed = 0,
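+			/* Offsets below index the mbuf meta-data
+			 * filled in by the RX core */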
+			.signature_offset = 0,
+			.key_offset = 32,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_ext_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_KEY8_LRU:
+	case e_APP_PIPELINE_HASH_KEY16_LRU:
+	case e_APP_PIPELINE_HASH_KEY32_LRU:
+	{
+		struct rte_table_hash_lru_params table_hash_params = {
+			.key_size = key_size,
+			.n_keys = 1 << 24,
+			.n_buckets = 1 << 22,
+			.f_hash = test_hash,
+			.seed = 0,
+			.signature_offset = 0,
+			.key_offset = 32,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_lru_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY8_EXT:
+	{
+		struct rte_table_hash_key8_ext_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.n_entries_ext = 1 << 23,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key8_ext_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY8_LRU:
+	{
+		struct rte_table_hash_key8_lru_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key8_lru_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY16_EXT:
+	{
+		struct rte_table_hash_key16_ext_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.n_entries_ext = 1 << 23,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key16_ext_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table)\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY16_LRU:
+	{
+		struct rte_table_hash_key16_lru_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key16_lru_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY32_EXT:
+	{
+		struct rte_table_hash_key32_ext_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.n_entries_ext = 1 << 23,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key32_ext_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	case e_APP_PIPELINE_HASH_SPEC_KEY32_LRU:
+	{
+		struct rte_table_hash_key32_lru_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.signature_offset = 0,
+			.key_offset = 32,
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key32_lru_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+	break;
+
+	default:
+		rte_panic("Invalid hash table type or key size\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Add entries to tables */
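+	/* One entry per 24-bit key, spread round-robin over the output ports */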
+	for (i = 0; i < (1 << 24); i++) {
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i & (app.n_ports - 1)]},
+		};
+		struct rte_pipeline_table_entry *entry_ptr;
+		uint8_t key[32];
+		uint32_t *k32 = (uint32_t *) key;
+		int key_found, status;
+
+		memset(key, 0, sizeof(key));
+		k32[0] = rte_be_to_cpu_32(i);
+
+		status = rte_pipeline_table_entry_add(p, table_id, key, &entry,
+			&key_found, &entry_ptr);
+		if (status < 0)
+			rte_panic("Unable to add entry to table %u (%d)\n",
+				table_id, status);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Run-time */
+#if APP_FLUSH == 0
+	for ( ; ; )
+		rte_pipeline_run(p);
+#else
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+#endif
+}
+
+uint64_t test_hash(
+	void *key,
+	__attribute__((unused)) uint32_t key_size,
+	__attribute__((unused)) uint64_t seed)
+{
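+	/* Rotate the 32-bit IPv4 destination address right by 2 bits */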
+	uint32_t *k32 = (uint32_t *) key;
+	uint32_t ip_dst = rte_be_to_cpu_32(k32[0]);
+	uint64_t signature = (ip_dst >> 2) | ((ip_dst & 0x3) << 30);
+
+	return signature;
+}
+
+void
+app_main_loop_rx_metadata(void) {
+	uint32_t i, j;
+	int ret;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing RX (with meta-data)\n",
+		rte_lcore_id());
+
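+	/* Poll the ports round-robin; assumes n_ports is a power of 2 */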
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		uint16_t n_mbufs;
+
+		n_mbufs = rte_eth_rx_burst(
+			app.ports[i],
+			0,
+			app.mbuf_rx.array,
+			app.burst_size_rx_read);
+
+		if (n_mbufs == 0)
+			continue;
+
+		for (j = 0; j < n_mbufs; j++) {
+			struct rte_mbuf *m;
+			uint8_t *m_data, *key;
+			struct ipv4_hdr *ip_hdr;
+			struct ipv6_hdr *ipv6_hdr;
+			uint32_t ip_dst;
+			uint8_t *ipv6_dst;
+			uint32_t *signature, *k32;
+
+			m = app.mbuf_rx.array[j];
+			m_data = rte_pktmbuf_mtod(m, uint8_t *);
+			signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
+			key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
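+			/* Meta-data layout: 32-bit signature at offset 0,
+			 * lookup key at offset 32 */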
+
+			if (m->ol_flags & PKT_RX_IPV4_HDR) {
+				ip_hdr = (struct ipv4_hdr *)
+					&m_data[sizeof(struct ether_hdr)];
+				ip_dst = ip_hdr->dst_addr;
+
+				k32 = (uint32_t *) key;
+				k32[0] = ip_dst & 0xFFFFFF00;
+			} else {
+				ipv6_hdr = (struct ipv6_hdr *)
+					&m_data[sizeof(struct ether_hdr)];
+				ipv6_dst = ipv6_hdr->dst_addr;
+
+				memcpy(key, ipv6_dst, 16);
+			}
+
+			*signature = test_hash(key, 0, 0);
+		}
+
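+		/* Bulk enqueue is all-or-nothing: retry until it succeeds */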
+		do {
+			ret = rte_ring_sp_enqueue_bulk(
+				app.rings_rx[i],
+				(void **) app.mbuf_rx.array,
+				n_mbufs);
+		} while (ret < 0);
+	}
+}
diff --git a/app/test-pipeline/pipeline_lpm.c b/app/test-pipeline/pipeline_lpm.c
new file mode 100644
index 0000000..b1a2c13
--- /dev/null
+++ b/app/test-pipeline/pipeline_lpm.c
@@ -0,0 +1,196 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_lpm.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_worker_pipeline_lpm(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing work (pipeline with "
+		"LPM table)\n", rte_lcore_id());
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.burst_size_worker_read,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings_tx[i],
+			.tx_burst_sz = app.burst_size_worker_write,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	{
+		struct rte_table_lpm_params table_lpm_params = {
+			.n_rules = 1 << 24,
+			.entry_unique_size =
+				sizeof(struct rte_pipeline_table_entry),
+			.offset = 32,
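+			/* IPv4 address read from mbuf meta-data offset 32 */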
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_lpm_ops,
+			.arg_create = &table_lpm_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the LPM table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i & (app.n_ports - 1)]},
+		};
+
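+		/* Split the 0.0.0.0/8 space evenly across the output ports */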
+		struct rte_table_lpm_key key = {
+			.ip = i << (24 - __builtin_popcount(app.n_ports - 1)),
+			.depth = 8 + __builtin_popcount(app.n_ports - 1),
+		};
+
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		int key_found, status;
+
+		printf("Adding rule to LPM table (IPv4 destination = %"
+			PRIu32 ".%" PRIu32 ".%" PRIu32 ".%" PRIu32 "/%" PRIu8
+			" => port out = %" PRIu32 ")\n",
+			(key.ip & 0xFF000000) >> 24,
+			(key.ip & 0x00FF0000) >> 16,
+			(key.ip & 0x0000FF00) >> 8,
+			key.ip & 0x000000FF,
+			key.depth,
+			i);
+
+		status = rte_pipeline_table_entry_add(p, table_id, &key, &entry,
+			&key_found, &entry_ptr);
+		if (status < 0)
+			rte_panic("Unable to add entry to table %u (%d)\n",
+				table_id, status);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Run-time */
+#if APP_FLUSH == 0
+	for ( ; ; )
+		rte_pipeline_run(p);
+#else
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+#endif
+}
diff --git a/app/test-pipeline/pipeline_lpm_ipv6.c b/app/test-pipeline/pipeline_lpm_ipv6.c
new file mode 100644
index 0000000..3f24a2d
--- /dev/null
+++ b/app/test-pipeline/pipeline_lpm_ipv6.c
@@ -0,0 +1,200 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_lpm_ipv6.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_worker_pipeline_lpm_ipv6(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1,
+		"Core %u is doing work (pipeline with IPv6 LPM table)\n",
+		rte_lcore_id());
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.burst_size_worker_read,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings_tx[i],
+			.tx_burst_sz = app.burst_size_worker_write,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	{
+		struct rte_table_lpm_ipv6_params table_lpm_ipv6_params = {
+			.n_rules = 1 << 24,
+			.number_tbl8s = 1 << 21,
+			.entry_unique_size =
+				sizeof(struct rte_pipeline_table_entry),
+			.offset = 32,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_lpm_ipv6_ops,
+			.arg_create = &table_lpm_ipv6_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the IPv6 LPM table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i & (app.n_ports - 1)]},
+		};
+
+		struct rte_table_lpm_ipv6_key key;
+		struct rte_pipeline_table_entry *entry_ptr;
+		uint32_t ip;
+		int key_found, status;
+
+		memset(&key, 0, sizeof(key)); /* zero the unused key bytes */
+		key.depth = 8 + __builtin_popcount(app.n_ports - 1);
+
+		ip = rte_bswap32(i << (24 -
+			__builtin_popcount(app.n_ports - 1)));
+		memcpy(key.ip, &ip, sizeof(uint32_t));
+
+		printf("Adding rule to IPv6 LPM table (IPv6 destination = "
+			"%.2x%.2x:%.2x%.2x:%.2x%.2x:%.2x%.2x:"
+			"%.2x%.2x:%.2x%.2x:%.2x%.2x:%.2x%.2x/%u => "
+			"port out = %u)\n",
+			key.ip[0], key.ip[1], key.ip[2], key.ip[3],
+			key.ip[4], key.ip[5], key.ip[6], key.ip[7],
+			key.ip[8], key.ip[9], key.ip[10], key.ip[11],
+			key.ip[12], key.ip[13], key.ip[14], key.ip[15],
+			key.depth, i);
+
+		status = rte_pipeline_table_entry_add(p, table_id, &key, &entry,
+			&key_found, &entry_ptr);
+		if (status < 0)
+			rte_panic("Unable to add entry to table %u (%d)\n",
+				table_id, status);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Run-time */
+#if APP_FLUSH == 0
+	for ( ; ; )
+		rte_pipeline_run(p);
+#else
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+#endif
+}
diff --git a/app/test-pipeline/pipeline_stub.c b/app/test-pipeline/pipeline_stub.c
new file mode 100644
index 0000000..0ad6f9b
--- /dev/null
+++ b/app/test-pipeline/pipeline_stub.c
@@ -0,0 +1,165 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_log.h>
+#include <rte_port_ring.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_worker_pipeline_stub(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing work (pipeline with stub "
+		"tables)\n", rte_lcore_id());
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.burst_size_worker_read,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings_tx[i],
+			.tx_burst_sz = app.burst_size_worker_write,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i]))
+			rte_panic("Unable to configure table %u\n", i);
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+				table_id[i]))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id[i]);
+
+	/* Add entries to tables */
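+	/* Default action: send packets to the paired port (i XOR 1) */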
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i ^ 1]},
+		};
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i], &entry,
+			&default_entry_ptr))
+			rte_panic("Unable to add default entry to table %u\n",
+				table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Run-time */
+#if APP_FLUSH == 0
+	for ( ; ; )
+		rte_pipeline_run(p);
+#else
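+	/* Periodic flush: assuming APP_FLUSH is a 2^N - 1 mask, the
+	 * output ports are flushed once every (APP_FLUSH + 1) iterations */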
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+#endif
+}
diff --git a/app/test-pipeline/runtime.c b/app/test-pipeline/runtime.c
new file mode 100644
index 0000000..14b7998
--- /dev/null
+++ b/app/test-pipeline/runtime.c
@@ -0,0 +1,185 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>

+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+#include <rte_malloc.h>
+
+#include "main.h"
+
+void
+app_main_loop_rx(void) {
+	uint32_t i;
+	int ret;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing RX\n", rte_lcore_id());
+
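+	/* Round-robin over the RX ports; the wrap-around mask assumes
+	 * app.n_ports is a power of 2 */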
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		uint16_t n_mbufs;
+
+		n_mbufs = rte_eth_rx_burst(
+			app.ports[i],
+			0,
+			app.mbuf_rx.array,
+			app.burst_size_rx_read);
+
+		if (n_mbufs == 0)
+			continue;
+
+		do {
+			ret = rte_ring_sp_enqueue_bulk(
+				app.rings_rx[i],
+				(void **) app.mbuf_rx.array,
+				n_mbufs);
+		} while (ret < 0);
+	}
+}
+
+void
+app_main_loop_worker(void) {
+	struct app_mbuf_array *worker_mbuf;
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing work (no pipeline)\n",
+		rte_lcore_id());
+
+	worker_mbuf = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
+			CACHE_LINE_SIZE, rte_socket_id());
+	if (worker_mbuf == NULL)
+		rte_panic("Worker thread: cannot allocate buffer space\n");
+
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		int ret;
+
+		ret = rte_ring_sc_dequeue_bulk(
+			app.rings_rx[i],
+			(void **) worker_mbuf->array,
+			app.burst_size_worker_read);
+
+		if (ret == -ENOENT)
+			continue;
+
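+		/* Forward the burst to the TX ring of the paired port
+		 * (i XOR 1); ports are handled in pairs 0<->1, 2<->3, ... */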
+		do {
+			ret = rte_ring_sp_enqueue_bulk(
+				app.rings_tx[i ^ 1],
+				(void **) worker_mbuf->array,
+				app.burst_size_worker_write);
+		} while (ret < 0);
+	}
+}
+
+void
+app_main_loop_tx(void) {
+	uint32_t i;
+
+	RTE_LOG(INFO, USER1, "Core %u is doing TX\n", rte_lcore_id());
+
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		uint16_t n_mbufs, n_pkts;
+		int ret;
+
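+		/* Accumulate dequeued mbufs per port; TX is only attempted
+		 * once at least burst_size_tx_write packets are buffered */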
+		n_mbufs = app.mbuf_tx[i].n_mbufs;
+
+		ret = rte_ring_sc_dequeue_bulk(
+			app.rings_tx[i],
+			(void **) &app.mbuf_tx[i].array[n_mbufs],
+			app.burst_size_tx_read);
+
+		if (ret == -ENOENT)
+			continue;
+
+		n_mbufs += app.burst_size_tx_read;
+
+		if (n_mbufs < app.burst_size_tx_write) {
+			app.mbuf_tx[i].n_mbufs = n_mbufs;
+			continue;
+		}
+
+		n_pkts = rte_eth_tx_burst(
+			app.ports[i],
+			0,
+			app.mbuf_tx[i].array,
+			n_mbufs);
+
+		if (n_pkts < n_mbufs) {
+			uint16_t k;
+
+			for (k = n_pkts; k < n_mbufs; k++) {
+				struct rte_mbuf *pkt_to_free;
+
+				pkt_to_free = app.mbuf_tx[i].array[k];
+				rte_pktmbuf_free(pkt_to_free);
+			}
+		}
+
+		app.mbuf_tx[i].n_mbufs = 0;
+	}
+}
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 55a1a26..af8a689 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -320,3 +320,8 @@ CONFIG_RTE_LIBRTE_TABLE=y
 # Compile librte_pipeline
 #
 CONFIG_RTE_LIBRTE_PIPELINE=y
+
+#
+# Compile the pipeline test application
+#
+CONFIG_RTE_TEST_PIPELINE=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 445a594..5395339 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -356,3 +356,8 @@ CONFIG_RTE_LIBRTE_TABLE=y
 # Compile librte_pipeline
 #
 CONFIG_RTE_LIBRTE_PIPELINE=y
+
+#
+# Compile the pipeline test application
+#
+CONFIG_RTE_TEST_PIPELINE=y
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (20 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 21/23] Packet Framework performance application Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-09  9:11   ` Olivier MATZ
  2014-06-04 18:08 ` [dpdk-dev] [v2 23/23] Packet Framework unit tests Cristian Dumitrescu
                   ` (3 subsequent siblings)
  25 siblings, 1 reply; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

This Packet Framework sample application illustrates the capabilities of the Intel DPDK Packet Framework toolbox.

It creates the functional blocks used by a typical IPv4 framework, such as flow classification, firewall and routing.

CPU cores are connected together through standard interfaces built on SW rings, with each CPU core running a separate pipeline instance.

Please refer to the Intel DPDK Sample Apps Guide for a full description.
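As an illustration of these inter-core interfaces, here is a minimal sketch of a SW ring link between two pipeline cores (the ring name, size and burst of 32 are assumptions made for the example; the actual setup is performed in init.c):

	/* Illustrative only: the ring name and size are assumptions */
	struct rte_mbuf *mbufs[32];
	struct rte_ring *r = rte_ring_create("core0_to_core1", 1024,
		rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);

	/* Producer core pushes a burst of mbufs ... */
	rte_ring_sp_enqueue_bulk(r, (void **) mbufs, 32);

	/* ... and the consumer core pulls the same burst on its side */
	rte_ring_sc_dequeue_bulk(r, (void **) mbufs, 32);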

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 examples/ip_pipeline/Makefile                      |   67 +
 examples/ip_pipeline/cmdline.c                     | 1976 ++++++++++++++++++++
 examples/ip_pipeline/config.c                      |  420 +++++
 examples/ip_pipeline/init.c                        |  614 ++++++
 examples/ip_pipeline/ip_pipeline.cfg               |   56 +
 examples/ip_pipeline/ip_pipeline.sh                |   18 +
 examples/ip_pipeline/main.c                        |  171 ++
 examples/ip_pipeline/main.h                        |  306 +++
 examples/ip_pipeline/pipeline_firewall.c           |  313 ++++
 .../ip_pipeline/pipeline_flow_classification.c     |  306 +++
 examples/ip_pipeline/pipeline_ipv4_frag.c          |  184 ++
 examples/ip_pipeline/pipeline_ipv4_ras.c           |  181 ++
 examples/ip_pipeline/pipeline_passthrough.c        |  213 +++
 examples/ip_pipeline/pipeline_routing.c            |  474 +++++
 examples/ip_pipeline/pipeline_rx.c                 |  385 ++++
 examples/ip_pipeline/pipeline_tx.c                 |  283 +++
 16 files changed, 5967 insertions(+), 0 deletions(-)
 create mode 100644 examples/ip_pipeline/Makefile
 create mode 100644 examples/ip_pipeline/cmdline.c
 create mode 100644 examples/ip_pipeline/config.c
 create mode 100644 examples/ip_pipeline/init.c
 create mode 100644 examples/ip_pipeline/ip_pipeline.cfg
 create mode 100644 examples/ip_pipeline/ip_pipeline.sh
 create mode 100644 examples/ip_pipeline/main.c
 create mode 100644 examples/ip_pipeline/main.h
 create mode 100644 examples/ip_pipeline/pipeline_firewall.c
 create mode 100644 examples/ip_pipeline/pipeline_flow_classification.c
 create mode 100644 examples/ip_pipeline/pipeline_ipv4_frag.c
 create mode 100644 examples/ip_pipeline/pipeline_ipv4_ras.c
 create mode 100644 examples/ip_pipeline/pipeline_passthrough.c
 create mode 100644 examples/ip_pipeline/pipeline_routing.c
 create mode 100644 examples/ip_pipeline/pipeline_rx.c
 create mode 100644 examples/ip_pipeline/pipeline_tx.c

diff --git a/examples/ip_pipeline/Makefile b/examples/ip_pipeline/Makefile
new file mode 100644
index 0000000..e5aecdf
--- /dev/null
+++ b/examples/ip_pipeline/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-default-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = ip_pipeline
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) := main.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += config.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += init.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += cmdline.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_rx.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_tx.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_flow_classification.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_routing.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_passthrough.c
+
+ifeq ($(CONFIG_RTE_MBUF_SCATTER_GATHER),y)
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_ipv4_frag.c
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_ipv4_ras.c
+endif
+
+ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
+SRCS-$(CONFIG_RTE_TEST_PIPELINE) += pipeline_firewall.c
+endif
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/ip_pipeline/cmdline.c b/examples/ip_pipeline/cmdline.c
new file mode 100644
index 0000000..e10a0cf
--- /dev/null
+++ b/examples/ip_pipeline/cmdline.c
@@ -0,0 +1,1976 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <termios.h>
+#include <inttypes.h>
+#include <string.h>
+#include <netinet/in.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#include <rte_ether.h>
+#include <rte_byteorder.h>
+#include <rte_ring.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+#include <cmdline_rdline.h>
+#include <cmdline_parse.h>
+#include <cmdline_parse_num.h>
+#include <cmdline_parse_string.h>
+#include <cmdline_parse_ipaddr.h>
+#include <cmdline_parse_etheraddr.h>
+#include <cmdline_socket.h>
+#include <cmdline.h>
+
+#include "main.h"
+
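+/* Linear search of a rule table (TAILQ): sets (res) to the matching
+ * rule, or NULL when the key is not present */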
+#define IS_RULE_PRESENT(res, rule_key, table, type)			\
+do {									\
+	struct app_rule *it;						\
+									\
+	(res) = NULL;							\
+	TAILQ_FOREACH(it, &table, entries) {				\
+		if (memcmp(&rule_key, &it->type.key, sizeof(rule_key)) == 0) {\
+			(res) = it;					\
+			break;						\
+		}							\
+	}								\
+} while (0)
+
+/* Rules */
+static void
+app_init_rule_tables(void);
+
+TAILQ_HEAD(linked_list, app_rule) arp_table, routing_table, firewall_table,
+	flow_table;
+
+uint32_t n_arp_rules;
+uint32_t n_routing_rules;
+uint32_t n_firewall_rules;
+uint32_t n_flow_rules;
+
+struct app_arp_rule {
+	struct {
+		uint8_t out_iface;
+		uint32_t nh_ip;
+	} key;
+
+	struct ether_addr nh_arp;
+};
+
+struct app_routing_rule {
+	struct {
+		uint32_t ip;
+		uint8_t depth;
+	} key;
+
+	uint8_t port;
+	uint32_t nh_ip;
+};
+
+struct app_firewall_rule {
+	struct {
+		uint32_t src_ip;
+		uint32_t src_ip_mask;
+		uint32_t dst_ip;
+		uint32_t dst_ip_mask;
+		uint16_t src_port_from;
+		uint16_t src_port_to;
+		uint16_t dst_port_from;
+		uint16_t dst_port_to;
+		uint8_t proto;
+		uint8_t proto_mask;
+	} key;
+
+	int32_t priority;
+	uint8_t port;
+};
+
+struct app_flow_rule {
+	struct {
+		uint32_t src_ip;
+		uint32_t dst_ip;
+		uint16_t src_port;
+		uint16_t dst_port;
+		uint8_t proto;
+	} key;
+
+	uint8_t port;
+};
+
+struct app_rule {
+	union {
+		struct app_arp_rule arp;
+		struct app_routing_rule routing;
+		struct app_firewall_rule firewall;
+		struct app_flow_rule flow;
+	};
+
+	TAILQ_ENTRY(app_rule) entries;
+};
+
+/* Initialization */
+static void
+app_init_rule_tables(void)
+{
+	TAILQ_INIT(&arp_table);
+	TAILQ_INIT(&routing_table);
+	TAILQ_INIT(&firewall_table);
+	TAILQ_INIT(&flow_table);
+
+	n_arp_rules = 0;
+	n_routing_rules = 0;
+	n_firewall_rules = 0;
+	n_flow_rules = 0;
+}
+
+/* Printing */
+static void
+print_arp_rule(struct app_arp_rule rule)
+{
+	printf("(Iface = %u, Address = %u.%u.%u.%u) => "
+		"HWaddress = %02x:%02x:%02x:%02x:%02x:%02x\n",
+		rule.key.out_iface,
+		(rule.key.nh_ip >> 24) & 0xFF,
+		(rule.key.nh_ip >> 16) & 0xFF,
+		(rule.key.nh_ip >> 8) & 0xFF,
+		rule.key.nh_ip & 0xFF,
+
+		rule.nh_arp.addr_bytes[0],
+		rule.nh_arp.addr_bytes[1],
+		rule.nh_arp.addr_bytes[2],
+		rule.nh_arp.addr_bytes[3],
+		rule.nh_arp.addr_bytes[4],
+		rule.nh_arp.addr_bytes[5]);
+}
+
+static void
+print_routing_rule(struct app_routing_rule rule)
+{
+	printf("IP Prefix = %u.%u.%u.%u/%u => "
+		"(Iface = %u, Gateway = %u.%u.%u.%u)\n",
+		(rule.key.ip >> 24) & 0xFF,
+		(rule.key.ip >> 16) & 0xFF,
+		(rule.key.ip >> 8) & 0xFF,
+		rule.key.ip & 0xFF,
+
+		rule.key.depth,
+		rule.port,
+
+		(rule.nh_ip >> 24) & 0xFF,
+		(rule.nh_ip >> 16) & 0xFF,
+		(rule.nh_ip >> 8) & 0xFF,
+		rule.nh_ip & 0xFF);
+}
+
+#ifdef RTE_LIBRTE_ACL
+
+static void
+print_firewall_rule(struct app_firewall_rule rule)
+{
+	printf("Priority %d: (IP Src = %u.%u.%u.%u/%u, "
+		"IP Dst = %u.%u.%u.%u/%u, "
+		"Port Src = %u-%u, Port Dst = %u-%u, Proto = %u (%u)) => "
+		"Port = %u\n",
+		rule.priority,
+
+		(rule.key.src_ip >> 24) & 0xFF,
+		(rule.key.src_ip >> 16) & 0xFF,
+		(rule.key.src_ip >> 8) & 0xFF,
+		rule.key.src_ip & 0xFF,
+		rule.key.src_ip_mask,
+
+		(rule.key.dst_ip >> 24) & 0xFF,
+		(rule.key.dst_ip >> 16) & 0xFF,
+		(rule.key.dst_ip >> 8) & 0xFF,
+		rule.key.dst_ip & 0xFF,
+		rule.key.dst_ip_mask,
+
+		rule.key.src_port_from,
+		rule.key.src_port_to,
+		rule.key.dst_port_from,
+		rule.key.dst_port_to,
+		rule.key.proto,
+		rule.key.proto_mask,
+		rule.port);
+}
+
+#endif
+
+static void
+print_flow_rule(struct app_flow_rule rule)
+{
+	printf("(IP Src = %u.%u.%u.%u, IP Dst = %u.%u.%u.%u, Port Src = %u, "
+		"Port Dst = %u, Proto = %u) => Port = %u\n",
+		(rule.key.src_ip >> 24) & 0xFF,
+		(rule.key.src_ip >> 16) & 0xFF,
+		(rule.key.src_ip >> 8) & 0xFF,
+		rule.key.src_ip & 0xFF,
+
+		(rule.key.dst_ip >> 24) & 0xFF,
+		(rule.key.dst_ip >> 16) & 0xFF,
+		(rule.key.dst_ip >> 8) & 0xFF,
+		rule.key.dst_ip & 0xFF,
+
+		rule.key.src_port,
+		rule.key.dst_port,
+		(uint32_t) rule.key.proto,
+		rule.port);
+}
+
+/* Commands */
+
+/* *** Run file (script) *** */
+struct cmd_run_file_result {
+	cmdline_fixed_string_t run_string;
+	char file_path[100];
+};
+
+static void
+cmd_run_file_parsed(
+	void *parsed_result,
+	struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_run_file_result *params = parsed_result;
+	struct cmdline *file_cl;
+	int fd;
+
+	/* Check params */
+	if (params->file_path[0] == '\0') {
+		printf("Illegal value for file path (%s)\n", params->file_path);
+		return;
+	}
+
+	fd = open(params->file_path, O_RDONLY, 0);
+	if (fd < 0) {
+		printf("Illegal value for file path (%s)\n", params->file_path);
+		return;
+	}
+
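+	/* Replay the script: parse the file through a temporary cmdline
+	 * instance (input = fd, output = stdout) */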
+	file_cl = cmdline_new(cl->ctx, "", fd, 1);
+	cmdline_interact(file_cl);
+	close(fd);
+}
+
+cmdline_parse_token_string_t cmd_run_file_run_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_run_file_result, run_string, "run");
+
+cmdline_parse_token_string_t cmd_run_file_file_path =
+	TOKEN_STRING_INITIALIZER(struct cmd_run_file_result, file_path, NULL);
+
+cmdline_parse_inst_t cmd_run_file = {
+	.f = cmd_run_file_parsed,
+	.data = NULL,
+	.help_str = "Run commands from file",
+	.tokens = {
+		(void *)&cmd_run_file_run_string,
+		(void *)&cmd_run_file_file_path,
+		NULL,
+	},
+};
+
+/* *** Link - Enable *** */
+struct cmd_link_enable_result {
+	cmdline_fixed_string_t link_string;
+	uint8_t port;
+	cmdline_fixed_string_t up_string;
+};
+
+static void
+cmd_link_enable_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_link_enable_result *params = parsed_result;
+	void *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RX);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("RX not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->port >= app.n_ports) {
+		printf("Illegal value for port parameter (%u)\n", params->port);
+		return;
+	}
+
+	printf("Enabling port %d\n", params->port);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_RX_PORT_ENABLE;
+	req->rx_up.port = params->port;
+
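+	/* CLI <-> RX core handshake: retry the enqueue while the request
+	 * ring is full (-ENOBUFS), then busy-wait for the response mbuf */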
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request LINK_UP failed (%u)\n", resp->result);
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free(msg);
+}
+
+cmdline_parse_token_string_t cmd_link_enable_link_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_link_enable_result, link_string,
+	"link");
+
+cmdline_parse_token_num_t cmd_link_enable_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_link_enable_result, port, UINT8);
+
+cmdline_parse_token_string_t cmd_link_enable_up_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_link_enable_result, up_string,
+	"up");
+
+cmdline_parse_inst_t cmd_link_enable = {
+	.f = cmd_link_enable_parsed,
+	.data = NULL,
+	.help_str = "Link up",
+	.tokens = {
+		(void *)&cmd_link_enable_link_string,
+		(void *)&cmd_link_enable_port,
+		(void *)&cmd_link_enable_up_string,
+		NULL,
+	},
+};
+
+/* *** Link - Disable *** */
+struct cmd_link_disable_result {
+	cmdline_fixed_string_t link_string;
+	uint8_t port;
+	cmdline_fixed_string_t down_string;
+};
+
+static void
+cmd_link_disable_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_link_disable_result *params = parsed_result;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RX);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("RX not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->port >= app.n_ports) {
+		printf("Illegal value for port parameter (%u)\n", params->port);
+		return;
+	}
+
+	printf("Disabling port %d\n", params->port);
+
+	/* Allocate message buffer */
+	msg = rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_RX_PORT_DISABLE;
+	req->rx_down.port = params->port;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request LINK_DOWN failed (%u)\n", resp->result);
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *)msg);
+}
+
+cmdline_parse_token_string_t cmd_link_disable_link_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_link_disable_result, link_string,
+	"link");
+
+cmdline_parse_token_num_t cmd_link_disable_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_link_disable_result, port, UINT8);
+
+cmdline_parse_token_string_t cmd_link_disable_down_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_link_disable_result, down_string,
+	"down");
+
+cmdline_parse_inst_t cmd_link_disable = {
+	.f = cmd_link_disable_parsed,
+	.data = NULL,
+	.help_str = "Link down",
+	.tokens = {
+		(void *)&cmd_link_disable_link_string,
+		(void *)&cmd_link_disable_port,
+		(void *)&cmd_link_disable_down_string,
+		NULL,
+	},
+};
+
+
+/* *** ARP - Add *** */
+struct cmd_arp_add_result {
+	cmdline_fixed_string_t arp_string;
+	cmdline_fixed_string_t add_string;
+	uint8_t out_iface;
+	cmdline_ipaddr_t nh_ip;
+	struct ether_addr nh_arp;
+};
+
+static void
+cmd_arp_add_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_arp_add_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RT);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("ARP not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->out_iface >= app.n_ports) {
+		printf("Illegal value for output interface parameter (%u)\n",
+			params->out_iface);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.arp.key.out_iface = params->out_iface;
+	rule.arp.key.nh_ip =
+		rte_bswap32((uint32_t) params->nh_ip.addr.ipv4.s_addr);
+	rule.arp.nh_arp = params->nh_arp;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.arp.key, arp_table, arp);
+	if ((old_rule == NULL) && (n_arp_rules == app.max_arp_rules)) {
+		printf("ARP table is full.\n");
+		return;
+	}
+
+	printf("Adding ARP entry: ");
+	print_arp_rule(rule.arp);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_ARP_ADD;
+	req->arp_add.out_iface = rule.arp.key.out_iface;
+	req->arp_add.nh_ip = rule.arp.key.nh_ip;
+	req->arp_add.nh_arp = rule.arp.nh_arp;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request ARP_ADD failed (%u)\n", resp->result);
+	else {
+		if (old_rule == NULL) {
+			struct app_rule *new_rule = (struct app_rule *)
+				rte_zmalloc_socket("CLI",
+				sizeof(struct app_rule),
+				CACHE_LINE_SIZE,
+				rte_socket_id());
+
+			if (new_rule == NULL)
+				rte_panic("Unable to allocate new rule\n");
+
+			memcpy(new_rule, &rule, sizeof(rule));
+			TAILQ_INSERT_TAIL(&arp_table, new_rule, entries);
+			n_arp_rules++;
+		} else
+			old_rule->arp.nh_arp = rule.arp.nh_arp;
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *) msg);
+}
+
+cmdline_parse_token_string_t cmd_arp_add_arp_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_add_result, arp_string, "arp");
+
+cmdline_parse_token_string_t cmd_arp_add_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_add_result, add_string, "add");
+
+cmdline_parse_token_num_t cmd_arp_add_out_iface =
+	TOKEN_NUM_INITIALIZER(struct cmd_arp_add_result, out_iface, UINT8);
+
+cmdline_parse_token_ipaddr_t cmd_arp_add_nh_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_arp_add_result, nh_ip);
+
+cmdline_parse_token_etheraddr_t cmd_arp_add_nh_arp =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_arp_add_result, nh_arp);
+
+cmdline_parse_inst_t cmd_arp_add = {
+	.f = cmd_arp_add_parsed,
+	.data = NULL,
+	.help_str = "ARP add",
+	.tokens = {
+		(void *)&cmd_arp_add_arp_string,
+		(void *)&cmd_arp_add_add_string,
+		(void *)&cmd_arp_add_out_iface,
+		(void *)&cmd_arp_add_nh_ip,
+		(void *)&cmd_arp_add_nh_arp,
+		NULL,
+	},
+};
+
+/* *** ARP - Del *** */
+struct cmd_arp_del_result {
+	cmdline_fixed_string_t arp_string;
+	cmdline_fixed_string_t del_string;
+	uint8_t out_iface;
+	cmdline_ipaddr_t nh_ip;
+};
+
+static void
+cmd_arp_del_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_arp_del_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RT);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("ARP not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->out_iface >= app.n_ports) {
+		printf("Illegal value for output interface parameter (%u)\n",
+			params->out_iface);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.arp.key.out_iface = params->out_iface;
+	rule.arp.key.nh_ip =
+		rte_bswap32((uint32_t) params->nh_ip.addr.ipv4.s_addr);
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.arp.key, arp_table, arp);
+	if (old_rule == NULL)
+		return;
+
+	printf("Deleting ARP entry: ");
+	print_arp_rule(old_rule->arp);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_ARP_DEL;
+	req->arp_del.out_iface = rule.arp.key.out_iface;
+	req->arp_del.nh_ip = rule.arp.key.nh_ip;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request ARP_DEL failed (%u)\n", resp->result);
+	else {
+		TAILQ_REMOVE(&arp_table, old_rule, entries);
+		n_arp_rules--;
+		rte_free(old_rule);
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *) msg);
+}
+
+cmdline_parse_token_string_t cmd_arp_del_arp_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_del_result, arp_string, "arp");
+
+cmdline_parse_token_string_t cmd_arp_del_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_del_result, del_string, "del");
+
+cmdline_parse_token_num_t cmd_arp_del_out_iface =
+	TOKEN_NUM_INITIALIZER(struct cmd_arp_del_result, out_iface, UINT8);
+
+cmdline_parse_token_ipaddr_t cmd_arp_del_nh_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_arp_del_result, nh_ip);
+
+cmdline_parse_inst_t cmd_arp_del = {
+	.f = cmd_arp_del_parsed,
+	.data = NULL,
+	.help_str = "ARP delete",
+	.tokens = {
+		(void *)&cmd_arp_del_arp_string,
+		(void *)&cmd_arp_del_del_string,
+		(void *)&cmd_arp_del_out_iface,
+		(void *)&cmd_arp_del_nh_ip,
+		NULL,
+	},
+};
+
+/* *** ARP - Print *** */
+struct cmd_arp_print_result {
+	cmdline_fixed_string_t arp_string;
+	cmdline_fixed_string_t print_string;
+};
+
+static void
+cmd_arp_print_parsed(
+	__attribute__((unused)) void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct app_rule *it;
+
+	TAILQ_FOREACH(it, &arp_table, entries) {
+		print_arp_rule(it->arp);
+	}
+}
+
+cmdline_parse_token_string_t cmd_arp_print_arp_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_print_result, arp_string,
+	"arp");
+
+cmdline_parse_token_string_t cmd_arp_print_print_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_arp_print_result, print_string,
+	"ls");
+
+cmdline_parse_inst_t cmd_arp_print = {
+	.f = cmd_arp_print_parsed,
+	.data = NULL,
+	.help_str = "ARP list",
+	.tokens = {
+		(void *)&cmd_arp_print_arp_string,
+		(void *)&cmd_arp_print_print_string,
+		NULL,
+	},
+};
+
+/* *** Routing - Add *** */
+struct cmd_route_add_result {
+	cmdline_fixed_string_t route_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_ipaddr_t ip;
+	uint8_t depth;
+	uint8_t port;
+	cmdline_ipaddr_t nh_ip;
+};
+
+static void
+cmd_route_add_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_route_add_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RT);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Routing not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if ((params->depth == 0) || (params->depth > 32)) {
+		printf("Illegal value for depth parameter (%u)\n",
+			params->depth);
+		return;
+	}
+
+	if (params->port >= app.n_ports) {
+		printf("Illegal value for port parameter (%u)\n", params->port);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.routing.key.ip = rte_bswap32((uint32_t)
+		params->ip.addr.ipv4.s_addr);
+	rule.routing.key.depth = params->depth;
+	rule.routing.port = params->port;
+	rule.routing.nh_ip =
+		rte_bswap32((uint32_t) params->nh_ip.addr.ipv4.s_addr);
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.routing.key, routing_table, routing);
+	if ((old_rule == NULL) && (n_routing_rules == app.max_routing_rules)) {
+		printf("Routing table is full.\n");
+		return;
+	}
+
+	printf("Adding route: ");
+	print_routing_rule(rule.routing);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_RT_ADD;
+	req->routing_add.ip = rule.routing.key.ip;
+	req->routing_add.depth = rule.routing.key.depth;
+	req->routing_add.port = rule.routing.port;
+	req->routing_add.nh_ip = rule.routing.nh_ip;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request ROUTE_ADD failed (%u)\n", resp->result);
+	else {
+		if (old_rule == NULL) {
+			struct app_rule *new_rule = (struct app_rule *)
+				rte_zmalloc_socket("CLI",
+				sizeof(struct app_rule),
+				CACHE_LINE_SIZE,
+				rte_socket_id());
+
+			if (new_rule == NULL)
+				rte_panic("Unable to allocate new rule\n");
+
+			memcpy(new_rule, &rule, sizeof(rule));
+			TAILQ_INSERT_TAIL(&routing_table, new_rule, entries);
+			n_routing_rules++;
+		} else {
+			old_rule->routing.port = rule.routing.port;
+			old_rule->routing.nh_ip = rule.routing.nh_ip;
+		}
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *) msg);
+}
+
+cmdline_parse_token_string_t cmd_route_add_route_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_route_add_result, route_string,
+	"route");
+
+cmdline_parse_token_string_t cmd_route_add_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_route_add_result, add_string,
+	"add");
+
+cmdline_parse_token_ipaddr_t cmd_route_add_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_route_add_result, ip);
+
+cmdline_parse_token_num_t cmd_route_add_depth =
+	TOKEN_NUM_INITIALIZER(struct cmd_route_add_result, depth, UINT8);
+
+cmdline_parse_token_num_t cmd_route_add_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_route_add_result, port, UINT8);
+
+cmdline_parse_token_ipaddr_t cmd_route_add_nh_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_route_add_result, nh_ip);
+
+cmdline_parse_inst_t cmd_route_add = {
+	.f = cmd_route_add_parsed,
+	.data = NULL,
+	.help_str = "Route add",
+	.tokens = {
+		(void *)&cmd_route_add_route_string,
+		(void *)&cmd_route_add_add_string,
+		(void *)&cmd_route_add_ip,
+		(void *)&cmd_route_add_depth,
+		(void *)&cmd_route_add_port,
+		(void *)&cmd_route_add_nh_ip,
+		NULL,
+	},
+};
+
+/* *** Routing - Del *** */
+struct cmd_route_del_result {
+	cmdline_fixed_string_t route_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_ipaddr_t ip;
+	uint8_t depth;
+};
+
+static void
+cmd_route_del_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_route_del_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_RT);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Routing not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if ((params->depth == 0) || (params->depth > 32)) {
+		printf("Illegal value for depth parameter (%u)\n",
+			params->depth);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.routing.key.ip = rte_bswap32((uint32_t)
+		params->ip.addr.ipv4.s_addr);
+	rule.routing.key.depth = params->depth;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.routing.key, routing_table, routing);
+	if (old_rule == NULL)
+		return;
+
+	printf("Deleting route: ");
+	print_routing_rule(old_rule->routing);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	req->type = APP_MSG_REQ_RT_DEL;
+	req->routing_del.ip = rule.routing.key.ip;
+	req->routing_del.depth = rule.routing.key.depth;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request ROUTE_DEL failed (%u)\n", resp->result);
+	else {
+		TAILQ_REMOVE(&routing_table, old_rule, entries);
+		rte_free(old_rule);
+		n_routing_rules--;
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *)msg);
+}
+
+cmdline_parse_token_string_t cmd_route_del_route_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_route_del_result, route_string,
+	"route");
+
+cmdline_parse_token_string_t cmd_route_del_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_route_del_result, del_string,
+	"del");
+
+cmdline_parse_token_ipaddr_t cmd_route_del_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_route_del_result, ip);
+
+cmdline_parse_token_num_t cmd_route_del_depth =
+	TOKEN_NUM_INITIALIZER(struct cmd_route_del_result, depth, UINT8);
+
+cmdline_parse_inst_t cmd_route_del = {
+	.f = cmd_route_del_parsed,
+	.data = NULL,
+	.help_str = "Route delete",
+	.tokens = {
+		(void *)&cmd_route_del_route_string,
+		(void *)&cmd_route_del_del_string,
+		(void *)&cmd_route_del_ip,
+		(void *)&cmd_route_del_depth,
+		NULL,
+	},
+};
+
+/* *** Routing - Print *** */
+struct cmd_routing_print_result {
+	cmdline_fixed_string_t routing_string;
+	cmdline_fixed_string_t print_string;
+};
+
+static void
+cmd_routing_print_parsed(
+	__attribute__((unused)) void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct app_rule *it;
+
+	TAILQ_FOREACH(it, &routing_table, entries) {
+		print_routing_rule(it->routing);
+	}
+}
+
+cmdline_parse_token_string_t cmd_routing_print_routing_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_routing_print_result,
+	routing_string, "route");
+
+cmdline_parse_token_string_t cmd_routing_print_print_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_routing_print_result, print_string,
+	"ls");
+
+cmdline_parse_inst_t cmd_routing_print = {
+	.f = cmd_routing_print_parsed,
+	.data = NULL,
+	.help_str = "Route list",
+	.tokens = {
+		(void *)&cmd_routing_print_routing_string,
+		(void *)&cmd_routing_print_print_string,
+		NULL,
+	},
+};
+
+#ifdef RTE_LIBRTE_ACL
+
+/* *** Firewall - Add *** */
+struct cmd_firewall_add_result {
+	cmdline_fixed_string_t firewall_string;
+	cmdline_fixed_string_t add_string;
+	int32_t priority;
+	cmdline_ipaddr_t src_ip;
+	uint32_t src_ip_mask;
+	cmdline_ipaddr_t dst_ip;
+	uint32_t dst_ip_mask;
+	uint16_t src_port_from;
+	uint16_t src_port_to;
+	uint16_t dst_port_from;
+	uint16_t dst_port_to;
+	uint8_t proto;
+	uint8_t proto_mask;
+	uint8_t port;
+};
+
+static void
+cmd_firewall_add_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_firewall_add_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct rte_mbuf *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_FW);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Firewall not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->port >= app.n_ports) {
+		printf("Illegal value for port parameter (%u)\n", params->port);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.firewall.priority = params->priority;
+	rule.firewall.key.src_ip =
+		rte_bswap32((uint32_t)params->src_ip.addr.ipv4.s_addr);
+	rule.firewall.key.src_ip_mask = params->src_ip_mask;
+	rule.firewall.key.dst_ip =
+		rte_bswap32((uint32_t)params->dst_ip.addr.ipv4.s_addr);
+	rule.firewall.key.dst_ip_mask = params->dst_ip_mask;
+	rule.firewall.key.src_port_from = params->src_port_from;
+	rule.firewall.key.src_port_to = params->src_port_to;
+	rule.firewall.key.dst_port_from = params->dst_port_from;
+	rule.firewall.key.dst_port_to = params->dst_port_to;
+	rule.firewall.key.proto = params->proto;
+	rule.firewall.key.proto_mask = params->proto_mask;
+	rule.firewall.port = params->port;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.firewall.key, firewall_table, firewall);
+	if ((old_rule == NULL) &&
+		(n_firewall_rules == app.max_firewall_rules)) {
+		printf("Firewall table is full.\n");
+		return;
+	}
+
+	printf("Adding firewall rule: ");
+	print_firewall_rule(rule.firewall);
+
+	/* Allocate message buffer */
+	msg = rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) msg->ctrl.data;
+	req->type = APP_MSG_REQ_FW_ADD;
+	req->firewall_add.add_params.priority = rule.firewall.priority;
+	req->firewall_add.add_params.field_value[1].value.u32 =
+		rule.firewall.key.src_ip;
+	req->firewall_add.add_params.field_value[1].mask_range.u32 =
+		rule.firewall.key.src_ip_mask;
+	req->firewall_add.add_params.field_value[2].value.u32 =
+		rule.firewall.key.dst_ip;
+	req->firewall_add.add_params.field_value[2].mask_range.u32 =
+		rule.firewall.key.dst_ip_mask;
+	req->firewall_add.add_params.field_value[3].value.u16 =
+		rule.firewall.key.src_port_from;
+	req->firewall_add.add_params.field_value[3].mask_range.u16 =
+		rule.firewall.key.src_port_to;
+	req->firewall_add.add_params.field_value[4].value.u16 =
+		rule.firewall.key.dst_port_from;
+	req->firewall_add.add_params.field_value[4].mask_range.u16 =
+		rule.firewall.key.dst_port_to;
+	req->firewall_add.add_params.field_value[0].value.u8 =
+		rule.firewall.key.proto;
+	req->firewall_add.add_params.field_value[0].mask_range.u8 =
+		rule.firewall.key.proto_mask;
+	req->firewall_add.port = rule.firewall.port;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, (void *) msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, (void **) &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) msg->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request FIREWALL_ADD failed (%u)\n", resp->result);
+	else {
+		if (old_rule == NULL) {
+			struct app_rule *new_rule = (struct app_rule *)
+				rte_zmalloc_socket("CLI",
+				sizeof(struct app_rule),
+				CACHE_LINE_SIZE,
+				rte_socket_id());
+
+			if (new_rule == NULL)
+				rte_panic("Unable to allocate new rule\n");
+
+			memcpy(new_rule, &rule, sizeof(rule));
+			TAILQ_INSERT_TAIL(&firewall_table, new_rule, entries);
+			n_firewall_rules++;
+		} else {
+			old_rule->firewall.priority = rule.firewall.priority;
+			old_rule->firewall.port = rule.firewall.port;
+		}
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free(msg);
+}
+
+cmdline_parse_token_string_t cmd_firewall_add_firewall_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_add_result,
+	firewall_string, "firewall");
+
+cmdline_parse_token_string_t cmd_firewall_add_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_add_result, add_string,
+	"add");
+
+cmdline_parse_token_num_t cmd_firewall_add_priority =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, priority, INT32);
+
+cmdline_parse_token_ipaddr_t cmd_firewall_add_src_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_firewall_add_result, src_ip);
+cmdline_parse_token_num_t cmd_firewall_add_src_ip_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, src_ip_mask,
+	UINT32);
+
+cmdline_parse_token_ipaddr_t cmd_firewall_add_dst_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_firewall_add_result, dst_ip);
+cmdline_parse_token_num_t cmd_firewall_add_dst_ip_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, dst_ip_mask,
+	UINT32);
+
+cmdline_parse_token_num_t cmd_firewall_add_src_port_from =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, src_port_from,
+	UINT16);
+cmdline_parse_token_num_t cmd_firewall_add_src_port_to =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, src_port_to,
+	UINT16);
+
+cmdline_parse_token_num_t cmd_firewall_add_dst_port_from =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, dst_port_from,
+	UINT16);
+cmdline_parse_token_num_t cmd_firewall_add_dst_port_to =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, dst_port_to,
+	UINT16);
+
+cmdline_parse_token_num_t cmd_firewall_add_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, proto, UINT8);
+cmdline_parse_token_num_t cmd_firewall_add_proto_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, proto_mask,
+	UINT8);
+cmdline_parse_token_num_t cmd_firewall_add_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_add_result, port, UINT8);
+
+cmdline_parse_inst_t cmd_firewall_add = {
+	.f = cmd_firewall_add_parsed,
+	.data = NULL,
+	.help_str = "Firewall rule add",
+	.tokens = {
+		(void *)&cmd_firewall_add_firewall_string,
+		(void *)&cmd_firewall_add_add_string,
+		(void *)&cmd_firewall_add_priority,
+		(void *)&cmd_firewall_add_src_ip,
+		(void *)&cmd_firewall_add_src_ip_mask,
+		(void *)&cmd_firewall_add_dst_ip,
+		(void *)&cmd_firewall_add_dst_ip_mask,
+		(void *)&cmd_firewall_add_src_port_from,
+		(void *)&cmd_firewall_add_src_port_to,
+		(void *)&cmd_firewall_add_dst_port_from,
+		(void *)&cmd_firewall_add_dst_port_to,
+		(void *)&cmd_firewall_add_proto,
+		(void *)&cmd_firewall_add_proto_mask,
+		(void *)&cmd_firewall_add_port,
+		NULL,
+	},
+};
+
+/* *** firewall - Del *** */
+struct cmd_firewall_del_result {
+	cmdline_fixed_string_t firewall_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_ipaddr_t src_ip;
+	uint32_t src_ip_mask;
+	cmdline_ipaddr_t dst_ip;
+	uint32_t dst_ip_mask;
+	uint16_t src_port_from;
+	uint16_t src_port_to;
+	uint16_t dst_port_from;
+	uint16_t dst_port_to;
+	uint8_t proto;
+	uint8_t proto_mask;
+};
+
+static void
+cmd_firewall_del_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_firewall_del_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct rte_mbuf *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_FW);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Firewall not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.firewall.key.src_ip =
+		rte_bswap32((uint32_t) params->src_ip.addr.ipv4.s_addr);
+	rule.firewall.key.src_ip_mask = params->src_ip_mask;
+	rule.firewall.key.dst_ip =
+		rte_bswap32((uint32_t) params->dst_ip.addr.ipv4.s_addr);
+	rule.firewall.key.dst_ip_mask = params->dst_ip_mask;
+	rule.firewall.key.src_port_from = params->src_port_from;
+	rule.firewall.key.src_port_to = params->src_port_to;
+	rule.firewall.key.dst_port_from = params->dst_port_from;
+	rule.firewall.key.dst_port_to = params->dst_port_to;
+	rule.firewall.key.proto = params->proto;
+	rule.firewall.key.proto_mask = params->proto_mask;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.firewall.key, firewall_table, firewall);
+	if (old_rule == NULL)
+		return;
+
+	printf("Deleting firewall rule: ");
+	print_firewall_rule(old_rule->firewall);
+
+	/* Allocate message buffer */
+	msg = rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) msg->ctrl.data;
+	memset(&req->firewall_del, 0, sizeof(req->firewall_del));
+	req->type = APP_MSG_REQ_FW_DEL;
+	req->firewall_del.delete_params.field_value[1].value.u32 =
+		rule.firewall.key.src_ip;
+	req->firewall_del.delete_params.field_value[1].mask_range.u32 =
+		rule.firewall.key.src_ip_mask;
+	req->firewall_del.delete_params.field_value[2].value.u32 =
+		rule.firewall.key.dst_ip;
+	req->firewall_del.delete_params.field_value[2].mask_range.u32 =
+		rule.firewall.key.dst_ip_mask;
+	req->firewall_del.delete_params.field_value[3].value.u16 =
+		rule.firewall.key.src_port_from;
+	req->firewall_del.delete_params.field_value[3].mask_range.u16 =
+		rule.firewall.key.src_port_to;
+	req->firewall_del.delete_params.field_value[4].value.u16 =
+		rule.firewall.key.dst_port_from;
+	req->firewall_del.delete_params.field_value[4].mask_range.u16 =
+		rule.firewall.key.dst_port_to;
+	req->firewall_del.delete_params.field_value[0].value.u8 =
+		rule.firewall.key.proto;
+	req->firewall_del.delete_params.field_value[0].mask_range.u8 =
+		rule.firewall.key.proto_mask;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, (void *) msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, (void **) &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) msg->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request FIREWALL_DEL failed (%u)\n", resp->result);
+	else {
+		TAILQ_REMOVE(&firewall_table, old_rule, entries);
+		rte_free(old_rule);
+		n_firewall_rules--;
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free(msg);
+}
+
+cmdline_parse_token_string_t cmd_firewall_del_firewall_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_del_result,
+	firewall_string, "firewall");
+
+cmdline_parse_token_string_t cmd_firewall_del_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_del_result, del_string,
+	"del");
+
+cmdline_parse_token_ipaddr_t cmd_firewall_del_src_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_firewall_del_result, src_ip);
+cmdline_parse_token_num_t cmd_firewall_del_src_ip_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, src_ip_mask,
+	UINT32);
+
+cmdline_parse_token_ipaddr_t cmd_firewall_del_dst_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_firewall_del_result, dst_ip);
+cmdline_parse_token_num_t cmd_firewall_del_dst_ip_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, dst_ip_mask,
+	UINT32);
+
+cmdline_parse_token_num_t cmd_firewall_del_src_port_from =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, src_port_from,
+	UINT16);
+cmdline_parse_token_num_t cmd_firewall_del_src_port_to =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, src_port_to,
+	UINT16);
+
+cmdline_parse_token_num_t cmd_firewall_del_dst_port_from =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, dst_port_from,
+	UINT16);
+cmdline_parse_token_num_t cmd_firewall_del_dst_port_to =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, dst_port_to,
+	UINT16);
+
+cmdline_parse_token_num_t cmd_firewall_del_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, proto, UINT8);
+cmdline_parse_token_num_t cmd_firewall_del_proto_mask =
+	TOKEN_NUM_INITIALIZER(struct cmd_firewall_del_result, proto_mask,
+	UINT8);
+
+cmdline_parse_inst_t cmd_firewall_del = {
+	.f = cmd_firewall_del_parsed,
+	.data = NULL,
+	.help_str = "Firewall rule delete",
+	.tokens = {
+		(void *)&cmd_firewall_del_firewall_string,
+		(void *)&cmd_firewall_del_del_string,
+		(void *)&cmd_firewall_del_src_ip,
+		(void *)&cmd_firewall_del_src_ip_mask,
+		(void *)&cmd_firewall_del_dst_ip,
+		(void *)&cmd_firewall_del_dst_ip_mask,
+		(void *)&cmd_firewall_del_src_port_from,
+		(void *)&cmd_firewall_del_src_port_to,
+		(void *)&cmd_firewall_del_dst_port_from,
+		(void *)&cmd_firewall_del_dst_port_to,
+		(void *)&cmd_firewall_del_proto,
+		(void *)&cmd_firewall_del_proto_mask,
+		NULL,
+	},
+};
+
+/* *** Firewall - Print *** */
+struct cmd_firewall_print_result {
+	cmdline_fixed_string_t firewall_string;
+	cmdline_fixed_string_t print_string;
+};
+
+static void
+cmd_firewall_print_parsed(
+	__attribute__((unused)) void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct app_rule *it;
+
+	TAILQ_FOREACH(it, &firewall_table, entries) {
+		print_firewall_rule(it->firewall);
+	}
+}
+
+cmdline_parse_token_string_t cmd_firewall_print_firewall_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_print_result,
+	firewall_string, "firewall");
+
+cmdline_parse_token_string_t cmd_firewall_print_print_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_firewall_print_result, print_string,
+	"ls");
+
+cmdline_parse_inst_t cmd_firewall_print = {
+	.f = cmd_firewall_print_parsed,
+	.data = NULL,
+	.help_str = "Firewall rules list",
+	.tokens = {
+		(void *)&cmd_firewall_print_firewall_string,
+		(void *)&cmd_firewall_print_print_string,
+		NULL,
+	},
+};
+
+#endif
+
+/* *** Flow Classification - Add All *** */
+struct cmd_flow_add_all_result {
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t all_string;
+};
+
+static void
+cmd_flow_add_all_parsed(
+	__attribute__((unused)) void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	struct rte_ring *ring_req =
+		app_get_ring_req(app_get_first_core_id(APP_CORE_FC));
+	struct rte_ring *ring_resp =
+		app_get_ring_resp(app_get_first_core_id(APP_CORE_FC));
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	memset(req, 0, sizeof(struct app_msg_req));
+
+	req->type = APP_MSG_REQ_FC_ADD_ALL;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request FLOW_ADD_ALL failed (%u)\n", resp->result);
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *)msg);
+}
+
+cmdline_parse_token_string_t cmd_flow_add_all_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_add_all_result, flow_string,
+	"flow");
+
+cmdline_parse_token_string_t cmd_flow_add_all_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_add_all_result, add_string,
+	"add");
+
+cmdline_parse_token_string_t cmd_flow_add_all_all_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_add_all_result, all_string,
+	"all");
+
+cmdline_parse_inst_t cmd_flow_add_all = {
+	.f = cmd_flow_add_all_parsed,
+	.data = NULL,
+	.help_str = "Flow table initialization based on hard-coded rule",
+	.tokens = {
+		(void *)&cmd_flow_add_all_flow_string,
+		(void *)&cmd_flow_add_all_add_string,
+		(void *)&cmd_flow_add_all_all_string,
+		NULL,
+	},
+};
+
+/* *** Flow Classification - Add *** */
+struct cmd_flow_add_result {
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_ipaddr_t src_ip;
+	cmdline_ipaddr_t dst_ip;
+	uint16_t src_port;
+	uint16_t dst_port;
+	uint8_t proto;
+	uint8_t port;
+};
+
+static void
+cmd_flow_add_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_flow_add_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_FC);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Flow classification not performed by any CPU core\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Check params */
+	if (params->port >= app.n_ports) {
+		printf("Illegal value for port parameter (%u)\n", params->port);
+		return;
+	}
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.flow.key.src_ip =
+		rte_bswap32((uint32_t)params->src_ip.addr.ipv4.s_addr);
+	rule.flow.key.dst_ip =
+		rte_bswap32((uint32_t)params->dst_ip.addr.ipv4.s_addr);
+	rule.flow.key.src_port = params->src_port;
+	rule.flow.key.dst_port = params->dst_port;
+	rule.flow.key.proto = params->proto;
+	rule.flow.port = params->port;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.flow.key, flow_table, flow);
+	if ((old_rule == NULL) && (n_flow_rules == app.max_flow_rules)) {
+		printf("Flow table is full.\n");
+		return;
+	}
+
+	printf("Adding flow: ");
+	print_flow_rule(rule.flow);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	memset(req, 0, sizeof(struct app_msg_req));
+
+	req->type = APP_MSG_REQ_FC_ADD;
+	req->flow_classif_add.key.ip_src = rte_bswap32(rule.flow.key.src_ip);
+	req->flow_classif_add.key.ip_dst = rte_bswap32(rule.flow.key.dst_ip);
+	req->flow_classif_add.key.port_src =
+		rte_bswap16(rule.flow.key.src_port);
+	req->flow_classif_add.key.port_dst =
+		rte_bswap16(rule.flow.key.dst_port);
+	req->flow_classif_add.key.proto = rule.flow.key.proto;
+	req->flow_classif_add.port = rule.flow.port;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request FLOW_ADD failed (%u)\n", resp->result);
+	else {
+		if (old_rule == NULL) {
+			struct app_rule *new_rule = (struct app_rule *)
+				rte_zmalloc_socket("CLI",
+				sizeof(struct app_rule),
+				CACHE_LINE_SIZE,
+				rte_socket_id());
+
+			if (new_rule == NULL)
+				rte_panic("Unable to allocate new rule\n");
+
+			memcpy(new_rule, &rule, sizeof(rule));
+			TAILQ_INSERT_TAIL(&flow_table, new_rule, entries);
+			n_flow_rules++;
+		} else
+			old_rule->flow.port = rule.flow.port;
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *)msg);
+}
+
+cmdline_parse_token_string_t cmd_flow_add_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_add_result, flow_string,
+	"flow");
+
+cmdline_parse_token_string_t cmd_flow_add_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_add_result, add_string, "add");
+
+cmdline_parse_token_ipaddr_t cmd_flow_add_src_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_flow_add_result, src_ip);
+
+cmdline_parse_token_ipaddr_t cmd_flow_add_dst_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_flow_add_result, dst_ip);
+
+cmdline_parse_token_num_t cmd_flow_add_src_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_add_result, src_port, UINT16);
+
+cmdline_parse_token_num_t cmd_flow_add_dst_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_add_result, dst_port, UINT16);
+
+cmdline_parse_token_num_t cmd_flow_add_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_add_result, proto, UINT8);
+
+cmdline_parse_token_num_t cmd_flow_add_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_add_result, port, UINT8);
+
+cmdline_parse_inst_t cmd_flow_add = {
+	.f = cmd_flow_add_parsed,
+	.data = NULL,
+	.help_str = "Flow add",
+	.tokens = {
+		(void *)&cmd_flow_add_flow_string,
+		(void *)&cmd_flow_add_add_string,
+		(void *)&cmd_flow_add_src_ip,
+		(void *)&cmd_flow_add_dst_ip,
+		(void *)&cmd_flow_add_src_port,
+		(void *)&cmd_flow_add_dst_port,
+		(void *)&cmd_flow_add_proto,
+		(void *)&cmd_flow_add_port,
+		NULL,
+	},
+};
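+
+/*
+ * Example CLI usage (see ip_pipeline.sh): "flow add 0.0.0.0 1.2.3.4 0 0 6 0"
+ * adds a rule for the given 5-tuple and maps it to output port 0.
+ */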
+
+/* *** Flow Classification - Del *** */
+struct cmd_flow_del_result {
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_ipaddr_t src_ip;
+	cmdline_ipaddr_t dst_ip;
+	uint16_t src_port;
+	uint16_t dst_port;
+	uint8_t proto;
+};
+
+static void
+cmd_flow_del_parsed(
+	void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_flow_del_result *params = parsed_result;
+	struct app_rule rule, *old_rule;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	void *msg;
+	int status;
+
+	uint32_t core_id = app_get_first_core_id(APP_CORE_FC);
+
+	if (core_id == RTE_MAX_LCORE) {
+		printf("Flow classification not performed by any CPU core.\n");
+		return;
+	}
+
+	struct rte_ring *ring_req = app_get_ring_req(core_id);
+	struct rte_ring *ring_resp = app_get_ring_resp(core_id);
+
+	/* Create rule */
+	memset(&rule, 0, sizeof(rule));
+	rule.flow.key.src_ip =
+		rte_bswap32((uint32_t)params->src_ip.addr.ipv4.s_addr);
+	rule.flow.key.dst_ip =
+		rte_bswap32((uint32_t)params->dst_ip.addr.ipv4.s_addr);
+	rule.flow.key.src_port = params->src_port;
+	rule.flow.key.dst_port = params->dst_port;
+	rule.flow.key.proto = params->proto;
+
+	/* Check rule existence */
+	IS_RULE_PRESENT(old_rule, rule.flow.key, flow_table, flow);
+	if (old_rule == NULL)
+		return;
+
+	printf("Deleting flow: ");
+	print_flow_rule(old_rule->flow);
+
+	/* Allocate message buffer */
+	msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+	if (msg == NULL)
+		rte_panic("Unable to allocate new message\n");
+
+	/* Fill request message */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	memset(req, 0, sizeof(struct app_msg_req));
+
+	req->type = APP_MSG_REQ_FC_DEL;
+	req->flow_classif_del.key.ip_src = rte_bswap32(rule.flow.key.src_ip);
+	req->flow_classif_del.key.ip_dst = rte_bswap32(rule.flow.key.dst_ip);
+	req->flow_classif_del.key.port_src =
+		rte_bswap16(rule.flow.key.src_port);
+	req->flow_classif_del.key.port_dst =
+		rte_bswap16(rule.flow.key.dst_port);
+	req->flow_classif_del.key.proto = rule.flow.key.proto;
+
+	/* Send request */
+	do {
+		status = rte_ring_sp_enqueue(ring_req, msg);
+	} while (status == -ENOBUFS);
+
+	/* Wait for response */
+	do {
+		status = rte_ring_sc_dequeue(ring_resp, &msg);
+	} while (status != 0);
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+
+	/* Check response */
+	if (resp->result != 0)
+		printf("Request FLOW_DEL failed (%u)\n", resp->result);
+	else {
+		TAILQ_REMOVE(&flow_table, old_rule, entries);
+		rte_free(old_rule);
+		n_flow_rules--;
+	}
+
+	/* Free message buffer */
+	rte_ctrlmbuf_free((struct rte_mbuf *)msg);
+}
+
+cmdline_parse_token_string_t cmd_flow_del_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_del_result, flow_string,
+	"flow");
+
+cmdline_parse_token_string_t cmd_flow_del_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_del_result, del_string, "del");
+
+cmdline_parse_token_ipaddr_t cmd_flow_del_src_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_flow_del_result, src_ip);
+
+cmdline_parse_token_ipaddr_t cmd_flow_del_dst_ip =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_flow_del_result, dst_ip);
+
+cmdline_parse_token_num_t cmd_flow_del_src_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_del_result, src_port, UINT16);
+
+cmdline_parse_token_num_t cmd_flow_del_dst_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_del_result, dst_port, UINT16);
+
+cmdline_parse_token_num_t cmd_flow_del_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_flow_del_result, proto, UINT8);
+
+cmdline_parse_inst_t cmd_flow_del = {
+	.f = cmd_flow_del_parsed,
+	.data = NULL,
+	.help_str = "Flow delete",
+	.tokens = {
+		(void *)&cmd_flow_del_flow_string,
+		(void *)&cmd_flow_del_del_string,
+		(void *)&cmd_flow_del_src_ip,
+		(void *)&cmd_flow_del_dst_ip,
+		(void *)&cmd_flow_del_src_port,
+		(void *)&cmd_flow_del_dst_port,
+		(void *)&cmd_flow_del_proto,
+		NULL,
+	},
+};
+
+/* *** Flow Classification - Print *** */
+struct cmd_flow_print_result {
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t print_string;
+};
+
+static void
+cmd_flow_print_parsed(
+	__attribute__((unused)) void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct app_rule *it;
+
+	TAILQ_FOREACH(it, &flow_table, entries) {
+		print_flow_rule(it->flow);
+	}
+}
+
+cmdline_parse_token_string_t cmd_flow_print_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_print_result, flow_string,
+	"flow");
+
+cmdline_parse_token_string_t cmd_flow_print_print_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_flow_print_result, print_string,
+	"ls");
+
+cmdline_parse_inst_t cmd_flow_print = {
+	.f = cmd_flow_print_parsed,
+	.data = NULL,
+	.help_str = "Flow list",
+	.tokens = {
+		(void *)&cmd_flow_print_flow_string,
+		(void *)&cmd_flow_print_print_string,
+		NULL,
+	},
+};
+
+/* *** QUIT *** */
+struct cmd_quit_result {
+	cmdline_fixed_string_t quit;
+};
+
+static void cmd_quit_parsed(__attribute__((unused)) void *parsed_result,
+		struct cmdline *cl,
+		__attribute__((unused)) void *data)
+{
+	cmdline_quit(cl);
+}
+
+cmdline_parse_token_string_t cmd_quit_quit =
+		TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");
+
+cmdline_parse_inst_t cmd_quit = {
+	.f = cmd_quit_parsed,
+	.data = NULL,
+	.help_str = "Exit application",
+	.tokens = {
+		(void *)&cmd_quit_quit,
+		NULL,
+	},
+};
+
+/* List of commands */
+cmdline_parse_ctx_t main_ctx[] = {
+	(cmdline_parse_inst_t *)&cmd_flow_add,
+	(cmdline_parse_inst_t *)&cmd_flow_del,
+	(cmdline_parse_inst_t *)&cmd_flow_add_all,
+	(cmdline_parse_inst_t *)&cmd_flow_print,
+#ifdef RTE_LIBRTE_ACL
+	(cmdline_parse_inst_t *)&cmd_firewall_add,
+	(cmdline_parse_inst_t *)&cmd_firewall_del,
+	(cmdline_parse_inst_t *)&cmd_firewall_print,
+#endif
+	(cmdline_parse_inst_t *)&cmd_route_add,
+	(cmdline_parse_inst_t *)&cmd_route_del,
+	(cmdline_parse_inst_t *)&cmd_routing_print,
+	(cmdline_parse_inst_t *)&cmd_arp_add,
+	(cmdline_parse_inst_t *)&cmd_arp_del,
+	(cmdline_parse_inst_t *)&cmd_arp_print,
+	(cmdline_parse_inst_t *)&cmd_run_file,
+	(cmdline_parse_inst_t *)&cmd_link_enable,
+	(cmdline_parse_inst_t *)&cmd_link_disable,
+	(cmdline_parse_inst_t *)&cmd_quit,
+	NULL,
+};
+
+/* Main loop */
+void
+app_main_loop_cmdline(void)
+{
+	struct cmdline *cl;
+	uint32_t core_id = rte_lcore_id();
+
+	RTE_LOG(INFO, USER1, "Core %u is running the command line interface\n",
+		core_id);
+
+	n_arp_rules = 0;
+	n_routing_rules = 0;
+	n_firewall_rules = 0;
+	n_flow_rules = 0;
+
+	app_init_rule_tables();
+
+	cl = cmdline_stdin_new(main_ctx, "pipeline> ");
+	if (cl == NULL)
+		return;
+	cmdline_interact(cl);
+	cmdline_stdin_exit(cl);
+}
diff --git a/examples/ip_pipeline/config.c b/examples/ip_pipeline/config.c
new file mode 100644
index 0000000..86be3a8
--- /dev/null
+++ b/examples/ip_pipeline/config.c
@@ -0,0 +1,420 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+#include <rte_string_fns.h>
+#include <rte_cfgfile.h>
+
+#include "main.h"
+
+struct app_params app;
+
+static const char usage[] =
+	"Usage: %s EAL_OPTIONS-- -p PORT_MASK [-f CONFIG_FILE]\n";
+
+void
+app_print_usage(char *prgname)
+{
+	printf(usage, prgname);
+}
+
+const char *
+app_core_type_id_to_string(enum app_core_type id)
+{
+	switch (id) {
+	case APP_CORE_NONE: return "NONE";
+	case APP_CORE_MASTER: return "MASTER";
+	case APP_CORE_RX: return "RX";
+	case APP_CORE_TX: return "TX";
+	case APP_CORE_PT: return "PT";
+	case APP_CORE_FC: return "FC";
+	case APP_CORE_FW: return "FW";
+	case APP_CORE_RT: return "RT";
+	case APP_CORE_TM: return "TM";
+	case APP_CORE_IPV4_FRAG: return "IPV4_FRAG";
+	case APP_CORE_IPV4_RAS: return "IPV4_RAS";
+	default: return NULL;
+	}
+}
+
+int
+app_core_type_string_to_id(const char *string, enum app_core_type *id)
+{
+	if (strcmp(string, "NONE") == 0) {
+		*id = APP_CORE_NONE;
+		return 0;
+	}
+	if (strcmp(string, "MASTER") == 0) {
+		*id = APP_CORE_MASTER;
+		return 0;
+	}
+	if (strcmp(string, "RX") == 0) {
+		*id = APP_CORE_RX;
+		return 0;
+	}
+	if (strcmp(string, "TX") == 0) {
+		*id = APP_CORE_TX;
+		return 0;
+	}
+	if (strcmp(string, "PT") == 0) {
+		*id = APP_CORE_PT;
+		return 0;
+	}
+	if (strcmp(string, "FC") == 0) {
+		*id = APP_CORE_FC;
+		return 0;
+	}
+	if (strcmp(string, "FW") == 0) {
+		*id = APP_CORE_FW;
+		return 0;
+	}
+	if (strcmp(string, "RT") == 0) {
+		*id = APP_CORE_RT;
+		return 0;
+	}
+	if (strcmp(string, "TM") == 0) {
+		*id = APP_CORE_TM;
+		return 0;
+	}
+	if (strcmp(string, "IPV4_FRAG") == 0) {
+		*id = APP_CORE_IPV4_FRAG;
+		return 0;
+	}
+	if (strcmp(string, "IPV4_RAS") == 0) {
+		*id = APP_CORE_IPV4_RAS;
+		return 0;
+	}
+
+	return -1;
+}
+
+static uint64_t
+app_get_core_mask(void)
+{
+	uint64_t core_mask = 0;
+	uint32_t i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (rte_lcore_is_enabled(i) == 0)
+			continue;
+
+		core_mask |= 1LLU << i;
+	}
+
+	return core_mask;
+}
+
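+/*
+ * Illustrative example: with lcores 0, 1 and 4 enabled, app_get_core_mask()
+ * above returns 0x13, and app_install_coremask() hands out lcore IDs 0, 1
+ * and 4 to the configured cores in increasing order (lowest set bit first).
+ */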
+static int
+app_install_coremask(uint64_t core_mask)
+{
+	uint32_t n_cores, i;
+
+	for (n_cores = 0, i = 0; i < RTE_MAX_LCORE; i++)
+		if (app.cores[i].core_type != APP_CORE_NONE)
+			n_cores++;
+
+	if (n_cores != app.n_cores) {
+		rte_panic("Number of cores in COREMASK should be %u instead "
+			"of %u\n", n_cores, app.n_cores);
+		return -1;
+	}
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		uint32_t core_id;
+
+		if (app.cores[i].core_type == APP_CORE_NONE)
+			continue;
+
+		core_id = __builtin_ctzll(core_mask);
+		core_mask &= ~(1LLU << core_id);
+
+		app.cores[i].core_id = core_id;
+	}
+
+	return 0;
+}
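+
+/*
+ * Expected layout of the -f config file (ip_pipeline.cfg provides a full
+ * example); one "[core N]" section per core, with -1 marking unused queue
+ * slots:
+ *
+ *   [core 0]
+ *   type = MASTER
+ *   queues in  = 15 16 17 -1 -1 -1 -1 -1
+ *   queues out = 12 13 14 -1 -1 -1 -1 -1
+ */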
+static int
+app_install_cfgfile(const char *file_name)
+{
+	struct rte_cfgfile *file;
+	uint32_t n_cores, i;
+
+	memset(app.cores, 0, sizeof(app.cores));
+
+	if (file_name[0] == '\0')
+		return -1;
+
+	file = rte_cfgfile_load(file_name, 0);
+	if (file == NULL) {
+		rte_panic("Config file %s not found\n", file_name);
+		return -1;
+	}
+
+	n_cores = (uint32_t) rte_cfgfile_num_sections(file, "core",
+		strnlen("core", 5));
+	if (n_cores < app.n_cores) {
+		rte_panic("Config file parse error: not enough cores specified "
+			"(%u cores missing)\n", app.n_cores - n_cores);
+		return -1;
+	}
+	if (n_cores > app.n_cores) {
+		rte_panic("Config file parse error: too many cores specified "
+			"(%u cores too many)\n", n_cores - app.n_cores);
+		return -1;
+	}
+
+	for (i = 0; i < n_cores; i++) {
+		struct app_core_params *p = &app.cores[i];
+		char section_name[16];
+		const char *entry;
+		uint32_t j;
+
+		/* [core X] */
+		rte_snprintf(section_name, sizeof(section_name), "core %u", i);
+		if (!rte_cfgfile_has_section(file, section_name)) {
+			rte_panic("Config file parse error: core IDs are not "
+				"sequential (core %u missing)\n", i);
+			return -1;
+		}
+
+		/* type */
+		entry = rte_cfgfile_get_entry(file, section_name, "type");
+		if (!entry) {
+			rte_panic("Config file parse error: core %u type not "
+				"defined\n", i);
+			return -1;
+		}
+		if ((app_core_type_string_to_id(entry, &p->core_type) != 0) ||
+		    (p->core_type == APP_CORE_NONE)) {
+			rte_panic("Config file parse error: core %u type "
+				"error\n", i);
+			return -1;
+		}
+
+		/* queues in */
+		entry = rte_cfgfile_get_entry(file, section_name, "queues in");
+		if (!entry) {
+			rte_panic("Config file parse error: core %u queues in "
+				"not defined\n", i);
+			return -1;
+		}
+
+		for (j = 0; (j < APP_MAX_SWQ_PER_CORE) && (entry != NULL);
+			j++) {
+			char *next;
+
+			p->swq_in[j] = (uint32_t) strtol(entry, &next, 10);
+			if (next == entry)
+				break;
+			entry = next;
+		}
+
+		if ((j != APP_MAX_SWQ_PER_CORE) || (*entry != '\0')) {
+			rte_panic("Config file parse error: core %u queues in "
+				"error\n", i);
+			return -1;
+		}
+
+		/* queues out */
+		entry = rte_cfgfile_get_entry(file, section_name, "queues out");
+		if (!entry) {
+			rte_panic("Config file parse error: core %u queues out "
+				"not defined\n", i);
+			return -1;
+		}
+
+		for (j = 0; (j < APP_MAX_SWQ_PER_CORE) && (entry != NULL);
+			j++) {
+			char *next;
+
+			p->swq_out[j] = (uint32_t) strtol(entry, &next, 10);
+			if (next == entry)
+				break;
+			entry = next;
+		}
+		if ((j != APP_MAX_SWQ_PER_CORE) || (*entry != '\0')) {
+			rte_panic("Config file parse error: core %u queues out "
+				"error\n", i);
+			return -1;
+		}
+	}
+
+	rte_cfgfile_close(file);
+
+	return 0;
+}
+
+void
+app_cores_config_print(void)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+		uint32_t j;
+
+		if (app.cores[i].core_type == APP_CORE_NONE)
+			continue;
+
+		printf("---> core %u: id = %u type = %6s [", i, p->core_id,
+			app_core_type_id_to_string(p->core_type));
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++)
+			printf("%2d ", (int) p->swq_in[j]);
+
+		printf("] [");
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++)
+			printf("%2d ", (int) p->swq_out[j]);
+
+		printf("]\n");
+	}
+}
+
+static int
+app_install_port_mask(const char *arg)
+{
+	char *end = NULL;
+	uint64_t port_mask;
+	uint32_t i;
+
+	if (arg[0] == '\0')
+		return -1;
+
+	port_mask = strtoul(arg, &end, 16);
+	if ((end == NULL) || (*end != '\0'))
+		return -2;
+
+	if (port_mask == 0)
+		return -3;
+
+	app.n_ports = 0;
+	for (i = 0; i < 64; i++) {
+		if ((port_mask & (1LLU << i)) == 0)
+			continue;
+
+		if (app.n_ports >= APP_MAX_PORTS)
+			return -4;
+
+		app.ports[app.n_ports] = i;
+		app.n_ports++;
+	}
+
+	if (!rte_is_power_of_2(app.n_ports))
+		return -5;
+
+	return 0;
+}
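+
+/*
+ * Illustrative example: "-p f" enables ports 0..3 (app.n_ports = 4, a power
+ * of 2, as required), while "-p 7" is rejected with -5 because 3 ports is
+ * not a power of 2.
+ */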
+
+int
+app_parse_args(int argc, char **argv)
+{
+	int opt, ret;
+	char **argvopt;
+	int option_index;
+	char *prgname = argv[0];
+	static struct option lgopts[] = {
+		{NULL, 0, 0, 0}
+	};
+	uint64_t core_mask = app_get_core_mask();
+
+	app.n_cores = __builtin_popcountll(core_mask);
+
+	argvopt = argv;
+	while ((opt = getopt_long(argc, argvopt, "p:f:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		case 'p':
+			if (app_install_port_mask(optarg) != 0)
+				rte_panic("PORT_MASK should specify a number "
+					"of ports that is power of 2 less or "
+					"equal to %u\n", APP_MAX_PORTS);
+			break;
+
+		case 'f':
+			app_install_cfgfile(optarg);
+			break;
+
+		default:
+			return -1;
+		}
+	}
+
+	app_install_coremask(core_mask);
+
+	app_cores_config_print();
+
+	if (optind >= 0)
+		argv[optind - 1] = prgname;
+
+	ret = optind - 1;
+	optind = 0; /* reset getopt lib */
+
+	return ret;
+}
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
new file mode 100644
index 0000000..947e152
--- /dev/null
+++ b/examples/ip_pipeline/init.c
@@ -0,0 +1,614 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+
+#include "main.h"
+
+#define NA                             APP_SWQ_INVALID
+
+struct app_params app = {
+	/* CPU cores */
+	.cores = {
+	{0, APP_CORE_MASTER, {15, 16, 17, NA, NA, NA, NA, NA},
+		{12, 13, 14, NA, NA, NA, NA, NA} },
+	{0, APP_CORE_RX,     {NA, NA, NA, NA, NA, NA, NA, 12},
+		{ 0,  1,  2,  3, NA, NA, NA, 15} },
+	{0, APP_CORE_FC,     { 0,  1,  2,  3, NA, NA, NA, 13},
+		{ 4,  5,  6,  7, NA, NA, NA, 16} },
+	{0, APP_CORE_RT,     { 4,  5,  6,  7, NA, NA, NA, 14},
+		{ 8,  9, 10, 11, NA, NA, NA, 17} },
+	{0, APP_CORE_TX,     { 8,  9, 10, 11, NA, NA, NA, NA},
+		{NA, NA, NA, NA, NA, NA, NA, NA} },
+	},
+
+	/* Ports */
+	.n_ports = APP_MAX_PORTS,
+	.rsz_hwq_rx = 128,
+	.rsz_hwq_tx = 512,
+	.bsz_hwq_rd = 64,
+	.bsz_hwq_wr = 64,
+
+	.port_conf = {
+		.rxmode = {
+			.split_hdr_size = 0,
+			.header_split   = 0, /* Header Split disabled */
+			.hw_ip_checksum = 1, /* IP checksum offload enabled */
+			.hw_vlan_filter = 0, /* VLAN filtering disabled */
+			.jumbo_frame    = 1, /* Jumbo Frame Support enabled */
+			.max_rx_pkt_len = 9000, /* Jumbo Frame MAC pkt length */
+			.hw_strip_crc   = 0, /* CRC stripping disabled */
+		},
+		.rx_adv_conf = {
+			.rss_conf = {
+				.rss_key = NULL,
+				.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
+			},
+		},
+		.txmode = {
+			.mq_mode = ETH_MQ_TX_NONE,
+		},
+	},
+
+	.rx_conf = {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 4,
+		},
+		.rx_free_thresh = 64,
+		.rx_drop_en = 0,
+	},
+
+	.tx_conf = {
+		.tx_thresh = {
+			.pthresh = 36,
+			.hthresh = 0,
+			.wthresh = 0,
+		},
+		.tx_free_thresh = 0,
+		.tx_rs_thresh = 0,
+	},
+
+	/* SWQs */
+	.rsz_swq = 128,
+	.bsz_swq_rd = 64,
+	.bsz_swq_wr = 64,
+
+	/* Buffer pool */
+	.pool_buffer_size = 2048 + sizeof(struct rte_mbuf) +
+		RTE_PKTMBUF_HEADROOM,
+	.pool_size = 32 * 1024,
+	.pool_cache_size = 256,
+
+	/* Message buffer pool */
+	.msg_pool_buffer_size = 256,
+	.msg_pool_size = 1024,
+	.msg_pool_cache_size = 64,
+
+	/* Rule tables */
+	.max_arp_rules = 1 << 10,
+	.max_firewall_rules = 1 << 5,
+	.max_routing_rules = 1 << 24,
+	.max_flow_rules = 1 << 24,
+
+	/* Application processing */
+	.ether_hdr_pop_push = 0,
+};
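+
+/*
+ * Default SWQ wiring (see the core table above): the data path is chained
+ * as RX -> FC (queues 0..3) -> RT (queues 4..7) -> TX (queues 8..11), while
+ * queues 12..14 carry requests from the MASTER core to RX/FC/RT and queues
+ * 15..17 carry the corresponding responses back.
+ */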
+
+struct app_core_params *
+app_get_core_params(uint32_t core_id)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+
+		if (p->core_id != core_id)
+			continue;
+
+		return p;
+	}
+
+	return NULL;
+}
+
+static uint32_t
+app_get_n_swq_in(void)
+{
+	uint32_t max_swq_id = 0, i, j;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+
+		if (p->core_type == APP_CORE_NONE)
+			continue;
+
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++) {
+			uint32_t swq_id = p->swq_in[j];
+
+			if ((swq_id != APP_SWQ_INVALID) &&
+				(swq_id > max_swq_id))
+				max_swq_id = swq_id;
+		}
+	}
+
+	return (1 + max_swq_id);
+}
+
+static uint32_t
+app_get_n_swq_out(void)
+{
+	uint32_t max_swq_id = 0, i, j;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+
+		if (p->core_type == APP_CORE_NONE)
+			continue;
+
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++) {
+			uint32_t swq_id = p->swq_out[j];
+
+			if ((swq_id != APP_SWQ_INVALID) &&
+				(swq_id > max_swq_id))
+				max_swq_id = swq_id;
+		}
+	}
+
+	return (1 + max_swq_id);
+}
+
+static uint32_t
+app_get_swq_in_count(uint32_t swq_id)
+{
+	uint32_t n, i;
+
+	for (n = 0, i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+		uint32_t j;
+
+		if (p->core_type == APP_CORE_NONE)
+			continue;
+
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++)
+			if (p->swq_in[j] == swq_id)
+				n++;
+	}
+
+	return n;
+}
+
+static uint32_t
+app_get_swq_out_count(uint32_t swq_id)
+{
+	uint32_t n, i;
+
+	for (n = 0, i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+		uint32_t j;
+
+		if (p->core_type == APP_CORE_NONE)
+			continue;
+
+		for (j = 0; j < APP_MAX_SWQ_PER_CORE; j++)
+			if (p->swq_out[j] == swq_id)
+				n++;
+	}
+
+	return n;
+}
+
+void
+app_check_core_params(void)
+{
+	uint32_t n_swq_in = app_get_n_swq_in();
+	uint32_t n_swq_out = app_get_n_swq_out();
+	uint32_t i;
+
+	/* Check that range of SW queues is contiguous and each SW queue has
+	   exactly one reader and one writer */
+	if (n_swq_in != n_swq_out)
+		rte_panic("Number of input SW queues is not equal to the "
+			"number of output SW queues\n");
+
+	for (i = 0; i < n_swq_in; i++) {
+		uint32_t n = app_get_swq_in_count(i);
+
+		if (n == 0)
+			rte_panic("SW queue %u has no reader\n", i);
+
+		if (n > 1)
+			rte_panic("SW queue %u has more than one reader\n", i);
+	}
+
+	for (i = 0; i < n_swq_out; i++) {
+		uint32_t n = app_get_swq_out_count(i);
+
+		if (n == 0)
+			rte_panic("SW queue %u has no writer\n", i);
+
+		if (n > 1)
+			rte_panic("SW queue %u has more than one writer\n", i);
+	}
+
+	/* Check the request and response queues are valid */
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+		uint32_t ring_id_req, ring_id_resp;
+
+		if ((p->core_type != APP_CORE_FC) &&
+		    (p->core_type != APP_CORE_FW) &&
+			(p->core_type != APP_CORE_RT)) {
+			continue;
+		}
+
+		ring_id_req = p->swq_in[APP_SWQ_IN_REQ];
+		if (ring_id_req == APP_SWQ_INVALID)
+			rte_panic("Core %u of type %u has invalid request "
+				"queue ID\n", p->core_id, p->core_type);
+
+		ring_id_resp = p->swq_out[APP_SWQ_OUT_RESP];
+		if (ring_id_resp == APP_SWQ_INVALID)
+			rte_panic("Core %u of type %u has invalid response "
+				"queue ID\n", p->core_id, p->core_type);
+	}
+
+	return;
+}
+
+uint32_t
+app_get_first_core_id(enum app_core_type core_type)
+{
+	uint32_t i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+
+		if (p->core_type == core_type)
+			return p->core_id;
+	}
+
+	return RTE_MAX_LCORE;
+}
+
+struct rte_ring *
+app_get_ring_req(uint32_t core_id)
+{
+	struct app_core_params *p = app_get_core_params(core_id);
+	uint32_t ring_req_id = p->swq_in[APP_SWQ_IN_REQ];
+
+	return app.rings[ring_req_id];
+}
+
+struct rte_ring *
+app_get_ring_resp(uint32_t core_id)
+{
+	struct app_core_params *p = app_get_core_params(core_id);
+	uint32_t ring_resp_id = p->swq_out[APP_SWQ_OUT_RESP];
+
+	return app.rings[ring_resp_id];
+}
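+
+/*
+ * By convention, the last SWQ slot of each core (index APP_MAX_SWQ_PER_CORE
+ * - 1, i.e. APP_SWQ_IN_REQ / APP_SWQ_OUT_RESP) is reserved for the control
+ * message rings returned by the two helpers above.
+ */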
+
+static void
+app_init_mbuf_pools(void)
+{
+	/* Init the buffer pool */
+	RTE_LOG(INFO, USER1, "Creating the mbuf pool ...\n");
+	app.pool = rte_mempool_create(
+		"mempool",
+		app.pool_size,
+		app.pool_buffer_size,
+		app.pool_cache_size,
+		sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, NULL,
+		rte_pktmbuf_init, NULL,
+		rte_socket_id(),
+		0);
+	if (app.pool == NULL)
+		rte_panic("Cannot create mbuf pool\n");
+
+	/* Init the indirect buffer pool */
+	RTE_LOG(INFO, USER1, "Creating the indirect mbuf pool ...\n");
+	app.indirect_pool = rte_mempool_create(
+		"indirect mempool",
+		app.pool_size,
+		sizeof(struct rte_mbuf) + sizeof(struct app_pkt_metadata),
+		app.pool_cache_size,
+		0,
+		NULL, NULL,
+		rte_pktmbuf_init, NULL,
+		rte_socket_id(),
+		0);
+	if (app.indirect_pool == NULL)
+		rte_panic("Cannot create mbuf pool\n");
+
+	/* Init the message buffer pool */
+	RTE_LOG(INFO, USER1, "Creating the message pool ...\n");
+	app.msg_pool = rte_mempool_create(
+		"mempool msg",
+		app.msg_pool_size,
+		app.msg_pool_buffer_size,
+		app.msg_pool_cache_size,
+		0,
+		NULL, NULL,
+		rte_ctrlmbuf_init, NULL,
+		rte_socket_id(),
+		0);
+	if (app.msg_pool == NULL)
+		rte_panic("Cannot create message pool\n");
+}
+
+static void
+app_init_rings(void)
+{
+	uint32_t n_swq, i;
+
+	n_swq = app_get_n_swq_in();
+	RTE_LOG(INFO, USER1, "Initializing %u SW rings ...\n", n_swq);
+
+	app.rings = rte_malloc_socket(NULL, n_swq * sizeof(struct rte_ring *),
+		CACHE_LINE_SIZE, rte_socket_id());
+	if (app.rings == NULL)
+		rte_panic("Cannot allocate memory to store ring pointers\n");
+
+	for (i = 0; i < n_swq; i++) {
+		struct rte_ring *ring;
+		char name[32];
+
+		rte_snprintf(name, sizeof(name), "app_ring_%u", i);
+
+		ring = rte_ring_create(
+			name,
+			app.rsz_swq,
+			rte_socket_id(),
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+		if (ring == NULL)
+			rte_panic("Cannot create ring %u\n", i);
+
+		app.rings[i] = ring;
+	}
+}
+
+static void
+app_ports_check_link(void)
+{
+	uint32_t all_ports_up, i;
+
+	all_ports_up = 1;
+
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_eth_link link;
+		uint32_t port;
+
+		port = app.ports[i];
+		memset(&link, 0, sizeof(link));
+		rte_eth_link_get_nowait(port, &link);
+		RTE_LOG(INFO, USER1, "Port %u (%u Gbps) %s\n",
+			port,
+			link.link_speed / 1000,
+			link.link_status ? "UP" : "DOWN");
+
+		if (link.link_status == 0)
+			all_ports_up = 0;
+	}
+
+	if (all_ports_up == 0)
+		rte_panic("Some NIC ports are DOWN\n");
+}
+
+static void
+app_init_ports(void)
+{
+	uint32_t i;
+
+	/* Init driver */
+	RTE_LOG(INFO, USER1, "Initializing the PMD driver ...\n");
+	if (rte_eal_pci_probe() < 0)
+		rte_panic("Cannot probe PCI\n");
+
+	/* Init NIC ports, then start the ports */
+	for (i = 0; i < app.n_ports; i++) {
+		uint32_t port;
+		int ret;
+
+		port = app.ports[i];
+		RTE_LOG(INFO, USER1, "Initializing NIC port %u ...\n", port);
+
+		/* Init port */
+		ret = rte_eth_dev_configure(
+			port,
+			1,
+			1,
+			&app.port_conf);
+		if (ret < 0)
+			rte_panic("Cannot init NIC port %u (%d)\n", port, ret);
+		rte_eth_promiscuous_enable(port);
+
+		/* Init RX queues */
+		ret = rte_eth_rx_queue_setup(
+			port,
+			0,
+			app.rsz_hwq_rx,
+			rte_eth_dev_socket_id(port),
+			&app.rx_conf,
+			app.pool);
+		if (ret < 0)
+			rte_panic("Cannot init RX for port %u (%d)\n",
+				(uint32_t) port, ret);
+
+		/* Init TX queues */
+		ret = rte_eth_tx_queue_setup(
+			port,
+			0,
+			app.rsz_hwq_tx,
+			rte_eth_dev_socket_id(port),
+			&app.tx_conf);
+		if (ret < 0)
+			rte_panic("Cannot init TX for port %u (%d)\n", port,
+				ret);
+
+		/* Start port */
+		ret = rte_eth_dev_start(port);
+		if (ret < 0)
+			rte_panic("Cannot start port %u (%d)\n", port, ret);
+	}
+
+	app_ports_check_link();
+}
+
+#define APP_PING_TIMEOUT_SEC                               5
+
+void
+app_ping(void)
+{
+	unsigned i;
+	uint64_t timestamp, diff_tsc;
+
+	const uint64_t timeout = rte_get_tsc_hz() * APP_PING_TIMEOUT_SEC;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct app_core_params *p = &app.cores[i];
+		struct rte_ring *ring_req, *ring_resp;
+		void *msg;
+		struct app_msg_req *req;
+		int status;
+
+		if ((p->core_type != APP_CORE_FC) &&
+		    (p->core_type != APP_CORE_FW) &&
+			(p->core_type != APP_CORE_RT) &&
+			(p->core_type != APP_CORE_RX))
+			continue;
+
+		ring_req = app_get_ring_req(p->core_id);
+		ring_resp = app_get_ring_resp(p->core_id);
+
+		/* Fill request message */
+		msg = (void *)rte_ctrlmbuf_alloc(app.msg_pool);
+		if (msg == NULL)
+			rte_panic("Unable to allocate new message\n");
+
+		req = (struct app_msg_req *)
+			((struct rte_mbuf *)msg)->ctrl.data;
+		req->type = APP_MSG_REQ_PING;
+
+		/* Send request */
+		do {
+			status = rte_ring_sp_enqueue(ring_req, msg);
+		} while (status == -ENOBUFS);
+
+		/* Wait for response */
+		timestamp = rte_rdtsc();
+		do {
+			status = rte_ring_sc_dequeue(ring_resp, &msg);
+			diff_tsc = rte_rdtsc() - timestamp;
+
+			if (unlikely(diff_tsc > timeout))
+				rte_panic("Core %u of type %d does not respond "
+					"to requests\n", p->core_id,
+					p->core_type);
+		} while (status != 0);
+
+		/* Free message buffer */
+		rte_ctrlmbuf_free(msg);
+	}
+}
+
+static void
+app_init_etc(void)
+{
+	if ((app_get_first_core_id(APP_CORE_IPV4_FRAG) != RTE_MAX_LCORE) ||
+		(app_get_first_core_id(APP_CORE_IPV4_RAS) != RTE_MAX_LCORE)) {
+		RTE_LOG(INFO, USER1,
+			"Activating the Ethernet header pop/push ...\n");
+		app.ether_hdr_pop_push = 1;
+	}
+}
+
+void
+app_init(void)
+{
+	if ((sizeof(struct app_pkt_metadata) % CACHE_LINE_SIZE) != 0)
+		rte_panic("Application pkt meta-data size mismatch\n");
+
+	app_check_core_params();
+
+	app_init_mbuf_pools();
+	app_init_rings();
+	app_init_ports();
+	app_init_etc();
+
+	RTE_LOG(INFO, USER1, "Initialization completed\n");
+}
diff --git a/examples/ip_pipeline/ip_pipeline.cfg b/examples/ip_pipeline/ip_pipeline.cfg
new file mode 100644
index 0000000..428830d
--- /dev/null
+++ b/examples/ip_pipeline/ip_pipeline.cfg
@@ -0,0 +1,56 @@
+;   BSD LICENSE
+;
+;   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+;   All rights reserved.
+;
+;   Redistribution and use in source and binary forms, with or without
+;   modification, are permitted provided that the following conditions
+;   are met:
+;
+;     * Redistributions of source code must retain the above copyright
+;       notice, this list of conditions and the following disclaimer.
+;     * Redistributions in binary form must reproduce the above copyright
+;       notice, this list of conditions and the following disclaimer in
+;       the documentation and/or other materials provided with the
+;       distribution.
+;     * Neither the name of Intel Corporation nor the names of its
+;       contributors may be used to endorse or promote products derived
+;       from this software without specific prior written permission.
+;
+;   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+;   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+;   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+;   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+;   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+;   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+;   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+;   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+;   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+;   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+;   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+; Core configuration
+[core 0]
+type = MASTER
+queues in  = 15 16 17 -1 -1 -1 -1 -1
+queues out = 12 13 14 -1 -1 -1 -1 -1
+
+[core 1]
+type = RX
+queues in  = -1 -1 -1 -1 -1 -1 -1 12
+queues out =  0  1  2  3 -1 -1 -1 15
+
+[core 2]
+type = FC
+queues in  =  0  1  2  3 -1 -1 -1 13
+queues out =  4  5  6  7 -1 -1 -1 16
+
+[core 3]
+type = RT
+queues in  =  4  5  6  7 -1 -1 -1 14
+queues out =  8  9 10 11 -1 -1 -1 17
+
+[core 4]
+type = TX
+queues in  =  8  9 10 11 -1 -1 -1 -1
+queues out = -1 -1 -1 -1 -1 -1 -1 -1
diff --git a/examples/ip_pipeline/ip_pipeline.sh b/examples/ip_pipeline/ip_pipeline.sh
new file mode 100644
index 0000000..c3419ca
--- /dev/null
+++ b/examples/ip_pipeline/ip_pipeline.sh
@@ -0,0 +1,18 @@
+#Address Resolution Protocol (ARP) Table
+#arp add iface ipaddr macaddr
+arp add 0 0.0.0.1 0a:0b:0c:0d:0e:0f
+arp add 1 0.128.0.1 1a:1b:1c:1d:1e:1f
+
+#Routing Table
+#route add ipaddr prefixlen iface gateway
+route add 0.0.0.0 9 0 0.0.0.1
+route add 0.128.0.0 9 1 0.128.0.1
+
+#Flow Table
+flow add all
+#flow add 0.0.0.0 1.2.3.4 0 0 6 0
+#flow add 10.11.12.13 0.0.0.0 0 0 6 1
+
+#Firewall
+#firewall add 1 0.0.0.0 0 0.0.0.0 9 0 65535 0 65535 6 0xf 0
+#firewall add 1 0.0.0.0 0 0.128.0.0 9 0 65535 0 65535 6 0xf 1
diff --git a/examples/ip_pipeline/main.c b/examples/ip_pipeline/main.c
new file mode 100644
index 0000000..f773958
--- /dev/null
+++ b/examples/ip_pipeline/main.c
@@ -0,0 +1,171 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_lpm.h>
+#include <rte_lpm6.h>
+
+#include "main.h"
+
+int
+MAIN(int argc, char **argv)
+{
+	int ret;
+
+	/* Init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		return -1;
+	argc -= ret;
+	argv += ret;
+
+	/* Parse application arguments (after the EAL ones) */
+	ret = app_parse_args(argc, argv);
+	if (ret < 0) {
+		app_print_usage(argv[0]);
+		return -1;
+	}
+
+	/* Init */
+	app_init();
+
+	/* Launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(app_lcore_main_loop, NULL, CALL_MASTER);
+
+	return 0;
+}
+
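+/*
+ * Entry point launched on every lcore (including the master lcore, due to
+ * CALL_MASTER above). Each lcore looks up its role in app.cores[] and runs
+ * the corresponding main loop.
+ */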
+int
+app_lcore_main_loop(__attribute__((unused)) void *arg)
+{
+	uint32_t core_id, i;
+
+	core_id = rte_lcore_id();
+
+	for (i = 0; i < app.n_cores; i++) {
+		struct app_core_params *p = &app.cores[i];
+
+		if (p->core_id != core_id)
+			continue;
+
+		switch (p->core_type) {
+		case APP_CORE_MASTER:
+			app_ping();
+			app_main_loop_cmdline();
+			return 0;
+		case APP_CORE_RX:
+			app_main_loop_pipeline_rx();
+			/* app_main_loop_rx(); */
+			return 0;
+		case APP_CORE_TX:
+			app_main_loop_pipeline_tx();
+			/* app_main_loop_tx(); */
+			return 0;
+		case APP_CORE_PT:
+			/* app_main_loop_pipeline_passthrough(); */
+			app_main_loop_passthrough();
+			return 0;
+		case APP_CORE_FC:
+			app_main_loop_pipeline_flow_classification();
+			return 0;
+		case APP_CORE_RT:
+			app_main_loop_pipeline_routing();
+			return 0;
+		case APP_CORE_FW:
+#ifdef RTE_LIBRTE_ACL
+			app_main_loop_pipeline_firewall();
+			return 0;
+#else
+			rte_exit(EXIT_FAILURE, "ACL not present in build\n");
+#endif
+
+#ifdef RTE_MBUF_SCATTER_GATHER
+		case APP_CORE_IPV4_FRAG:
+			app_main_loop_pipeline_ipv4_frag();
+			return 0;
+		case APP_CORE_IPV4_RAS:
+			app_main_loop_pipeline_ipv4_ras();
+			return 0;
+#else
+		case APP_CORE_IPV4_FRAG:
+		case APP_CORE_IPV4_RAS:
+			rte_exit(EXIT_FAILURE,
+				"mbuf chaining not present in build\n");
+#endif
+
+		default:
+			rte_panic("%s: Invalid core type for core %u\n",
+				__func__, i);
+		}
+	}
+
+	rte_panic("%s: Algorithmic error\n", __func__);
+	return -1;
+}
diff --git a/examples/ip_pipeline/main.h b/examples/ip_pipeline/main.h
new file mode 100644
index 0000000..4bce203
--- /dev/null
+++ b/examples/ip_pipeline/main.h
@@ -0,0 +1,306 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _MAIN_H_
+#define _MAIN_H_
+
+#include <stdint.h>
+
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ring.h>
+#include <rte_ethdev.h>
+
+#ifdef RTE_LIBRTE_ACL
+#include <rte_table_acl.h>
+#endif
+
+struct app_flow_key {
+	union {
+		struct {
+			uint8_t ttl; /* needs to be set to 0 */
+			uint8_t proto;
+			uint16_t header_checksum; /* needs to be set to 0 */
+			uint32_t ip_src;
+		};
+		uint64_t slab0;
+	};
+
+	union {
+		struct {
+			uint32_t ip_dst;
+			uint16_t port_src;
+			uint16_t port_dst;
+		};
+		uint64_t slab1;
+	};
+} __attribute__((__packed__));
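+
+/*
+ * The two 64-bit slab views allow the 16-byte key to be hashed as a pair of
+ * uint64_t words (see rte_aeshash_16()/rte_crchash_16() below); ttl and
+ * header_checksum are kept at 0 so the key can be filled straight from the
+ * IPv4 and L4 headers.
+ */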
+
+struct app_arp_key {
+	uint32_t nh_ip;
+	uint32_t nh_iface;
+} __attribute__((__packed__));
+
+struct app_pkt_metadata {
+	uint32_t signature;
+	uint8_t reserved1[28];
+
+	struct app_flow_key flow_key;
+
+	struct app_arp_key arp_key;
+	struct ether_addr nh_arp;
+
+	uint8_t reserved3[2];
+} __attribute__((__packed__));
+
+#ifndef APP_MBUF_ARRAY_SIZE
+#define APP_MBUF_ARRAY_SIZE            256
+#endif
+
+struct app_mbuf_array {
+	struct rte_mbuf *array[APP_MBUF_ARRAY_SIZE];
+	uint32_t n_mbufs;
+};
+
+#ifndef APP_MAX_PORTS
+#define APP_MAX_PORTS                  4
+#endif
+
+#ifndef APP_MAX_SWQ_PER_CORE
+#define APP_MAX_SWQ_PER_CORE           8
+#endif
+
+#define APP_SWQ_INVALID                ((uint32_t)(-1))
+
+#define APP_SWQ_IN_REQ                 (APP_MAX_SWQ_PER_CORE - 1)
+
+#define APP_SWQ_OUT_RESP               (APP_MAX_SWQ_PER_CORE - 1)
+
+enum app_core_type {
+	APP_CORE_NONE = 0, /* Unused */
+	APP_CORE_MASTER,   /* Management */
+	APP_CORE_RX,       /* Reception */
+	APP_CORE_TX,       /* Transmission */
+	APP_CORE_PT,       /* Pass-through */
+	APP_CORE_FC,       /* Flow Classification */
+	APP_CORE_FW,       /* Firewall */
+	APP_CORE_RT,       /* Routing */
+	APP_CORE_TM,       /* Traffic Management */
+	APP_CORE_IPV4_FRAG,/* IPv4 Fragmentation */
+	APP_CORE_IPV4_RAS, /* IPv4 Reassembly */
+};
+
+struct app_core_params {
+	uint32_t core_id;
+	enum app_core_type core_type;
+
+	/* SWQ map */
+	uint32_t swq_in[APP_MAX_SWQ_PER_CORE];
+	uint32_t swq_out[APP_MAX_SWQ_PER_CORE];
+} __rte_cache_aligned;
+
+struct app_params {
+	/* CPU cores */
+	struct app_core_params cores[RTE_MAX_LCORE];
+	uint32_t n_cores;
+
+	/* Ports */
+	uint32_t ports[APP_MAX_PORTS];
+	uint32_t n_ports;
+	uint32_t rsz_hwq_rx;
+	uint32_t rsz_hwq_tx;
+	uint32_t bsz_hwq_rd;
+	uint32_t bsz_hwq_wr;
+	struct rte_eth_conf port_conf;
+	struct rte_eth_rxconf rx_conf;
+	struct rte_eth_txconf tx_conf;
+
+	/* SW Queues (SWQs) */
+	struct rte_ring **rings;
+	uint32_t rsz_swq;
+	uint32_t bsz_swq_rd;
+	uint32_t bsz_swq_wr;
+
+	/* Buffer pool */
+	struct rte_mempool *pool;
+	struct rte_mempool *indirect_pool;
+	uint32_t pool_buffer_size;
+	uint32_t pool_size;
+	uint32_t pool_cache_size;
+
+	/* Message buffer pool */
+	struct rte_mempool *msg_pool;
+	uint32_t msg_pool_buffer_size;
+	uint32_t msg_pool_size;
+	uint32_t msg_pool_cache_size;
+
+	/* Rule tables */
+	uint32_t max_arp_rules;
+	uint32_t max_routing_rules;
+	uint32_t max_firewall_rules;
+	uint32_t max_flow_rules;
+
+	/* Processing */
+	uint32_t ether_hdr_pop_push;
+} __rte_cache_aligned;
+
+extern struct app_params app;
+
+const char *app_core_type_id_to_string(enum app_core_type id);
+int app_core_type_string_to_id(const char *string, enum app_core_type *id);
+void app_cores_config_print(void);
+
+void app_check_core_params(void);
+struct app_core_params *app_get_core_params(uint32_t core_id);
+uint32_t app_get_first_core_id(enum app_core_type core_type);
+struct rte_ring *app_get_ring_req(uint32_t core_id);
+struct rte_ring *app_get_ring_resp(uint32_t core_id);
+
+int app_parse_args(int argc, char **argv);
+void app_print_usage(char *prgname);
+void app_init(void);
+void app_ping(void);
+int app_lcore_main_loop(void *arg);
+
+/* Hash functions */
+uint64_t test_hash(void *key, uint32_t key_size, uint64_t seed);
+uint32_t rte_jhash2_16(uint32_t *k, uint32_t initval);
+#if defined(__x86_64__)
+uint32_t rte_aeshash_16(uint64_t *k, uint64_t seed);
+uint32_t rte_crchash_16(uint64_t *k, uint64_t seed);
+#endif
+
+/* I/O with no pipeline */
+void app_main_loop_rx(void);
+void app_main_loop_tx(void);
+void app_main_loop_passthrough(void);
+
+/* Pipeline */
+void app_main_loop_pipeline_rx(void);
+void app_main_loop_pipeline_rx_frag(void);
+void app_main_loop_pipeline_tx(void);
+void app_main_loop_pipeline_tx_ras(void);
+void app_main_loop_pipeline_flow_classification(void);
+void app_main_loop_pipeline_firewall(void);
+void app_main_loop_pipeline_routing(void);
+void app_main_loop_pipeline_passthrough(void);
+void app_main_loop_pipeline_ipv4_frag(void);
+void app_main_loop_pipeline_ipv4_ras(void);
+
+/* Command Line Interface (CLI) */
+void app_main_loop_cmdline(void);
+
+/* Messages */
+enum app_msg_req_type {
+	APP_MSG_REQ_PING,
+	APP_MSG_REQ_FC_ADD,
+	APP_MSG_REQ_FC_DEL,
+	APP_MSG_REQ_FC_ADD_ALL,
+	APP_MSG_REQ_FW_ADD,
+	APP_MSG_REQ_FW_DEL,
+	APP_MSG_REQ_RT_ADD,
+	APP_MSG_REQ_RT_DEL,
+	APP_MSG_REQ_ARP_ADD,
+	APP_MSG_REQ_ARP_DEL,
+	APP_MSG_REQ_RX_PORT_ENABLE,
+	APP_MSG_REQ_RX_PORT_DISABLE,
+};
+
+struct app_msg_req {
+	enum app_msg_req_type type;
+	union {
+		struct {
+			uint32_t ip;
+			uint8_t depth;
+			uint8_t port;
+			uint32_t nh_ip;
+		} routing_add;
+		struct {
+			uint32_t ip;
+			uint8_t depth;
+		} routing_del;
+		struct {
+			uint8_t out_iface;
+			uint32_t nh_ip;
+			struct ether_addr nh_arp;
+		} arp_add;
+		struct {
+			uint8_t out_iface;
+			uint32_t nh_ip;
+		} arp_del;
+		struct {
+			union {
+				uint8_t key_raw[16];
+				struct app_flow_key key;
+			};
+			uint8_t port;
+		} flow_classif_add;
+		struct {
+			union {
+				uint8_t key_raw[16];
+				struct app_flow_key key;
+			};
+		} flow_classif_del;
+#ifdef RTE_LIBRTE_ACL
+		struct {
+			struct rte_table_acl_rule_add_params add_params;
+			uint8_t port;
+		} firewall_add;
+		struct {
+			struct rte_table_acl_rule_delete_params delete_params;
+		} firewall_del;
+#endif
+		struct {
+			uint8_t port;
+		} rx_up;
+		struct {
+			uint8_t port;
+		} rx_down;
+	};
+};
+
+struct app_msg_resp {
+	int result;
+};
+
+#define APP_FLUSH 0xFF
+
+#ifdef RTE_EXEC_ENV_BAREMETAL
+#define MAIN _main
+#else
+#define MAIN main
+#endif
+
+int MAIN(int argc, char **argv);
+
+#endif /* _MAIN_H_ */
diff --git a/examples/ip_pipeline/pipeline_firewall.c b/examples/ip_pipeline/pipeline_firewall.c
new file mode 100644
index 0000000..ecc15a7
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_firewall.c
@@ -0,0 +1,313 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_acl.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+struct app_core_firewall_message_handle_params {
+	struct rte_ring *ring_req;
+	struct rte_ring *ring_resp;
+
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t table_id;
+};
+
+static void
+app_message_handle(struct app_core_firewall_message_handle_params *params);
+
+enum {
+	PROTO_FIELD_IPV4,
+	SRC_FIELD_IPV4,
+	DST_FIELD_IPV4,
+	SRCP_FIELD_IPV4,
+	DSTP_FIELD_IPV4,
+	NUM_FIELDS_IPV4
+};
+
+struct rte_acl_field_def ipv4_field_formats[NUM_FIELDS_IPV4] = {
+	{
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint8_t),
+		.field_index = PROTO_FIELD_IPV4,
+		.input_index = PROTO_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, next_proto_id),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = SRC_FIELD_IPV4,
+		.input_index = SRC_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, src_addr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = DST_FIELD_IPV4,
+		.input_index = DST_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) +
+			offsetof(struct ipv4_hdr, dst_addr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = SRCP_FIELD_IPV4,
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = DSTP_FIELD_IPV4,
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr) +
+			sizeof(uint16_t),
+	},
+};
+
+void
+app_main_loop_pipeline_firewall(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+	struct app_core_firewall_message_handle_params mh_params;
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_FW))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing firewall\n", core_id);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	{
+		struct rte_table_acl_params table_acl_params = {
+			.name = "test", /* unique identifier for ACL contexts */
+			.n_rules = app.max_firewall_rules,
+			.n_rule_fields = DIM(ipv4_field_formats),
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_acl_ops,
+			.arg_create = &table_acl_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		memcpy(table_acl_params.field_format, ipv4_field_formats,
+			sizeof(ipv4_field_formats));
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the ACL table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Message handling */
+	mh_params.ring_req = app_get_ring_req(
+		app_get_first_core_id(APP_CORE_FW));
+	mh_params.ring_resp = app_get_ring_resp(
+		app_get_first_core_id(APP_CORE_FW));
+	mh_params.p = p;
+	mh_params.port_out_id = port_out_id;
+	mh_params.table_id = table_id;
+
+	/* Run-time */
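+	/*
+	 * Besides running the pipeline, periodically flush the output
+	 * ports and poll the control plane. APP_FLUSH acts as a bit
+	 * mask here, so this fires once every APP_FLUSH + 1 iterations
+	 * (assuming APP_FLUSH is of the form 2^k - 1).
+	 */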
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0) {
+			rte_pipeline_flush(p);
+			app_message_handle(&mh_params);
+		}
+	}
+}
+
+static void
+app_message_handle(struct app_core_firewall_message_handle_params *params)
+{
+	struct rte_ring *ring_req = params->ring_req;
+	struct rte_ring *ring_resp;
+	struct rte_mbuf *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t table_id;
+	int result;
+
+	/* Read request message */
+	result = rte_ring_sc_dequeue(ring_req, (void **) &msg);
+	if (result != 0)
+		return;
+
+	ring_resp = params->ring_resp;
+	p = params->p;
+	port_out_id = params->port_out_id;
+	table_id = params->table_id;
+
+	/* Handle request */
+	req = (struct app_msg_req *) msg->ctrl.data;
+	switch (req->type) {
+	case APP_MSG_REQ_PING:
+	{
+		result = 0;
+		break;
+	}
+
+	case APP_MSG_REQ_FW_ADD:
+	{
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[req->firewall_add.port]},
+		};
+
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_add(p, table_id,
+			&req->firewall_add.add_params, &entry, &key_found,
+			&entry_ptr);
+		break;
+	}
+
+	case APP_MSG_REQ_FW_DEL:
+	{
+		int key_found;
+
+		result = rte_pipeline_table_entry_delete(p, table_id,
+			&req->firewall_del.delete_params, &key_found, NULL);
+		break;
+	}
+
+	default:
+		rte_panic("FW Unrecognized message type (%u)\n", req->type);
+	}
+
+	/* Fill in response message */
+	resp = (struct app_msg_resp *) msg->ctrl.data;
+	resp->result = result;
+
+	/* Send response */
+	do {
+		result = rte_ring_sp_enqueue(ring_resp, (void *) msg);
+	} while (result == -ENOBUFS);
+}
diff --git a/examples/ip_pipeline/pipeline_flow_classification.c b/examples/ip_pipeline/pipeline_flow_classification.c
new file mode 100644
index 0000000..68d4f93
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_flow_classification.c
@@ -0,0 +1,306 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_hash.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+struct app_core_fc_message_handle_params {
+	struct rte_ring *ring_req;
+	struct rte_ring *ring_resp;
+
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t table_id;
+};
+
+static void
+app_message_handle(struct app_core_fc_message_handle_params *params);
+
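+/*
+ * Pre-populate the flow table with 2^24 entries, one per 24-bit destination
+ * IP value, spreading the flows over the output ports round-robin. The mask
+ * i & (app.n_ports - 1) assumes app.n_ports is a power of two.
+ */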
+static int app_flow_classification_table_init(
+	struct rte_pipeline *p,
+	uint32_t *port_out_id,
+	uint32_t table_id)
+{
+	struct app_flow_key flow_key;
+	uint32_t i;
+
+	/* Add entries to tables */
+	for (i = 0; i < (1 << 24); i++) {
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i & (app.n_ports - 1)]},
+		};
+		struct rte_pipeline_table_entry *entry_ptr;
+		int key_found, status;
+
+		flow_key.ttl = 0;
+		flow_key.proto = 6; /* TCP */
+		flow_key.header_checksum = 0;
+		flow_key.ip_src = 0;
+		flow_key.ip_dst = rte_bswap32(i);
+		flow_key.port_src = 0;
+		flow_key.port_dst = 0;
+
+		status = rte_pipeline_table_entry_add(p, table_id,
+			(void *) &flow_key, &entry, &key_found, &entry_ptr);
+		if (status < 0)
+			rte_panic("Unable to add entry to table %u (%d)\n",
+				table_id, status);
+	}
+
+	return 0;
+}
+
+void
+app_main_loop_pipeline_flow_classification(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id;
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+	struct app_core_fc_message_handle_params mh_params;
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_FC))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing flow classification "
+		"(pipeline with hash table, 16-byte key, LRU)\n", core_id);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Table configuration */
+	{
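+		/*
+		 * Lookup uses the 16-byte flow key and the hash signature
+		 * that the RX core already stored in the packet meta-data;
+		 * neither is recomputed here.
+		 */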
+		struct rte_table_hash_key16_lru_params table_hash_params = {
+			.n_entries = 1 << 24,
+			.signature_offset = __builtin_offsetof(
+				struct app_pkt_metadata, signature),
+			.key_offset = __builtin_offsetof(
+				struct app_pkt_metadata, flow_key),
+			.f_hash = test_hash,
+			.seed = 0,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key16_lru_ops,
+			.arg_create = &table_hash_params,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id))
+			rte_panic("Unable to configure the hash table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id);
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Message handling */
+	mh_params.ring_req = app_get_ring_req(
+		app_get_first_core_id(APP_CORE_FC));
+	mh_params.ring_resp = app_get_ring_resp(
+		app_get_first_core_id(APP_CORE_FC));
+	mh_params.p = p;
+	mh_params.port_out_id = port_out_id;
+	mh_params.table_id = table_id;
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0) {
+			rte_pipeline_flush(p);
+			app_message_handle(&mh_params);
+		}
+	}
+}
+
+static void
+app_message_handle(struct app_core_fc_message_handle_params *params)
+{
+	struct rte_ring *ring_req = params->ring_req;
+	struct rte_ring *ring_resp;
+	void *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t table_id;
+	int result;
+
+	/* Read request message */
+	result = rte_ring_sc_dequeue(ring_req, &msg);
+	if (result != 0)
+		return;
+
+	ring_resp = params->ring_resp;
+	p = params->p;
+	port_out_id = params->port_out_id;
+	table_id = params->table_id;
+
+	/* Handle request */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	switch (req->type) {
+	case APP_MSG_REQ_PING:
+	{
+		result = 0;
+		break;
+	}
+
+	case APP_MSG_REQ_FC_ADD_ALL:
+	{
+		result = app_flow_classification_table_init(p, port_out_id,
+			table_id);
+		break;
+	}
+
+	case APP_MSG_REQ_FC_ADD:
+	{
+		struct rte_pipeline_table_entry entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[req->flow_classif_add.port]},
+		};
+
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_add(p, table_id,
+			req->flow_classif_add.key_raw, &entry, &key_found,
+			&entry_ptr);
+		break;
+	}
+
+	case APP_MSG_REQ_FC_DEL:
+	{
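+		/*
+		 * The delete request reuses the add request layout;
+		 * only the flow key is read here.
+		 */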
+		int key_found;
+
+		result = rte_pipeline_table_entry_delete(p, table_id,
+			req->flow_classif_add.key_raw, &key_found, NULL);
+		break;
+	}
+
+	default:
+		rte_panic("FC Unrecognized message type (%u)\n", req->type);
+	}
+
+	/* Fill in response message */
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+	resp->result = result;
+
+	/* Send response */
+	do {
+		result = rte_ring_sp_enqueue(ring_resp, msg);
+	} while (result == -ENOBUFS);
+}
diff --git a/examples/ip_pipeline/pipeline_ipv4_frag.c b/examples/ip_pipeline/pipeline_ipv4_frag.c
new file mode 100644
index 0000000..e799206
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_ipv4_frag.c
@@ -0,0 +1,184 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+
+#include <rte_port_ethdev.h>
+#include <rte_port_ring.h>
+#include <rte_port_frag.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_pipeline_ipv4_frag(void) {
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) ||
+		(core_params->core_type != APP_CORE_IPV4_FRAG))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing IPv4 fragmentation\n", core_id);
+
+	/* Pipeline configuration */
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("%s: Unable to configure the pipeline\n", __func__);
+
+	/* Input port configuration */
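+	/*
+	 * The input ports are fragmenting ring readers: any packet larger
+	 * than the 1500-byte MTU is split into fragments drawn from the
+	 * direct and indirect mbuf pools before it enters the pipeline.
+	 */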
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_ipv4_frag_params
+			port_frag_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+			.mtu = 1500,
+			.metadata_size = sizeof(struct app_pkt_metadata),
+			.pool_direct = app.pool,
+			.pool_indirect = app.indirect_pool,
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ipv4_frag_ops,
+			.arg_create = (void *) &port_frag_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("%s: Unable to configure input port %i\n",
+				__func__, i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("%s: Unable to configure output port %i\n",
+				__func__, i);
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i]))
+			rte_panic("%s: Unable to configure table %u\n",
+				__func__, table_id[i]);
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i]))
+			rte_panic("%s: Unable to connect input port %u to "
+				"table %u\n", __func__, port_in_id[i],
+				table_id[i]);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i]},
+		};
+
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry, &default_entry_ptr))
+			rte_panic("%s: Unable to add default entry to "
+				"table %u\n", __func__, table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("%s: Pipeline consistency check failed\n", __func__);
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+}
diff --git a/examples/ip_pipeline/pipeline_ipv4_ras.c b/examples/ip_pipeline/pipeline_ipv4_ras.c
new file mode 100644
index 0000000..2d6611c
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_ipv4_ras.c
@@ -0,0 +1,181 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+
+#include <rte_port_ethdev.h>
+#include <rte_port_ring.h>
+#include <rte_port_ras.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_pipeline_ipv4_ras(void) {
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) ||
+		(core_params->core_type != APP_CORE_IPV4_RAS)) {
+		rte_panic("Core %u misconfiguration\n", core_id);
+	}
+
+	RTE_LOG(INFO, USER1, "Core %u is doing IPv4 reassembly\n", core_id);
+
+	/* Pipeline configuration */
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("%s: Unable to configure the pipeline\n", __func__);
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("%s: Unable to configure input port %i\n",
+				__func__, i);
+	}
+
+	/* Output port configuration */
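+	/*
+	 * The output ports are reassembly ring writers: IPv4 fragments are
+	 * reassembled into complete datagrams before being enqueued to the
+	 * output rings.
+	 */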
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_ipv4_ras_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ipv4_ras_ops,
+			.arg_create = (void *) &port_ring_ipv4_ras_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("%s: Unable to configure output port %i\n",
+				__func__, i);
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i]))
+			rte_panic("%s: Unable to configure table %u\n",
+				__func__, table_id[i]);
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i]))
+			rte_panic("%s: Unable to connect input port %u to "
+				"table %u\n", __func__, port_in_id[i],
+				table_id[i]);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i]},
+		};
+
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry,
+			&default_entry_ptr))
+			rte_panic("%s: Unable to add default entry to "
+				"table %u\n", __func__, table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("%s: Pipeline consistency check failed\n", __func__);
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+}
diff --git a/examples/ip_pipeline/pipeline_passthrough.c b/examples/ip_pipeline/pipeline_passthrough.c
new file mode 100644
index 0000000..4af6f44
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_passthrough.c
@@ -0,0 +1,213 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_malloc.h>
+#include <rte_log.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+void
+app_main_loop_pipeline_passthrough(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_PT))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing pass-through\n", core_id);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("%s: Unable to configure the pipeline\n", __func__);
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i])) {
+			rte_panic("%s: Unable to configure input port for "
+				"ring %d\n", __func__, i);
+		}
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i])) {
+			rte_panic("%s: Unable to configure output port for "
+				"ring %d\n", __func__, i);
+		}
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i]))
+			rte_panic("%s: Unable to configure table %u\n",
+				__func__, i);
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++) {
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i])) {
+			rte_panic("%s: Unable to connect input port %u to "
+				"table %u\n", __func__, port_in_id[i],
+				table_id[i]);
+		}
+	}
+
+	/* Add entries to tables */
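+	/*
+	 * A stub table performs no lookup, so every packet takes the miss
+	 * path; the default entry forwards traffic from input port i
+	 * straight to output port i.
+	 */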
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i]},
+		};
+
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry, &default_entry_ptr))
+			rte_panic("%s: Unable to add default entry to "
+				"table %u\n", __func__, table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("%s: Pipeline consistency check failed\n", __func__);
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+}
+
+void
+app_main_loop_passthrough(void) {
+	struct app_mbuf_array *m;
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_PT))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing pass-through (no pipeline)\n",
+		core_id);
+
+	m = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
+		CACHE_LINE_SIZE, rte_socket_id());
+	if (m == NULL)
+		rte_panic("%s: cannot allocate buffer space\n", __func__);
+
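+	/*
+	 * Round-robin over the input rings; the index wrap uses a bit
+	 * mask, which assumes app.n_ports is a power of two.
+	 */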
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		int ret;
+
+		ret = rte_ring_sc_dequeue_bulk(
+			app.rings[core_params->swq_in[i]],
+			(void **) m->array,
+			app.bsz_swq_rd);
+
+		if (ret == -ENOENT)
+			continue;
+
+		do {
+			ret = rte_ring_sp_enqueue_bulk(
+				app.rings[core_params->swq_out[i]],
+				(void **) m->array,
+				app.bsz_swq_wr);
+		} while (ret < 0);
+	}
+}
diff --git a/examples/ip_pipeline/pipeline_routing.c b/examples/ip_pipeline/pipeline_routing.c
new file mode 100644
index 0000000..f19506d
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_routing.c
@@ -0,0 +1,474 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_byteorder.h>
+
+#include <rte_port_ring.h>
+#include <rte_table_lpm.h>
+#include <rte_table_hash.h>
+#include <rte_pipeline.h>
+
+#include <unistd.h>
+
+#include "main.h"
+
+struct app_routing_table_entry {
+	struct rte_pipeline_table_entry head;
+	uint32_t nh_ip;
+	uint32_t nh_iface;
+};
+
+struct app_arp_table_entry {
+	struct rte_pipeline_table_entry head;
+	struct ether_addr nh_arp;
+};
+
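+/*
+ * Routing is implemented as two chained lookups: an LPM lookup on the
+ * destination IP selects the next hop (written into the packet meta-data
+ * by the table action handler below), then a hash lookup on the (next-hop
+ * IP, interface) key resolves the destination MAC address. Route entries
+ * therefore use RTE_PIPELINE_ACTION_TABLE to chain into the ARP table,
+ * while ARP entries use RTE_PIPELINE_ACTION_PORT to transmit.
+ */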
+static inline void
+app_routing_table_write_metadata(
+	struct rte_mbuf *pkt,
+	struct app_routing_table_entry *entry)
+{
+	struct app_pkt_metadata *c =
+		(struct app_pkt_metadata *) RTE_MBUF_METADATA_UINT8_PTR(pkt, 0);
+
+	c->arp_key.nh_ip = entry->nh_ip;
+	c->arp_key.nh_iface = entry->nh_iface;
+}
+
+static int
+app_routing_table_ah(
+	struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry **entries,
+	__attribute__((unused)) void *arg)
+{
+	uint64_t pkts_in_mask = *pkts_mask;
+
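+	/*
+	 * Fast path: when the mask is a contiguous run of ones starting at
+	 * bit 0 (i.e. mask & (mask + 1) == 0), all packets in the burst
+	 * are valid and can be walked sequentially; otherwise the mask is
+	 * scanned bit by bit.
+	 */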
+	if ((pkts_in_mask & (pkts_in_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_in_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *m = pkts[i];
+			struct app_routing_table_entry *a =
+				(struct app_routing_table_entry *) entries[i];
+
+			app_routing_table_write_metadata(m, a);
+		}
+	} else {
+		for ( ; pkts_in_mask; ) {
+			struct rte_mbuf *m;
+			struct app_routing_table_entry *a;
+			uint64_t pkt_mask;
+			uint32_t packet_index;
+
+			packet_index = __builtin_ctzll(pkts_in_mask);
+			pkt_mask = 1LLU << packet_index;
+			pkts_in_mask &= ~pkt_mask;
+
+			m = pkts[packet_index];
+			a = (struct app_routing_table_entry *)
+				entries[packet_index];
+			app_routing_table_write_metadata(m, a);
+		}
+	}
+
+	return 0;
+}
+
+static inline void
+app_arp_table_write_metadata(
+	struct rte_mbuf *pkt,
+	struct app_arp_table_entry *entry)
+{
+	struct app_pkt_metadata *c =
+		(struct app_pkt_metadata *) RTE_MBUF_METADATA_UINT8_PTR(pkt, 0);
+	ether_addr_copy(&entry->nh_arp, &c->nh_arp);
+}
+
+static int
+app_arp_table_ah(
+	struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry **entries,
+	__attribute__((unused)) void *arg)
+{
+	uint64_t pkts_in_mask = *pkts_mask;
+
+	if ((pkts_in_mask & (pkts_in_mask + 1)) == 0) {
+		uint64_t n_pkts = __builtin_popcountll(pkts_in_mask);
+		uint32_t i;
+
+		for (i = 0; i < n_pkts; i++) {
+			struct rte_mbuf *m = pkts[i];
+			struct app_arp_table_entry *a =
+				(struct app_arp_table_entry *) entries[i];
+
+			app_arp_table_write_metadata(m, a);
+		}
+	} else {
+		for ( ; pkts_in_mask; ) {
+			struct rte_mbuf *m;
+			struct app_arp_table_entry *a;
+			uint64_t pkt_mask;
+			uint32_t packet_index;
+
+			packet_index = __builtin_ctzll(pkts_in_mask);
+			pkt_mask = 1LLU << packet_index;
+			pkts_in_mask &= ~pkt_mask;
+
+			m = pkts[packet_index];
+			a = (struct app_arp_table_entry *)
+				entries[packet_index];
+			app_arp_table_write_metadata(m, a);
+		}
+	}
+
+	return 0;
+}
+
+static uint64_t app_arp_table_hash(
+	void *key,
+	__attribute__((unused)) uint32_t key_size,
+	__attribute__((unused)) uint64_t seed)
+{
+	uint32_t *k = (uint32_t *) key;
+
+	return k[1];
+}
+
+struct app_core_routing_message_handle_params {
+	struct rte_ring *ring_req;
+	struct rte_ring *ring_resp;
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t routing_table_id;
+	uint32_t arp_table_id;
+};
+
+static void
+app_message_handle(struct app_core_routing_message_handle_params *params);
+
+void
+app_main_loop_pipeline_routing(void) {
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t routing_table_id, arp_table_id;
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+	struct app_core_routing_message_handle_params mh_params;
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_RT))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing routing\n", core_id);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("Unable to configure the pipeline\n");
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("Unable to configure input port for "
+				"ring %d\n", i);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("Unable to configure output port for "
+				"ring %d\n", i);
+	}
+
+	/* Routing table configuration */
+	{
+		struct rte_table_lpm_params table_lpm_params = {
+			.n_rules = app.max_routing_rules,
+			.entry_unique_size =
+				sizeof(struct app_routing_table_entry),
+			.offset = __builtin_offsetof(struct app_pkt_metadata,
+				flow_key.ip_dst),
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_lpm_ops,
+			.arg_create = &table_lpm_params,
+			.f_action_hit = app_routing_table_ah,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size =
+				sizeof(struct app_routing_table_entry) -
+				sizeof(struct rte_pipeline_table_entry),
+		};
+
+		if (rte_pipeline_table_create(p, &table_params,
+			&routing_table_id))
+			rte_panic("Unable to configure the LPM table\n");
+	}
+
+	/* ARP table configuration */
+	{
+		struct rte_table_hash_key8_lru_params table_arp_params = {
+			.n_entries = app.max_arp_rules,
+			.f_hash = app_arp_table_hash,
+			.seed = 0,
+			.signature_offset = 0, /* Unused */
+			.key_offset = __builtin_offsetof(
+				struct app_pkt_metadata, arp_key),
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_hash_key8_lru_dosig_ops,
+			.arg_create = &table_arp_params,
+			.f_action_hit = app_arp_table_ah,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = sizeof(struct app_arp_table_entry) -
+				sizeof(struct rte_pipeline_table_entry),
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &arp_table_id))
+			rte_panic("Unable to configure the ARP table\n");
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++) {
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			routing_table_id))
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  routing_table_id);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("Pipeline consistency check failed\n");
+
+	/* Message handling */
+	mh_params.ring_req =
+		app_get_ring_req(app_get_first_core_id(APP_CORE_RT));
+	mh_params.ring_resp =
+		app_get_ring_resp(app_get_first_core_id(APP_CORE_RT));
+	mh_params.p = p;
+	mh_params.port_out_id = port_out_id;
+	mh_params.routing_table_id = routing_table_id;
+	mh_params.arp_table_id = arp_table_id;
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0) {
+			rte_pipeline_flush(p);
+			app_message_handle(&mh_params);
+		}
+	}
+}
+
+static void
+app_message_handle(struct app_core_routing_message_handle_params *params)
+{
+	struct rte_ring *ring_req = params->ring_req;
+	struct rte_ring *ring_resp;
+	void *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	struct rte_pipeline *p;
+	uint32_t *port_out_id;
+	uint32_t routing_table_id, arp_table_id;
+	int result;
+
+	/* Read request message */
+	result = rte_ring_sc_dequeue(ring_req, &msg);
+	if (result != 0)
+		return;
+
+	ring_resp = params->ring_resp;
+	p = params->p;
+	port_out_id = params->port_out_id;
+	routing_table_id = params->routing_table_id;
+	arp_table_id = params->arp_table_id;
+
+	/* Handle request */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	switch (req->type) {
+	case APP_MSG_REQ_PING:
+	{
+		result = 0;
+		break;
+	}
+
+	case APP_MSG_REQ_RT_ADD:
+	{
+		struct app_routing_table_entry entry = {
+			.head = {
+				.action = RTE_PIPELINE_ACTION_TABLE,
+				{.table_id = arp_table_id},
+			},
+			.nh_ip = req->routing_add.nh_ip,
+			.nh_iface = port_out_id[req->routing_add.port],
+		};
+
+		struct rte_table_lpm_key key = {
+			.ip = req->routing_add.ip,
+			.depth = req->routing_add.depth,
+		};
+
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_add(p, routing_table_id, &key,
+			(struct rte_pipeline_table_entry *) &entry, &key_found,
+			&entry_ptr);
+		break;
+	}
+
+	case APP_MSG_REQ_RT_DEL:
+	{
+		struct rte_table_lpm_key key = {
+			.ip = req->routing_del.ip,
+			.depth = req->routing_del.depth,
+		};
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_delete(p, routing_table_id,
+			&key, &key_found, NULL);
+		break;
+	}
+
+	case APP_MSG_REQ_ARP_ADD:
+	{
+		struct app_arp_table_entry entry = {
+			.head = {
+				.action = RTE_PIPELINE_ACTION_PORT,
+				{.port_id =
+					port_out_id[req->arp_add.out_iface]},
+			},
+			.nh_arp = req->arp_add.nh_arp,
+		};
+
+		struct app_arp_key arp_key = {
+			.nh_ip = req->arp_add.nh_ip,
+			.nh_iface = port_out_id[req->arp_add.out_iface],
+		};
+
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_add(p, arp_table_id, &arp_key,
+			(struct rte_pipeline_table_entry *) &entry, &key_found,
+			&entry_ptr);
+		break;
+	}
+
+	case APP_MSG_REQ_ARP_DEL:
+	{
+		struct app_arp_key arp_key = {
+			.nh_ip = req->arp_del.nh_ip,
+			.nh_iface = port_out_id[req->arp_del.out_iface],
+		};
+
+		int key_found;
+
+		result = rte_pipeline_table_entry_delete(p, arp_table_id,
+			&arp_key, &key_found, NULL);
+		break;
+	}
+
+	default:
+		rte_panic("RT Unrecognized message type (%u)\n", req->type);
+	}
+
+	/* Fill in response message */
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+	resp->result = result;
+
+	/* Send response */
+	do {
+		result = rte_ring_sp_enqueue(ring_resp, msg);
+	} while (result == -ENOBUFS);
+}
diff --git a/examples/ip_pipeline/pipeline_rx.c b/examples/ip_pipeline/pipeline_rx.c
new file mode 100644
index 0000000..ba5fa0a
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_rx.c
@@ -0,0 +1,385 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_jhash.h>
+
+#include <rte_port_ethdev.h>
+#include <rte_port_ring.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+struct app_core_rx_message_handle_params {
+	struct rte_ring *ring_req;
+	struct rte_ring *ring_resp;
+
+	struct rte_pipeline *p;
+	uint32_t *port_in_id;
+};
+
+static void
+app_message_handle(struct app_core_rx_message_handle_params *params);
+
+static int
+app_pipeline_rx_port_in_action_handler(struct rte_mbuf **pkts, uint32_t n,
+	uint64_t *pkts_mask, void *arg);
+
+void
+app_main_loop_pipeline_rx(void) {
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+	struct app_core_rx_message_handle_params mh_params;
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_RX))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing RX\n", core_id);
+
+	/* Pipeline configuration */
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("%s: Unable to configure the pipeline\n", __func__);
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ethdev_reader_params port_ethdev_params = {
+			.port_id = app.ports[i],
+			.queue_id = 0,
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ethdev_reader_ops,
+			.arg_create = (void *) &port_ethdev_params,
+			.f_action = app_pipeline_rx_port_in_action_handler,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_hwq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]))
+			rte_panic("%s: Unable to configure input port for "
+				"port %d\n", __func__, app.ports[i]);
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = app.rings[core_params->swq_out[i]],
+			.tx_burst_sz = app.bsz_swq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i]))
+			rte_panic("%s: Unable to configure output port for "
+				"ring RX %i\n", __func__, i);
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i]))
+			rte_panic("%s: Unable to configure table %u\n",
+				__func__, table_id[i]);
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i]))
+			rte_panic("%s: Unable to connect input port %u to "
+				"table %u\n", __func__, port_in_id[i],
+				table_id[i]);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i]},
+		};
+
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry, &default_entry_ptr))
+			rte_panic("%s: Unable to add default entry to "
+				"table %u\n", __func__, table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("%s: Pipeline consistency check failed\n", __func__);
+
+	/* Message handling */
+	mh_params.ring_req =
+		app_get_ring_req(app_get_first_core_id(APP_CORE_RX));
+	mh_params.ring_resp =
+		app_get_ring_resp(app_get_first_core_id(APP_CORE_RX));
+	mh_params.p = p;
+	mh_params.port_in_id = port_in_id;
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0) {
+			rte_pipeline_flush(p);
+			app_message_handle(&mh_params);
+		}
+	}
+}
+
+uint64_t test_hash(
+	void *key,
+	__attribute__((unused)) uint32_t key_size,
+	__attribute__((unused)) uint64_t seed)
+{
+	struct app_flow_key *flow_key = (struct app_flow_key *) key;
+	uint32_t ip_dst = rte_be_to_cpu_32(flow_key->ip_dst);
+	uint64_t signature = (ip_dst & 0x00FFFFFFLLU) >> 2;
+
+	return signature;
+}
+
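+/*
+ * Hand-unrolled Jenkins hash over a fixed 16-byte (four-word) key,
+ * following the same construction as rte_jhash2().
+ */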
+uint32_t
+rte_jhash2_16(uint32_t *k, uint32_t initval)
+{
+	uint32_t a, b, c;
+
+	a = b = RTE_JHASH_GOLDEN_RATIO;
+	c = initval;
+
+	a += k[0];
+	b += k[1];
+	c += k[2];
+	__rte_jhash_mix(a, b, c);
+
+	c += 16; /* length in bytes */
+	a += k[3]; /* Remaining word */
+
+	__rte_jhash_mix(a, b, c);
+
+	return c;
+}
+
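+/*
+ * Build the 16-byte flow key directly from two 64-bit words of the IPv4
+ * header, masking TTL and header checksum to zero so that otherwise
+ * identical flows produce identical keys; the lookup signature is also
+ * precomputed here, on the RX core.
+ */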
+static inline void
+app_pkt_metadata_fill(struct rte_mbuf *m)
+{
+	uint8_t *m_data = rte_pktmbuf_mtod(m, uint8_t *);
+	struct app_pkt_metadata *c =
+		(struct app_pkt_metadata *) RTE_MBUF_METADATA_UINT8_PTR(m, 0);
+	struct ipv4_hdr *ip_hdr =
+		(struct ipv4_hdr *) &m_data[sizeof(struct ether_hdr)];
+	uint64_t *ipv4_hdr_slab = (uint64_t *) ip_hdr;
+
+	/* TTL and Header Checksum are set to 0 */
+	c->flow_key.slab0 = ipv4_hdr_slab[1] & 0xFFFFFFFF0000FF00LLU;
+	c->flow_key.slab1 = ipv4_hdr_slab[2];
+	c->signature = test_hash((void *) &c->flow_key, 0, 0);
+
+	/* Pop Ethernet header */
+	if (app.ether_hdr_pop_push) {
+		rte_pktmbuf_adj(m, (uint16_t)sizeof(struct ether_hdr));
+		m->pkt.vlan_macip.f.l2_len = 0;
+		m->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+	}
+}
+
+int
+app_pipeline_rx_port_in_action_handler(
+	struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	__rte_unused void *arg)
+{
+	uint32_t i;
+
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m = pkts[i];
+
+		app_pkt_metadata_fill(m);
+	}
+
+	*pkts_mask = (~0LLU) >> (64 - n);
+
+	return 0;
+}
+
+void
+app_main_loop_rx(void) {
+	struct app_mbuf_array *ma;
+	uint32_t i, j;
+	int ret;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_RX))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing RX (no pipeline)\n", core_id);
+
+	ma = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
+		CACHE_LINE_SIZE, rte_socket_id());
+	if (ma == NULL)
+		rte_panic("%s: cannot allocate buffer space\n", __func__);
+
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		uint32_t n_mbufs;
+
+		n_mbufs = rte_eth_rx_burst(
+			app.ports[i],
+			0,
+			ma->array,
+			app.bsz_hwq_rd);
+
+		if (n_mbufs == 0)
+			continue;
+
+		for (j = 0; j < n_mbufs; j++) {
+			struct rte_mbuf *m = ma->array[j];
+
+			app_pkt_metadata_fill(m);
+		}
+
+		do {
+			ret = rte_ring_sp_enqueue_bulk(
+				app.rings[core_params->swq_out[i]],
+				(void **) ma->array,
+				n_mbufs);
+		} while (ret < 0);
+	}
+}
+
+static void
+app_message_handle(struct app_core_rx_message_handle_params *params)
+{
+	struct rte_ring *ring_req = params->ring_req;
+	struct rte_ring *ring_resp;
+	void *msg;
+	struct app_msg_req *req;
+	struct app_msg_resp *resp;
+	struct rte_pipeline *p;
+	uint32_t *port_in_id;
+	int result;
+
+	/* Read request message */
+	result = rte_ring_sc_dequeue(ring_req, &msg);
+	if (result != 0)
+		return;
+
+	ring_resp = params->ring_resp;
+	p = params->p;
+	port_in_id = params->port_in_id;
+
+	/* Handle request */
+	req = (struct app_msg_req *) ((struct rte_mbuf *)msg)->ctrl.data;
+	switch (req->type) {
+	case APP_MSG_REQ_PING:
+	{
+		result = 0;
+		break;
+	}
+
+	case APP_MSG_REQ_RX_PORT_ENABLE:
+	{
+		result = rte_pipeline_port_in_enable(p,
+			port_in_id[req->rx_up.port]);
+		break;
+	}
+
+	case APP_MSG_REQ_RX_PORT_DISABLE:
+	{
+		result = rte_pipeline_port_in_disable(p,
+			port_in_id[req->rx_down.port]);
+		break;
+	}
+
+	default:
+		rte_panic("RX Unrecognized message type (%u)\n", req->type);
+	}
+
+	/* Fill in response message */
+	resp = (struct app_msg_resp *) ((struct rte_mbuf *)msg)->ctrl.data;
+	resp->result = result;
+
+	/* Send response */
+	do {
+		result = rte_ring_sp_enqueue(ring_resp, msg);
+	} while (result == -ENOBUFS);
+}
diff --git a/examples/ip_pipeline/pipeline_tx.c b/examples/ip_pipeline/pipeline_tx.c
new file mode 100644
index 0000000..3bf2c8b
--- /dev/null
+++ b/examples/ip_pipeline/pipeline_tx.c
@@ -0,0 +1,283 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+
+#include <rte_port_ethdev.h>
+#include <rte_port_ring.h>
+#include <rte_table_stub.h>
+#include <rte_pipeline.h>
+
+#include "main.h"
+
+static struct ether_addr local_ether_addr = {
+	.addr_bytes = {0, 1, 2, 3, 4, 5},
+};
+
+static inline void
+app_pkt_metadata_flush(struct rte_mbuf *pkt)
+{
+	struct app_pkt_metadata *pkt_meta = (struct app_pkt_metadata *)
+		RTE_MBUF_METADATA_UINT8_PTR(pkt, 0);
+	struct ether_hdr *ether_hdr = (struct ether_hdr *)
+		rte_pktmbuf_prepend(pkt, (uint16_t) sizeof(struct ether_hdr));
+
+	ether_addr_copy(&pkt_meta->nh_arp, &ether_hdr->d_addr);
+	ether_addr_copy(&local_ether_addr, &ether_hdr->s_addr);
+	ether_hdr->ether_type = rte_bswap16(ETHER_TYPE_IPv4);
+	pkt->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+}
+
+static int
+app_pipeline_tx_port_in_action_handler(
+	struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	__rte_unused void *arg)
+{
+	uint32_t i;
+
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m = pkts[i];
+
+		app_pkt_metadata_flush(m);
+	}
+
+	*pkts_mask = (~0LLU) >> (64 - n);
+
+	return 0;
+}
+
+void
+app_main_loop_pipeline_tx(void)
+{
+	struct rte_pipeline *p;
+	uint32_t port_in_id[APP_MAX_PORTS];
+	uint32_t port_out_id[APP_MAX_PORTS];
+	uint32_t table_id[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_TX))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing TX\n", core_id);
+
+	/* Pipeline configuration */
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = rte_socket_id(),
+	};
+
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL)
+		rte_panic("%s: Unable to configure the pipeline\n", __func__);
+
+	/* Input port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = app.rings[core_params->swq_in[i]],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = (app.ether_hdr_pop_push) ?
+				app_pipeline_tx_port_in_action_handler : NULL,
+			.arg_ah = NULL,
+			.burst_size = app.bsz_swq_rd,
+		};
+
+		if (rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i])) {
+			rte_panic("%s: Unable to configure input port for "
+				"ring TX %i\n", __func__, i);
+		}
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_port_ethdev_writer_params port_ethdev_params = {
+			.port_id = app.ports[i],
+			.queue_id = 0,
+			.tx_burst_sz = app.bsz_hwq_wr,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ethdev_writer_ops,
+			.arg_create = (void *) &port_ethdev_params,
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i])) {
+			rte_panic("%s: Unable to configure output port for "
+				"port %d\n", __func__, app.ports[i]);
+		}
+	}
+
+	/* Table configuration */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_params table_params = {
+			.ops = &rte_table_stub_ops,
+			.arg_create = NULL,
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i])) {
+			rte_panic("%s: Unable to configure table %u\n",
+				__func__, table_id[i]);
+		}
+	}
+
+	/* Interconnecting ports and tables */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i]))
+			rte_panic("%s: Unable to connect input port %u to "
+				"table %u\n", __func__, port_in_id[i],
+				table_id[i]);
+
+	/* Add entries to tables */
+	for (i = 0; i < app.n_ports; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i]},
+		};
+
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry, &default_entry_ptr))
+			rte_panic("%s: Unable to add default entry to "
+				"table %u\n", __func__, table_id[i]);
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < app.n_ports; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0)
+		rte_panic("%s: Pipeline consistency check failed\n", __func__);
+
+	/* Run-time */
+	for (i = 0; ; i++) {
+		rte_pipeline_run(p);
+
+		if ((i & APP_FLUSH) == 0)
+			rte_pipeline_flush(p);
+	}
+}
+
+void
+app_main_loop_tx(void)
+{
+	struct app_mbuf_array *m[APP_MAX_PORTS];
+	uint32_t i;
+
+	uint32_t core_id = rte_lcore_id();
+	struct app_core_params *core_params = app_get_core_params(core_id);
+
+	if ((core_params == NULL) || (core_params->core_type != APP_CORE_TX))
+		rte_panic("Core %u misconfiguration\n", core_id);
+
+	RTE_LOG(INFO, USER1, "Core %u is doing TX (no pipeline)\n", core_id);
+
+	for (i = 0; i < APP_MAX_PORTS; i++) {
+		m[i] = rte_malloc_socket(NULL, sizeof(struct app_mbuf_array),
+			CACHE_LINE_SIZE, rte_socket_id());
+		if (m[i] == NULL)
+			rte_panic("%s: Cannot allocate buffer space\n",
+				__func__);
+	}
+
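+	/* Round-robin over the ports (app.n_ports assumed a power of 2):
+	 * accumulate mbufs from the SW ring until at least one full HW burst
+	 * (bsz_hwq_wr) is buffered, then transmit and free whatever the NIC
+	 * did not accept. */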
+	for (i = 0; ; i = ((i + 1) & (app.n_ports - 1))) {
+		uint32_t n_mbufs, n_pkts;
+		int ret;
+
+		n_mbufs = m[i]->n_mbufs;
+
+		ret = rte_ring_sc_dequeue_bulk(
+			app.rings[core_params->swq_in[i]],
+			(void **) &m[i]->array[n_mbufs],
+			app.bsz_swq_rd);
+
+		if (ret == -ENOENT)
+			continue;
+
+		n_mbufs += app.bsz_swq_rd;
+
+		if (n_mbufs < app.bsz_hwq_wr) {
+			m[i]->n_mbufs = n_mbufs;
+			continue;
+		}
+
+		n_pkts = rte_eth_tx_burst(
+			app.ports[i],
+			0,
+			m[i]->array,
+			n_mbufs);
+
+		if (n_pkts < n_mbufs) {
+			uint32_t k;
+
+			for (k = n_pkts; k < n_mbufs; k++) {
+				struct rte_mbuf *pkt_to_free;
+
+				pkt_to_free = m[i]->array[k];
+				rte_pktmbuf_free(pkt_to_free);
+			}
+		}
+
+		m[i]->n_mbufs = 0;
+	}
+}
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [dpdk-dev] [v2 23/23] Packet Framework unit tests
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (21 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app Cristian Dumitrescu
@ 2014-06-04 18:08 ` Cristian Dumitrescu
  2014-06-05 11:01 ` [dpdk-dev] [v2 00/23] Packet Framework De Lara Guarch, Pablo
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 36+ messages in thread
From: Cristian Dumitrescu @ 2014-06-04 18:08 UTC (permalink / raw)
  To: dev

Unit tests for Packet Framework libraries.
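
The new tests are grouped under a single "table_autotest" command in the
test application (see the commands.c hunk below). A quick smoke run, as a
sketch assuming the usual interactive test prompt:

    RTE>> table_autotest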

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 app/test/Makefile                           |    6 +
 app/test/commands.c                         |    4 +-
 app/test/test.h                             |    1 +
 app/test/test_table.c                       |  220 +++++++
 app/test/test_table.h                       |  204 ++++++
 app/test/test_table_acl.c                   |  593 +++++++++++++++++
 app/test/test_table_acl.h                   |   35 +
 app/test/test_table_combined.c              |  784 +++++++++++++++++++++++
 app/test/test_table_combined.h              |   55 ++
 app/test/test_table_pipeline.c              |  603 ++++++++++++++++++
 app/test/test_table_pipeline.h              |   35 +
 app/test/test_table_ports.c                 |  224 +++++++
 app/test/test_table_ports.h                 |   42 ++
 app/test/test_table_tables.c                |  907 +++++++++++++++++++++++++++
 app/test/test_table_tables.h                |   50 ++
 lib/librte_eal/common/include/rte_hexdump.h |    2 +
 16 files changed, 3764 insertions(+), 1 deletions(-)
 create mode 100644 app/test/test_table.c
 create mode 100644 app/test/test_table.h
 create mode 100644 app/test/test_table_acl.c
 create mode 100644 app/test/test_table_acl.h
 create mode 100644 app/test/test_table_combined.c
 create mode 100644 app/test/test_table_combined.h
 create mode 100644 app/test/test_table_pipeline.c
 create mode 100644 app/test/test_table_pipeline.h
 create mode 100644 app/test/test_table_ports.c
 create mode 100644 app/test/test_table_ports.h
 create mode 100644 app/test/test_table_tables.c
 create mode 100644 app/test/test_table_tables.h

diff --git a/app/test/Makefile b/app/test/Makefile
index b49785e..0158f7f 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -52,6 +52,12 @@ SRCS-$(CONFIG_RTE_APP_TEST) += test_spinlock.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_memory.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_memzone.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_ring.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table_pipeline.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table_tables.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table_ports.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table_combined.c
+SRCS-$(CONFIG_RTE_APP_TEST) += test_table_acl.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_ring_perf.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_rwlock.c
 SRCS-$(CONFIG_RTE_APP_TEST) += test_timer.c
diff --git a/app/test/commands.c b/app/test/commands.c
index efa8566..64984c2 100644
--- a/app/test/commands.c
+++ b/app/test/commands.c
@@ -151,6 +151,8 @@ static void cmd_autotest_parsed(void *parsed_result,
 		ret = test_cycles();
 	if (!strcmp(res->autotest, "ring_autotest"))
 		ret = test_ring();
+	if (!strcmp(res->autotest, "table_autotest"))
+		ret = test_table();
 	if (!strcmp(res->autotest, "ring_perf_autotest"))
 		ret = test_ring_perf();
 	if (!strcmp(res->autotest, "timer_autotest"))
@@ -226,7 +228,7 @@ cmdline_parse_token_string_t cmd_autotest_autotest =
 			"red_autotest#meter_autotest#sched_autotest#"
 			"memcpy_perf_autotest#kni_autotest#"
 			"pm_autotest#ivshmem_autotest#"
-			"devargs_autotest#"
+			"devargs_autotest#table_autotest#"
 #ifdef RTE_LIBRTE_ACL
 			"acl_autotest#"
 #endif
diff --git a/app/test/test.h b/app/test/test.h
index 1945d29..d13588d 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -57,6 +57,7 @@ int test_cycles(void);
 int test_logs(void);
 int test_memzone(void);
 int test_ring(void);
+int test_table(void);
 int test_ring_perf(void);
 int test_mempool(void);
 int test_mempool_perf(void);
diff --git a/app/test/test_table.c b/app/test/test_table.c
new file mode 100644
index 0000000..7e2e781
--- /dev/null
+++ b/app/test/test_table.c
@@ -0,0 +1,220 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+
+#ifndef RTE_LIBRTE_TABLE
+
+#include "test.h"
+
+int
+test_table(void)
+{
+	return 0;
+}
+
+#else
+
+#include <rte_byteorder.h>
+#include <rte_hexdump.h>
+#include <rte_string_fns.h>
+#include <string.h>
+#include "test.h"
+#include "test_table.h"
+#include "test_table_pipeline.h"
+#include "test_table_ports.h"
+#include "test_table_tables.h"
+#include "test_table_combined.h"
+#include "test_table_acl.h"
+
+/* Global variables */
+struct rte_pipeline *p;
+struct rte_ring *rings_rx[N_PORTS];
+struct rte_ring *rings_tx[N_PORTS];
+struct rte_mempool *pool = NULL;
+
+uint32_t port_in_id[N_PORTS];
+uint32_t port_out_id[N_PORTS];
+uint32_t port_out_id_type[3];
+uint32_t table_id[N_PORTS*2];
+uint64_t override_hit_mask = 0xFFFFFFFF;
+uint64_t override_miss_mask = 0xFFFFFFFF;
+uint64_t non_reserved_actions_hit = 0;
+uint64_t non_reserved_actions_miss = 0;
+uint8_t connect_miss_action_to_port_out = 0;
+uint8_t connect_miss_action_to_table = 0;
+uint32_t table_entry_default_action = RTE_PIPELINE_ACTION_DROP;
+uint32_t table_entry_hit_action = RTE_PIPELINE_ACTION_PORT;
+uint32_t table_entry_miss_action = RTE_PIPELINE_ACTION_DROP;
+rte_pipeline_port_in_action_handler port_in_action = NULL;
+rte_pipeline_port_out_action_handler port_out_action = NULL;
+rte_pipeline_table_action_handler_hit action_handler_hit = NULL;
+rte_pipeline_table_action_handler_miss action_handler_miss = NULL;
+
+/* Function prototypes */
+static void app_init_rings(void);
+static void app_init_mbuf_pools(void);
+
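+/* Trivial hash for the tests: the signature is simply the IPv4 destination
+ * address (the first key word) converted to host byte order. */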
+uint64_t pipeline_test_hash(void *key,
+		__attribute__((unused)) uint32_t key_size,
+		__attribute__((unused)) uint64_t seed)
+{
+	uint32_t *k32 = (uint32_t *) key;
+	uint32_t ip_dst = rte_be_to_cpu_32(k32[0]);
+	uint64_t signature = ip_dst;
+
+	return signature;
+}
+
+static void
+app_init_mbuf_pools(void)
+{
+	/* Init the buffer pool */
+	printf("Getting/Creating the mempool ...\n");
+	pool = rte_mempool_lookup("mempool");
+	if (!pool) {
+		pool = rte_mempool_create(
+			"mempool",
+			POOL_SIZE,
+			POOL_BUFFER_SIZE,
+			POOL_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL,
+			rte_pktmbuf_init, NULL,
+			0,
+			0);
+		if (pool == NULL)
+			rte_panic("Cannot create mbuf pool\n");
+	}
+}
+
+static void
+app_init_rings(void)
+{
+	uint32_t i;
+
+	for (i = 0; i < N_PORTS; i++) {
+		char name[32];
+
+		rte_snprintf(name, sizeof(name), "app_ring_rx_%u", i);
+		rings_rx[i] = rte_ring_lookup(name);
+		if (rings_rx[i] == NULL) {
+			rings_rx[i] = rte_ring_create(
+				name,
+				RING_RX_SIZE,
+				0,
+				RING_F_SP_ENQ | RING_F_SC_DEQ);
+		}
+		if (rings_rx[i] == NULL)
+			rte_panic("Cannot create RX ring %u\n", i);
+	}
+
+	for (i = 0; i < N_PORTS; i++) {
+		char name[32];
+
+		rte_snprintf(name, sizeof(name), "app_ring_tx_%u", i);
+		rings_tx[i] = rte_ring_lookup(name);
+		if (rings_tx[i] == NULL) {
+			rings_tx[i] = rte_ring_create(
+				name,
+				RING_TX_SIZE,
+				0,
+				RING_F_SP_ENQ | RING_F_SC_DEQ);
+		}
+		if (rings_tx[i] == NULL)
+			rte_panic("Cannot create TX ring %u\n", i);
+	}
+
+}
+
+int
+test_table(void)
+{
+	int status, failures;
+	unsigned i;
+
+	failures = 0;
+
+	app_init_rings();
+	app_init_mbuf_pools();
+
+	printf("\n\n\n\n************Pipeline tests************\n");
+
+	if (test_table_pipeline() < 0)
+		return -1;
+
+	printf("\n\n\n\n************Port tests************\n");
+	for (i = 0; i < n_port_tests; i++) {
+		status = port_tests[i]();
+		if (status < 0) {
+			printf("\nPort test number %d failed (%d).\n", i,
+				status);
+			failures++;
+			return -1;
+		}
+	}
+
+	printf("\n\n\n\n************Table tests************\n");
+	for (i = 0; i < n_table_tests; i++) {
+		status = table_tests[i]();
+		if (status < 0) {
+			printf("\nTable test number %d failed (%d).\n", i,
+				status);
+			failures++;
+			return -1;
+		}
+	}
+
+	printf("\n\n\n\n************Table tests************\n");
+	for (i = 0; i < n_table_tests_combined; i++) {
+		status = table_tests_combined[i]();
+		if (status < 0) {
+			printf("\nCombined table test number %d failed with "
+				"reason number %d.\n", i, status);
+			failures++;
+			return -1;
+		}
+	}
+
+	if (failures)
+		return -1;
+
+#ifdef RTE_LIBRTE_ACL
+	printf("\n\n\n\n************ACL tests************\n");
+	if (test_table_ACL() < 0)
+		return -1;
+#endif
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table.h b/app/test/test_table.h
new file mode 100644
index 0000000..afea738
--- /dev/null
+++ b/app/test/test_table.h
@@ -0,0 +1,204 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_table_stub.h>
+#include <rte_table_lpm.h>
+#include <rte_table_lpm_ipv6.h>
+#include <rte_table_hash.h>
+#include <rte_table_array.h>
+#include <rte_pipeline.h>
+
+#ifdef RTE_LIBRTE_ACL
+#include <rte_table_acl.h>
+#endif
+
+#include <rte_port_ring.h>
+#include <rte_port_ethdev.h>
+#include <rte_port_source_sink.h>
+
+#ifndef TEST_TABLE_H_
+#define TEST_TABLE_H_
+
+#define RING_SIZE 4096
+#define MAX_BULK 32
+#define N 65536
+#define TIME_S 5
+#define TEST_RING_FULL_EMTPY_ITER   8
+#define N_PORTS             2
+#define N_PKTS              2
+#define N_PKTS_EXT          6
+#define RING_RX rings_rx[0]
+#define RING_RX_2 rings_rx[1]
+#define RING_TX rings_tx[0]
+#define RING_TX_2 rings_tx[1]
+#define PORT_RX_RING_SIZE   128
+#define PORT_TX_RING_SIZE   512
+#define RING_RX_SIZE        128
+#define RING_TX_SIZE        128
+#define POOL_BUFFER_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define POOL_SIZE           (32 * 1024)
+#define POOL_CACHE_SIZE     256
+#define BURST_SIZE          8
+#define WORKER_TYPE         1
+#define MAX_DUMMY_PORTS     2
+#define MP_NAME             "dummy_port_mempool"
+#define MBUF_COUNT          (8000 * MAX_DUMMY_PORTS)
+#define MBUF_SIZE        (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MP_CACHE_SZ         256
+#define MP_SOCKET           0
+#define MP_FLAGS            0
+
+/* Macros */
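+/* RING_ENQUEUE: allocate an mbuf, write a 4-byte key at metadata offset 32
+ * and its precomputed signature at offset 0 (the offsets used by the hash
+ * table tests below), then enqueue the mbuf onto the given ring. */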
+#define RING_ENQUEUE(ring, value) do {					\
+	struct rte_mbuf *m;						\
+	uint32_t *k32, *signature;					\
+	uint8_t *key;							\
+									\
+	m = rte_pktmbuf_alloc(pool);					\
+	if (m == NULL)							\
+		return -1;						\
+	signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);			\
+	key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);			\
+	k32 = (uint32_t *) key;						\
+	k32[0] = (value);						\
+	*signature = pipeline_test_hash(key, 0, 0);			\
+	rte_ring_enqueue((ring), m);					\
+} while (0)
+
+#define RUN_PIPELINE(pipeline) do {					\
+	rte_pipeline_run((pipeline));					\
+	rte_pipeline_flush((pipeline));					\
+} while (0)
+
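+/* VERIFY: return the raw status unless it equals the expected (negated)
+ * error code. */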
+#define VERIFY(var, value) do {						\
+	if ((var) != -(value))						\
+		return var;						\
+} while (0)
+
+#define VERIFY_TRAFFIC(ring, sent, expected) do {			\
+	unsigned i, n = 0;						\
+	void *mbuf = NULL;						\
+									\
+	for (i = 0; i < (sent); i++) {					\
+		if (!rte_ring_dequeue((ring), &mbuf)) {			\
+			if (mbuf == NULL)				\
+				continue;				\
+			n++;						\
+			rte_pktmbuf_free((struct rte_mbuf *)mbuf);	\
+		}							\
+		else							\
+			break;						\
+	}								\
+	printf("Expected %d, got %d\n", expected, n);			\
+	if (n != (expected)) {						\
+		return -21;						\
+	}								\
+} while (0)
+
+/* Function definitions */
+int test_table(void);
+uint64_t pipeline_test_hash(
+	void *key,
+	__attribute__((unused)) uint32_t key_size,
+	__attribute__((unused)) uint64_t seed);
+
+/* Extern variables */
+extern struct rte_pipeline *p;
+extern struct rte_ring *rings_rx[N_PORTS];
+extern struct rte_ring *rings_tx[N_PORTS];
+extern struct rte_mempool *pool;
+extern uint32_t port_in_id[N_PORTS];
+extern uint32_t port_out_id[N_PORTS];
+extern uint32_t port_out_id_type[3];
+extern uint32_t table_id[N_PORTS*2];
+extern uint64_t override_hit_mask;
+extern uint64_t override_miss_mask;
+extern uint64_t non_reserved_actions_hit;
+extern uint64_t non_reserved_actions_miss;
+extern uint8_t connect_miss_action_to_port_out;
+extern uint8_t connect_miss_action_to_table;
+extern uint32_t table_entry_default_action;
+extern uint32_t table_entry_hit_action;
+extern uint32_t table_entry_miss_action;
+extern rte_pipeline_port_in_action_handler port_in_action;
+extern rte_pipeline_port_out_action_handler port_out_action;
+extern rte_pipeline_table_action_handler_hit action_handler_hit;
+extern rte_pipeline_table_action_handler_miss action_handler_miss;
+
+/* Global data types */
+struct manage_ops {
+	uint32_t op_id;
+	void *op_data;
+	int expected_result;
+};
+
+/* Internal pipeline structures */
+struct rte_port_in {
+	struct rte_port_in_ops ops;
+	uint32_t burst_size;
+	uint32_t table_id;
+	void *h_port;
+};
+
+struct rte_port_out {
+	struct rte_port_out_ops ops;
+	void *h_port;
+};
+
+struct rte_table {
+	struct rte_table_ops ops;
+	rte_pipeline_table_action_handler_hit f_action;
+	uint32_t table_next_id;
+	uint32_t table_next_id_valid;
+	uint8_t actions_lookup_miss[CACHE_LINE_SIZE];
+	uint32_t action_data_size;
+	void *h_table;
+};
+
+#define RTE_PIPELINE_MAX_NAME_SZ                           124
+
+struct rte_pipeline {
+	char name[RTE_PIPELINE_MAX_NAME_SZ];
+	uint32_t socket_id;
+	struct rte_port_in ports_in[16];
+	struct rte_port_out ports_out[16];
+	struct rte_table tables[64];
+	uint32_t num_ports_in;
+	uint32_t num_ports_out;
+	uint32_t num_tables;
+	struct rte_mbuf *pkts[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_table_entry *actions[RTE_PORT_IN_BURST_SIZE_MAX];
+	uint64_t mask_action[64];
+	uint32_t mask_actions;
+};
+#endif
diff --git a/app/test/test_table_acl.c b/app/test/test_table_acl.c
new file mode 100644
index 0000000..a0cd804
--- /dev/null
+++ b/app/test/test_table_acl.c
@@ -0,0 +1,593 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifdef RTE_LIBRTE_ACL
+
+#include <rte_hexdump.h>
+#include "test_table.h"
+#include "test_table_acl.h"
+
+#define IPv4(a, b, c, d) ((uint32_t)(((a) & 0xff) << 24) |		\
+	(((b) & 0xff) << 16) |						\
+	(((c) & 0xff) << 8) |						\
+	((d) & 0xff))
+
+static const char cb_port_delim[] = ":";
+
+/*
+ * Rule and trace formats definitions.
+ **/
+
+struct ipv4_5tuple {
+	uint8_t  proto;
+	uint32_t ip_src;
+	uint32_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+};
+
+enum {
+	PROTO_FIELD_IPV4,
+	SRC_FIELD_IPV4,
+	DST_FIELD_IPV4,
+	SRCP_FIELD_IPV4,
+	DSTP_FIELD_IPV4,
+	NUM_FIELDS_IPV4
+};
+
+struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
+	{
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint8_t),
+		.field_index = PROTO_FIELD_IPV4,
+		.input_index = PROTO_FIELD_IPV4,
+		.offset = offsetof(struct ipv4_5tuple, proto),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = SRC_FIELD_IPV4,
+		.input_index = SRC_FIELD_IPV4,
+		.offset = offsetof(struct ipv4_5tuple, ip_src),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_MASK,
+		.size = sizeof(uint32_t),
+		.field_index = DST_FIELD_IPV4,
+		.input_index = DST_FIELD_IPV4,
+		.offset = offsetof(struct ipv4_5tuple, ip_dst),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = SRCP_FIELD_IPV4,
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = offsetof(struct ipv4_5tuple, port_src),
+	},
+	{
+		.type = RTE_ACL_FIELD_TYPE_RANGE,
+		.size = sizeof(uint16_t),
+		.field_index = DSTP_FIELD_IPV4,
+		.input_index = SRCP_FIELD_IPV4,
+		.offset = offsetof(struct ipv4_5tuple, port_dst),
+	},
+};
+
+struct rte_table_acl_rule_add_params table_acl_IPv4_rule;
+
+typedef int (*parse_5tuple)(char *text,
+	struct rte_table_acl_rule_add_params *rule);
+
+/*
+ * The order of the fields in the rule string after the initial '@'.
+ */
+enum {
+	CB_FLD_SRC_ADDR,
+	CB_FLD_DST_ADDR,
+	CB_FLD_SRC_PORT_RANGE,
+	CB_FLD_DST_PORT_RANGE,
+	CB_FLD_PROTO,
+	CB_FLD_NUM,
+};
+
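+/* GET_CB_FIELD: parse one numeric field with strtoul() in the given base,
+ * validate the trailing delimiter and the upper limit, then advance the
+ * input pointer past the delimiter. */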
+#define GET_CB_FIELD(in, fd, base, lim, dlm)				\
+do {									\
+	unsigned long val;						\
+	char *end;							\
+									\
+	errno = 0;							\
+	val = strtoul((in), &end, (base));				\
+	if (errno != 0 || end[0] != (dlm) || val > (lim))		\
+		return -EINVAL;						\
+	(fd) = (typeof(fd)) val;					\
+	(in) = end + 1;							\
+} while (0)
+
+static int
+parse_ipv4_net(const char *in, uint32_t *addr, uint32_t *mask_len)
+{
+	uint8_t a, b, c, d, m;
+
+	GET_CB_FIELD(in, a, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, b, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, c, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, d, 0, UINT8_MAX, '/');
+	GET_CB_FIELD(in, m, 0, sizeof(uint32_t) * CHAR_BIT, 0);
+
+	addr[0] = IPv4(a, b, c, d);
+	mask_len[0] = m;
+
+	return 0;
+}
+
+static int
+parse_port_range(const char *in, uint16_t *port_low, uint16_t *port_high)
+{
+	uint16_t a, b;
+
+	GET_CB_FIELD(in, a, 0, UINT16_MAX, ':');
+	GET_CB_FIELD(in, b, 0, UINT16_MAX, 0);
+
+	port_low[0] = a;
+	port_high[0] = b;
+
+	return 0;
+}
+
+static int
+parse_cb_ipv4_rule(char *str, struct rte_table_acl_rule_add_params *v)
+{
+	int i, rc;
+	char *s, *sp, *in[CB_FLD_NUM];
+	static const char *dlm = " \t\n";
+
+	/* Skip the leading '@' */
+	if (strchr(str, '@') != str)
+		return -EINVAL;
+
+	s = str + 1;
+
+	/*
+	* Populate the 'in' array with the location of each
+	* field in the string we're parsing
+	*/
+	for (i = 0; i != DIM(in); i++) {
+		in[i] = strtok_r(s, dlm, &sp);
+		if (in[i] == NULL)
+			return -EINVAL;
+		s = NULL;
+	}
+
+	/* Parse x.x.x.x/x */
+	rc = parse_ipv4_net(in[CB_FLD_SRC_ADDR],
+		&v->field_value[SRC_FIELD_IPV4].value.u32,
+		&v->field_value[SRC_FIELD_IPV4].mask_range.u32);
+	if (rc != 0) {
+		RTE_LOG(ERR, PIPELINE, "failed to read src address/mask: %s\n",
+			in[CB_FLD_SRC_ADDR]);
+		return rc;
+	}
+
+	printf("V=%u, mask=%u\n", v->field_value[SRC_FIELD_IPV4].value.u32,
+		v->field_value[SRC_FIELD_IPV4].mask_range.u32);
+
+	/* Parse x.x.x.x/x */
+	rc = parse_ipv4_net(in[CB_FLD_DST_ADDR],
+		&v->field_value[DST_FIELD_IPV4].value.u32,
+		&v->field_value[DST_FIELD_IPV4].mask_range.u32);
+	if (rc != 0) {
+		RTE_LOG(ERR, PIPELINE, "failed to read dest address/mask: %s\n",
+			in[CB_FLD_DST_ADDR]);
+		return rc;
+	}
+
+	printf("V=%u, mask=%u\n", v->field_value[DST_FIELD_IPV4].value.u32,
+	v->field_value[DST_FIELD_IPV4].mask_range.u32);
+	/* Parse n:n */
+	rc = parse_port_range(in[CB_FLD_SRC_PORT_RANGE],
+		&v->field_value[SRCP_FIELD_IPV4].value.u16,
+		&v->field_value[SRCP_FIELD_IPV4].mask_range.u16);
+	if (rc != 0) {
+		RTE_LOG(ERR, PIPELINE, "failed to read source port range: %s\n",
+			in[CB_FLD_SRC_PORT_RANGE]);
+		return rc;
+	}
+
+	printf("V=%u, mask=%u\n", v->field_value[SRCP_FIELD_IPV4].value.u16,
+		v->field_value[SRCP_FIELD_IPV4].mask_range.u16);
+	/* Parse n:n */
+	rc = parse_port_range(in[CB_FLD_DST_PORT_RANGE],
+		&v->field_value[DSTP_FIELD_IPV4].value.u16,
+		&v->field_value[DSTP_FIELD_IPV4].mask_range.u16);
+	if (rc != 0) {
+		RTE_LOG(ERR, PIPELINE, "failed to read dest port range: %s\n",
+			in[CB_FLD_DST_PORT_RANGE]);
+		return rc;
+	}
+
+	printf("V=%u, mask=%u\n", v->field_value[DSTP_FIELD_IPV4].value.u16,
+		v->field_value[DSTP_FIELD_IPV4].mask_range.u16);
+	/* parse 0/0xnn */
+	GET_CB_FIELD(in[CB_FLD_PROTO],
+		v->field_value[PROTO_FIELD_IPV4].value.u8,
+		0, UINT8_MAX, '/');
+	GET_CB_FIELD(in[CB_FLD_PROTO],
+		v->field_value[PROTO_FIELD_IPV4].mask_range.u8,
+		0, UINT8_MAX, 0);
+
+	printf("V=%u, mask=%u\n",
+		(unsigned int)v->field_value[PROTO_FIELD_IPV4].value.u8,
+		v->field_value[PROTO_FIELD_IPV4].mask_range.u8);
+	return 0;
+}
+
+
+/*
+ * The format for these rules does not require the port ranges to be
+ * separated by ' : '; a plain ':' works and is a lot more readable and
+ * cleaner, IMO.
+ */
+char lines[][128] = {
+	"@0.0.0.0/0 0.0.0.0/0 0:65535 0:65535 2/0xff", /* Protocol check */
+	"@192.168.3.1/32 0.0.0.0/0 0:65535 0:65535 0/0", /* Src IP checl */
+	"@0.0.0.0/0 10.4.4.1/32 0:65535 0:65535 0/0", /* dst IP check */
+	"@0.0.0.0/0 0.0.0.0/0 105:105 0:65535 0/0", /* src port check */
+	"@0.0.0.0/0 0.0.0.0/0 0:65535 206:206 0/0", /* dst port check */
+};
+
+char line[128];
+
+static int
+setup_acl_pipeline(void)
+{
+	int ret;
+	int i;
+	struct rte_pipeline_params pipeline_params = {
+		.name = "PIPELINE",
+		.socket_id = 0,
+	};
+	uint32_t n;
+	struct rte_table_acl_rule_add_params rule_params;
+	struct rte_pipeline_table_acl_rule_delete_params *delete_params;
+	parse_5tuple parser;
+	char acl_name[64];
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL) {
+		RTE_LOG(INFO, PIPELINE, "%s: Failed to configure pipeline\n",
+			__func__);
+		goto fail;
+	}
+
+	/* Input port configuration */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.burst_size = BURST_SIZE,
+		};
+
+		/* Install the input action handler on all but the first port */
+		if (i)
+			port_params.f_action = port_in_action;
+
+		ret = rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]);
+		if (ret) {
+			rte_panic("Unable to configure input port %d, ret:%d\n",
+				i, ret);
+			goto fail;
+		}
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = rings_tx[i],
+			.tx_burst_sz = BURST_SIZE,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i])) {
+			rte_panic("Unable to configure output port %d\n", i);
+			goto fail;
+		}
+	}
+
+	/* Table configuration */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_pipeline_table_params table_params;
+
+		/* Set up defaults for stub */
+		table_params.ops = &rte_table_stub_ops;
+		table_params.arg_create = NULL;
+		table_params.f_action_hit = action_handler_hit;
+		table_params.f_action_miss = NULL;
+		table_params.action_data_size = 0;
+
+		RTE_LOG(INFO, PIPELINE, "miss_action=%x\n",
+			table_entry_miss_action);
+
+		printf("RTE_ACL_RULE_SZ(%zu) = %zu\n", DIM(ipv4_defs),
+			RTE_ACL_RULE_SZ(DIM(ipv4_defs)));
+
+		struct rte_table_acl_params acl_params;
+
+		acl_params.n_rules = 1 << 5;
+		acl_params.n_rule_fields = DIM(ipv4_defs);
+		rte_snprintf(acl_name, sizeof(acl_name), "ACL%d", i);
+		acl_params.name = acl_name;
+		memcpy(acl_params.field_format, ipv4_defs, sizeof(ipv4_defs));
+
+		table_params.ops = &rte_table_acl_ops;
+		table_params.arg_create = &acl_params;
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i])) {
+			rte_panic("Unable to configure table %u\n", i);
+			goto fail;
+		}
+
+		if (connect_miss_action_to_table) {
+			if (rte_pipeline_table_create(p, &table_params,
+				&table_id[i+2])) {
+				rte_panic("Unable to configure table %u\n", i);
+				goto fail;
+			}
+		}
+	}
+
+	for (i = 0; i < N_PORTS; i++) {
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i])) {
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n",
+				port_in_id[i],  table_id[i]);
+			goto fail;
+		}
+	}
+
+	/* Add entries to tables */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_pipeline_table_entry table_entry = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = port_out_id[i^1]},
+		};
+		int key_found;
+		struct rte_pipeline_table_entry *entry_ptr;
+
+		memset(&rule_params, 0, sizeof(rule_params));
+		parser = parse_cb_ipv4_rule;
+
+		for (n = 1; n <= 5; n++) {
+			rte_snprintf(line, sizeof(line), "%s", lines[n-1]);
+			printf("PARSING [%s]\n", line);
+
+			ret = parser(line, &rule_params);
+			if (ret != 0) {
+				RTE_LOG(ERR, PIPELINE,
+					"line %u: parse_cb_ipv4vlan_rule"
+					" failed, error code: %d (%s)\n",
+					n, ret, strerror(-ret));
+				return ret;
+			}
+
+			rule_params.priority = RTE_ACL_MAX_PRIORITY - n;
+
+			ret = rte_pipeline_table_entry_add(p, table_id[i],
+				&rule_params,
+				&table_entry, &key_found, &entry_ptr);
+			if (ret < 0) {
+				rte_panic("Add entry to table %u failed (%d)\n",
+					table_id[i], ret);
+				goto fail;
+			}
+		}
+
+		/* delete a few rules */
+		for (n = 2; n <= 3; n++) {
+			rte_snprintf(line, sizeof(line), "%s", lines[n-1]);
+			printf("PARSING [%s]\n", line);
+
+			ret = parser(line, &rule_params);
+			if (ret != 0) {
+				RTE_LOG(ERR, PIPELINE, "line %u: parse rule "
+					" failed, error code: %d (%s)\n",
+					n, ret, strerror(-ret));
+				return ret;
+			}
+
+			delete_params = (struct
+				rte_pipeline_table_acl_rule_delete_params *)
+				&(rule_params.field_value[0]);
+			ret = rte_pipeline_table_entry_delete(p, table_id[i],
+				delete_params, &key_found, NULL);
+			if (ret < 0) {
+				rte_panic("Add entry to table %u failed (%d)\n",
+					table_id[i], ret);
+				goto fail;
+			} else
+				printf("Deleted Rule.\n");
+		}
+
+		/* Try to add duplicates */
+		for (n = 1; n <= 5; n++) {
+			rte_snprintf(line, sizeof(line), "%s", lines[n-1]);
+			printf("PARSING [%s]\n", line);
+
+			ret = parser(line, &rule_params);
+			if (ret != 0) {
+				RTE_LOG(ERR, PIPELINE, "line %u: parse rule"
+					" failed, error code: %d (%s)\n",
+					n, ret, strerror(-ret));
+				return ret;
+			}
+
+			rule_params.priority = RTE_ACL_MAX_PRIORITY - n;
+
+			ret = rte_pipeline_table_entry_add(p, table_id[i],
+				&rule_params,
+				&table_entry, &key_found, &entry_ptr);
+			if (ret < 0) {
+				rte_panic("Add entry to table %u failed (%d)\n",
+					table_id[i], ret);
+				goto fail;
+			}
+		}
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < N_PORTS ; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0) {
+		rte_panic("Pipeline consistency check failed\n");
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	return -1;
+}
+
+static int
+test_pipeline_single_filter(int expected_count)
+{
+	int i, j, ret, tx_count;
+	struct ipv4_5tuple five_tuple;
+
+	/* Allocate a few mbufs and manually insert into the rings. */
+	for (i = 0; i < N_PORTS; i++) {
+		for (j = 0; j < 8; j++) {
+			struct rte_mbuf *mbuf;
+
+			mbuf = rte_pktmbuf_alloc(pool);
+			memset(mbuf->pkt.data, 0x00,
+				sizeof(struct ipv4_5tuple));
+
+			five_tuple.proto = j;
+			five_tuple.ip_src = rte_bswap32(IPv4(192, 168, j, 1));
+			five_tuple.ip_dst = rte_bswap32(IPv4(10, 4, j, 1));
+			five_tuple.port_src = rte_bswap16(100 + j);
+			five_tuple.port_dst = rte_bswap16(200 + j);
+
+			memcpy(mbuf->pkt.data, &five_tuple,
+				sizeof(struct ipv4_5tuple));
+			RTE_LOG(INFO, PIPELINE, "%s: Enqueue onto ring %d\n",
+				__func__, i);
+			rte_ring_enqueue(rings_rx[i], mbuf);
+		}
+	}
+
+	/* Run pipeline once */
+	rte_pipeline_run(p);
+
+	rte_pipeline_flush(p);
+
+	tx_count = 0;
+
+	for (i = 0; i < N_PORTS; i++) {
+		void *objs[RING_TX_SIZE];
+		struct rte_mbuf *mbuf;
+
+		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10);
+		if (ret <= 0) {
+			printf("Got no objects from ring %d - error code %d\n",
+				i, ret);
+		} else {
+			printf("Got %d object(s) from ring %d!\n", ret, i);
+			for (j = 0; j < ret; j++) {
+				mbuf = (struct rte_mbuf *)objs[j];
+				rte_hexdump("mbuf", mbuf->pkt.data, 64);
+				rte_pktmbuf_free(mbuf);
+			}
+			tx_count += ret;
+		}
+	}
+
+	if (tx_count != expected_count) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: Unexpected packets for ACL test, "
+			"expected %d, got %d\n",
+			__func__, expected_count, tx_count);
+		goto fail;
+	}
+
+	rte_pipeline_free(p);
+
+	return 0;
+
+fail:
+	return -1;
+}
+
+int
+test_table_ACL(void)
+{
+	override_hit_mask = 0xFF; /* All packets are a hit */
+
+	if (setup_acl_pipeline() < 0)
+		return -1;
+	if (test_pipeline_single_filter(10) < 0)
+		return -1;
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table_acl.h b/app/test/test_table_acl.h
new file mode 100644
index 0000000..f57cb27
--- /dev/null
+++ b/app/test/test_table_acl.h
@@ -0,0 +1,35 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Test prototypes */
+int test_table_ACL(void);
diff --git a/app/test/test_table_combined.c b/app/test/test_table_combined.c
new file mode 100644
index 0000000..3380ff1
--- /dev/null
+++ b/app/test/test_table_combined.c
@@ -0,0 +1,784 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifdef RTE_LIBRTE_TABLE
+#include <string.h>
+#include "test_table_combined.h"
+#include "test_table.h"
+#include <rte_table_lpm_ipv6.h>
+
+#define MAX_TEST_KEYS 128
+#define N_PACKETS 50
+
+enum check_table_result {
+	CHECK_TABLE_OK,
+	CHECK_TABLE_PORT_CONFIG,
+	CHECK_TABLE_PORT_ENABLE,
+	CHECK_TABLE_TABLE_CONFIG,
+	CHECK_TABLE_ENTRY_ADD,
+	CHECK_TABLE_DEFAULT_ENTRY_ADD,
+	CHECK_TABLE_CONNECT,
+	CHECK_TABLE_MANAGE_ERROR,
+	CHECK_TABLE_CONSISTENCY,
+	CHECK_TABLE_NO_TRAFFIC,
+	CHECK_TABLE_INVALID_PARAMETER,
+};
+
+struct table_packets {
+	uint32_t hit_packet[MAX_TEST_KEYS];
+	uint32_t miss_packet[MAX_TEST_KEYS];
+	uint32_t n_hit_packets;
+	uint32_t n_miss_packets;
+};
+
+combined_table_test table_tests_combined[] = {
+	test_table_lpm_combined,
+	test_table_lpm_ipv6_combined,
+	test_table_hash8lru,
+	test_table_hash8ext,
+	test_table_hash16lru,
+	test_table_hash16ext,
+	test_table_hash32lru,
+	test_table_hash32ext,
+};
+
+unsigned n_table_tests_combined = RTE_DIM(table_tests_combined);
+
+/* Generic table tester: wraps the table under test in a minimal pipeline */
+static int
+test_table_type(struct rte_table_ops *table_ops, void *table_args,
+	void *key, struct table_packets *table_packets,
+	struct manage_ops *manage_ops, unsigned n_ops)
+{
+	uint32_t ring_in_id, table_id, ring_out_id, ring_out_2_id;
+	unsigned i;
+
+	RTE_SET_USED(manage_ops);
+	RTE_SET_USED(n_ops);
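+
+	/* Topology under test: one ring reader feeds the table under test;
+	 * hits go to RING_TX (the entry is later re-pointed to RING_TX_2),
+	 * misses hit a default DROP entry. */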
+	/* Create pipeline */
+	struct rte_pipeline_params pipeline_params = {
+		.name = "pipeline",
+		.socket_id = 0,
+	};
+
+	struct rte_pipeline *pipeline = rte_pipeline_create(&pipeline_params);
+
+	/* Ring port parameters (writer params reused for both TX rings) */
+	struct rte_port_ring_reader_params ring_params_rx = {
+		.ring = RING_RX,
+	};
+
+	struct rte_port_ring_writer_params ring_params_tx = {
+		.ring = RING_RX,
+		.tx_burst_sz = RTE_PORT_IN_BURST_SIZE_MAX,
+	};
+
+	struct rte_pipeline_port_in_params ring_in_params = {
+		.ops = &rte_port_ring_reader_ops,
+		.arg_create = (void *)&ring_params_rx,
+		.f_action = NULL,
+		.burst_size = RTE_PORT_IN_BURST_SIZE_MAX,
+	};
+
+	if (rte_pipeline_port_in_create(pipeline, &ring_in_params,
+		&ring_in_id) != 0)
+		return -CHECK_TABLE_PORT_CONFIG;
+
+	/* Create table */
+	struct rte_pipeline_table_params table_params = {
+		.ops = table_ops,
+		.arg_create = table_args,
+		.f_action_hit = NULL,
+		.f_action_miss = NULL,
+		.arg_ah = NULL,
+		.action_data_size = 0,
+	};
+
+	if (rte_pipeline_table_create(pipeline, &table_params, &table_id) != 0)
+		return -CHECK_TABLE_TABLE_CONFIG;
+
+	/* Create output ports */
+	ring_params_tx.ring = RING_TX;
+
+	struct rte_pipeline_port_out_params ring_out_params = {
+		.ops = &rte_port_ring_writer_ops,
+		.arg_create = (void *)&ring_params_tx,
+		.f_action = NULL,
+	};
+
+	if (rte_pipeline_port_out_create(pipeline, &ring_out_params,
+		&ring_out_id) != 0)
+		return -CHECK_TABLE_PORT_CONFIG;
+
+	ring_params_tx.ring = RING_TX_2;
+
+	if (rte_pipeline_port_out_create(pipeline, &ring_out_params,
+		&ring_out_2_id) != 0)
+		return -CHECK_TABLE_PORT_CONFIG;
+
+	/* Add entry to the table */
+	struct rte_pipeline_table_entry default_entry = {
+		.action = RTE_PIPELINE_ACTION_DROP,
+		{.table_id = ring_out_id},
+	};
+
+	struct rte_pipeline_table_entry table_entry = {
+		.action = RTE_PIPELINE_ACTION_PORT,
+		{.port_id = ring_out_id},
+	};
+
+	struct rte_pipeline_table_entry *default_entry_ptr, *entry_ptr;
+
+	int key_found;
+
+	if (rte_pipeline_table_default_entry_add(pipeline, table_id,
+		&default_entry, &default_entry_ptr) != 0)
+		return -CHECK_TABLE_DEFAULT_ENTRY_ADD;
+
+	if (rte_pipeline_table_entry_add(pipeline, table_id,
+		key ? key : &table_entry, &table_entry, &key_found,
+			&entry_ptr) != 0)
+		return -CHECK_TABLE_ENTRY_ADD;
+
+	/* Create connections and check consistency */
+	if (rte_pipeline_port_in_connect_to_table(pipeline, ring_in_id,
+		table_id) != 0)
+		return -CHECK_TABLE_CONNECT;
+
+	if (rte_pipeline_port_in_enable(pipeline, ring_in_id) != 0)
+		return -CHECK_TABLE_PORT_ENABLE;
+
+	if (rte_pipeline_check(pipeline) != 0)
+		return -CHECK_TABLE_CONSISTENCY;
+
+	/* Flow test - All hits */
+	if (table_packets->n_hit_packets) {
+		for (i = 0; i < table_packets->n_hit_packets; i++)
+			RING_ENQUEUE(RING_RX, table_packets->hit_packet[i]);
+
+		RUN_PIPELINE(pipeline);
+
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_hit_packets,
+				table_packets->n_hit_packets);
+	}
+
+	/* Flow test - All misses */
+	if (table_packets->n_miss_packets) {
+		for (i = 0; i < table_packets->n_miss_packets; i++)
+			RING_ENQUEUE(RING_RX, table_packets->miss_packet[i]);
+
+		RUN_PIPELINE(pipeline);
+
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_miss_packets, 0);
+	}
+
+	/* Flow test - Half hits, half misses */
+	if (table_packets->n_hit_packets && table_packets->n_miss_packets) {
+		for (i = 0; i < (table_packets->n_hit_packets) / 2; i++)
+			RING_ENQUEUE(RING_RX, table_packets->hit_packet[i]);
+
+		for (i = 0; i < (table_packets->n_miss_packets) / 2; i++)
+			RING_ENQUEUE(RING_RX, table_packets->miss_packet[i]);
+
+		RUN_PIPELINE(pipeline);
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_hit_packets,
+			table_packets->n_hit_packets / 2);
+	}
+
+	/* Flow test - Single packet */
+	if (table_packets->n_hit_packets) {
+		RING_ENQUEUE(RING_RX, table_packets->hit_packet[0]);
+		RUN_PIPELINE(pipeline);
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_hit_packets, 1);
+	}
+	if (table_packets->n_miss_packets) {
+		RING_ENQUEUE(RING_RX, table_packets->miss_packet[0]);
+		RUN_PIPELINE(pipeline);
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_miss_packets, 0);
+	}
+
+	/* Change table entry action */
+	printf("Change entry action\n");
+	table_entry.port_id = ring_out_2_id;
+
+	if (rte_pipeline_table_default_entry_add(pipeline, table_id,
+		&default_entry, &default_entry_ptr) != 0)
+		return -CHECK_TABLE_DEFAULT_ENTRY_ADD;
+
+	if (rte_pipeline_table_entry_add(pipeline, table_id,
+		key ? key : &table_entry, &table_entry, &key_found,
+			&entry_ptr) != 0)
+		return -CHECK_TABLE_ENTRY_ADD;
+
+	/* Check that traffic destination has changed */
+	if (table_packets->n_hit_packets) {
+		for (i = 0; i < table_packets->n_hit_packets; i++)
+			RING_ENQUEUE(RING_RX, table_packets->hit_packet[i]);
+
+		RUN_PIPELINE(pipeline);
+		VERIFY_TRAFFIC(RING_TX, table_packets->n_hit_packets, 0);
+		VERIFY_TRAFFIC(RING_TX_2, table_packets->n_hit_packets,
+			table_packets->n_hit_packets);
+	}
+
+	printf("delete entry\n");
+	/* Delete table entry */
+	rte_pipeline_table_entry_delete(pipeline, table_id,
+		key ? key : &table_entry, &key_found, NULL);
+
+	rte_pipeline_free(pipeline);
+
+	return 0;
+}
+
+/* Table tests */
+int
+test_table_stub_combined(void)
+{
+	int status, i;
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < N_PACKETS; i++)
+		table_packets.hit_packet[i] = i;
+
+	table_packets.n_hit_packets = N_PACKETS;
+	table_packets.n_miss_packets = 0;
+
+	status = test_table_type(&rte_table_stub_ops, NULL, NULL,
+		&table_packets, NULL, 1);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	return 0;
+}
+
+int
+test_table_lpm_combined(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_lpm_params lpm_params = {
+		.n_rules = 1 << 16,
+		.entry_unique_size = 8,
+		.offset = 0,
+	};
+
+	struct rte_table_lpm_key lpm_key = {
+		.ip = 0xadadadad,
+		.depth = 16,
+	};
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+
+	for (i = 0; i < N_PACKETS; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < N_PACKETS; i++)
+		table_packets.miss_packet[i] = 0xfefefefe;
+
+	table_packets.n_hit_packets = N_PACKETS;
+	table_packets.n_miss_packets = N_PACKETS;
+
+	status = test_table_type(&rte_table_lpm_ops, (void *)&lpm_params,
+		(void *)&lpm_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	lpm_params.n_rules = 0;
+
+	status = test_table_type(&rte_table_lpm_ops, (void *)&lpm_params,
+		(void *)&lpm_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	lpm_params.n_rules = 1 << 24;
+	lpm_key.depth = 0;
+
+	status = test_table_type(&rte_table_lpm_ops, (void *)&lpm_params,
+		(void *)&lpm_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_ENTRY_ADD);
+
+	lpm_key.depth = 33;
+
+	status = test_table_type(&rte_table_lpm_ops, (void *)&lpm_params,
+		(void *)&lpm_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_ENTRY_ADD);
+
+	return 0;
+}
+
+int
+test_table_lpm_ipv6_combined(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_lpm_ipv6_params lpm_ipv6_params = {
+		.n_rules = 1 << 16,
+		.number_tbl8s = 1 << 13,
+		.entry_unique_size = 8,
+		.offset = 32,
+	};
+
+	struct rte_table_lpm_ipv6_key lpm_ipv6_key = {
+		.depth = 16,
+	};
+	memset(lpm_ipv6_key.ip, 0xad, 16);
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < N_PACKETS; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < N_PACKETS; i++)
+		table_packets.miss_packet[i] = 0xadadadab;
+
+	table_packets.n_hit_packets = N_PACKETS;
+	table_packets.n_miss_packets = N_PACKETS;
+
+	status = test_table_type(&rte_table_lpm_ipv6_ops,
+		(void *)&lpm_ipv6_params,
+		(void *)&lpm_ipv6_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	lpm_ipv6_params.n_rules = 0;
+
+	status = test_table_type(&rte_table_lpm_ipv6_ops,
+		(void *)&lpm_ipv6_params,
+		(void *)&lpm_ipv6_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	lpm_ipv6_params.n_rules = 1 << 24;
+	lpm_ipv6_key.depth = 0;
+
+	status = test_table_type(&rte_table_lpm_ipv6_ops,
+		(void *)&lpm_ipv6_params,
+		(void *)&lpm_ipv6_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_ENTRY_ADD);
+
+	lpm_ipv6_key.depth = 129;
+	status = test_table_type(&rte_table_lpm_ipv6_ops,
+		(void *)&lpm_ipv6_params,
+		(void *)&lpm_ipv6_key, &table_packets, NULL, 0);
+	VERIFY(status, CHECK_TABLE_ENTRY_ADD);
+
+	return 0;
+}
+
+int
+test_table_hash8lru(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key8_lru_params key8lru_params = {
+		.n_entries = 1<<24,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key8lru[8];
+	uint32_t *k8lru = (uint32_t *) key8lru;
+
+	memset(key8lru, 0, sizeof(key8lru));
+	k8lru[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xfefefefe;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key8_lru_ops,
+		(void *)&key8lru_params, (void *)key8lru, &table_packets,
+			NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key8lru_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key8_lru_ops,
+		(void *)&key8lru_params, (void *)key8lru, &table_packets,
+			NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key8lru_params.n_entries = 1<<16;
+	key8lru_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key8_lru_ops,
+		(void *)&key8lru_params, (void *)key8lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+int
+test_table_hash16lru(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key16_lru_params key16lru_params = {
+		.n_entries = 1<<16,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key16lru[16];
+	uint32_t *k16lru = (uint32_t *) key16lru;
+
+	memset(key16lru, 0, sizeof(key16lru));
+	k16lru[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xfefefefe;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key16_lru_ops,
+		(void *)&key16lru_params, (void *)key16lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key16lru_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key16_lru_ops,
+		(void *)&key16lru_params, (void *)key16lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key16lru_params.n_entries = 1<<16;
+	key16lru_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key16_lru_ops,
+		(void *)&key16lru_params, (void *)key16lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+int
+test_table_hash32lru(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key32_lru_params key32lru_params = {
+		.n_entries = 1<<16,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key32lru[32];
+	uint32_t *k32lru = (uint32_t *) key32lru;
+
+	memset(key32lru, 0, sizeof(key32lru));
+	k32lru[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xbdadadad;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key32_lru_ops,
+		(void *)&key32lru_params, (void *)key32lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key32lru_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key32_lru_ops,
+		(void *)&key32lru_params, (void *)key32lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key32lru_params.n_entries = 1<<16;
+	key32lru_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key32_lru_ops,
+		(void *)&key32lru_params, (void *)key32lru, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+int
+test_table_hash8ext(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key8_ext_params key8ext_params = {
+		.n_entries = 1<<16,
+		.n_entries_ext = 1<<15,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key8ext[8];
+	uint32_t *k8ext = (uint32_t *) key8ext;
+
+	memset(key8ext, 0, sizeof(key8ext));
+	k8ext[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xbdadadad;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key8_ext_ops,
+		(void *)&key8ext_params, (void *)key8ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key8ext_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key8_ext_ops,
+		(void *)&key8ext_params, (void *)key8ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key8ext_params.n_entries = 1<<16;
+	key8ext_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key8_ext_ops,
+		(void *)&key8ext_params, (void *)key8ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key8ext_params.f_hash = pipeline_test_hash;
+	key8ext_params.n_entries_ext = 0;
+
+	status = test_table_type(&rte_table_hash_key8_ext_ops,
+		(void *)&key8ext_params, (void *)key8ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+int
+test_table_hash16ext(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key16_ext_params key16ext_params = {
+		.n_entries = 1<<16,
+		.n_entries_ext = 1<<15,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key16ext[16];
+	uint32_t *k16ext = (uint32_t *) key16ext;
+
+	memset(key16ext, 0, sizeof(key16ext));
+	k16ext[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xbdadadad;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key16_ext_ops,
+		(void *)&key16ext_params, (void *)key16ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key16ext_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key16_ext_ops,
+		(void *)&key16ext_params, (void *)key16ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key16ext_params.n_entries = 1<<16;
+	key16ext_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key16_ext_ops,
+		(void *)&key16ext_params, (void *)key16ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key16ext_params.f_hash = pipeline_test_hash;
+	key16ext_params.n_entries_ext = 0;
+
+	status = test_table_type(&rte_table_hash_key16_ext_ops,
+		(void *)&key16ext_params, (void *)key16ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+int
+test_table_hash32ext(void)
+{
+	int status, i;
+
+	/* Traffic flow */
+	struct rte_table_hash_key32_ext_params key32ext_params = {
+		.n_entries = 1<<16,
+		.n_entries_ext = 1<<15,
+		.f_hash = pipeline_test_hash,
+		.seed = 0,
+		.signature_offset = 0,
+		.key_offset = 32,
+	};
+
+	uint8_t key32ext[32];
+	uint32_t *k32ext = (uint32_t *) key32ext;
+
+	memset(key32ext, 0, sizeof(key32ext));
+	k32ext[0] = 0xadadadad;
+
+	struct table_packets table_packets;
+
+	printf("--------------\n");
+	printf("RUNNING TEST - %s\n", __func__);
+	printf("--------------\n");
+	for (i = 0; i < 50; i++)
+		table_packets.hit_packet[i] = 0xadadadad;
+
+	for (i = 0; i < 50; i++)
+		table_packets.miss_packet[i] = 0xbdadadad;
+
+	table_packets.n_hit_packets = 50;
+	table_packets.n_miss_packets = 50;
+
+	status = test_table_type(&rte_table_hash_key32_ext_ops,
+		(void *)&key32ext_params, (void *)key32ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_OK);
+
+	/* Invalid parameters */
+	key32ext_params.n_entries = 0;
+
+	status = test_table_type(&rte_table_hash_key32_ext_ops,
+		(void *)&key32ext_params, (void *)key32ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key32ext_params.n_entries = 1<<16;
+	key32ext_params.f_hash = NULL;
+
+	status = test_table_type(&rte_table_hash_key32_ext_ops,
+		(void *)&key32ext_params, (void *)key32ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	key32ext_params.f_hash = pipeline_test_hash;
+	key32ext_params.n_entries_ext = 0;
+
+	status = test_table_type(&rte_table_hash_key32_ext_ops,
+		(void *)&key32ext_params, (void *)key32ext, &table_packets,
+		NULL, 0);
+	VERIFY(status, CHECK_TABLE_TABLE_CONFIG);
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table_combined.h b/app/test/test_table_combined.h
new file mode 100644
index 0000000..f94f09f
--- /dev/null
+++ b/app/test/test_table_combined.h
@@ -0,0 +1,55 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Test prototypes */
+int test_table_stub_combined(void);
+int test_table_lpm_combined(void);
+int test_table_lpm_ipv6_combined(void);
+#ifdef RTE_LIBRTE_ACL
+int test_table_acl(void);
+#endif
+int test_table_hash8unoptimized(void);
+int test_table_hash8lru(void);
+int test_table_hash8ext(void);
+int test_table_hash16unoptimized(void);
+int test_table_hash16lru(void);
+int test_table_hash16ext(void);
+int test_table_hash32unoptimized(void);
+int test_table_hash32lru(void);
+int test_table_hash32ext(void);
+
+/* Extern variables */
+typedef int (*combined_table_test)(void);
+
+extern combined_table_test table_tests_combined[];
+extern unsigned n_table_tests_combined;
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c
new file mode 100644
index 0000000..35644a6
--- /dev/null
+++ b/app/test/test_table_pipeline.c
@@ -0,0 +1,603 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef RTE_LIBRTE_PIPELINE
+
+#include "test.h"
+
+#else
+
+#include <string.h>
+#include <rte_pipeline.h>
+#include <rte_log.h>
+#include <inttypes.h>
+#include <rte_hexdump.h>
+#include "test_table.h"
+#include "test_table_pipeline.h"
+
+#define RTE_CBUF_UINT8_PTR(cbuf, offset)			\
+	(&cbuf->data[offset])
+#define RTE_CBUF_UINT32_PTR(cbuf, offset)			\
+	(&cbuf->data32[offset/sizeof(uint32_t)])
+
+#if 0
+
+static rte_pipeline_port_out_action_handler port_action_0x00
+	(struct rte_mbuf **pkts, uint32_t n, uint64_t *pkts_mask, void *arg);
+static rte_pipeline_port_out_action_handler port_action_0xFF
+	(struct rte_mbuf **pkts, uint32_t n, uint64_t *pkts_mask, void *arg);
+static rte_pipeline_port_out_action_handler port_action_stub
+	(struct rte_mbuf **pkts, uint32_t n, uint64_t *pkts_mask, void *arg);
+
+
+rte_pipeline_port_out_action_handler port_action_0x00(struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	void *arg)
+{
+	RTE_SET_USED(pkts);
+	RTE_SET_USED(n);
+	RTE_SET_USED(arg);
+	printf("Port Action 0x00\n");
+	*pkts_mask = 0x00;
+	return 0;
+}
+
+rte_pipeline_port_out_action_handler port_action_0xFF(struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	void *arg)
+{
+	RTE_SET_USED(pkts);
+	RTE_SET_USED(n);
+	RTE_SET_USED(arg);
+	printf("Port Action 0xFF\n");
+	*pkts_mask = 0xFF;
+	return 0;
+}
+
+rte_pipeline_port_out_action_handler port_action_stub(struct rte_mbuf **pkts,
+	uint32_t n,
+	uint64_t *pkts_mask,
+	void *arg)
+{
+	RTE_SET_USED(pkts);
+	RTE_SET_USED(n);
+	RTE_SET_USED(pkts_mask);
+	RTE_SET_USED(arg);
+	printf("Port Action stub\n");
+	return 0;
+}
+
+#endif
+
+rte_pipeline_table_action_handler_hit
+table_action_0x00(struct rte_mbuf **pkts, uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry **actions, uint32_t action_mask);
+
+rte_pipeline_table_action_handler_hit
+table_action_stub_hit(struct rte_mbuf **pkts, uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry **actions, uint32_t action_mask);
+
+rte_pipeline_table_action_handler_miss
+table_action_stub_miss(struct rte_mbuf **pkts, uint64_t *pkts_mask,
+	struct rte_pipeline_table_entry *action, uint32_t action_mask);
+
+rte_pipeline_table_action_handler_hit
+table_action_0x00(__attribute__((unused)) struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	__attribute__((unused)) struct rte_pipeline_table_entry **actions,
+	__attribute__((unused)) uint32_t action_mask)
+{
+	printf("Table Action, setting pkts_mask to 0x00\n");
+	*pkts_mask = 0x00;
+	return 0;
+}
+
+rte_pipeline_table_action_handler_hit
+table_action_stub_hit(__attribute__((unused)) struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	__attribute__((unused)) struct rte_pipeline_table_entry **actions,
+	__attribute__((unused)) uint32_t action_mask)
+{
+	printf("STUB Table Action Hit - doing nothing\n");
+	printf("STUB Table Action Hit - setting mask to 0x%"PRIx64"\n",
+		override_hit_mask);
+	*pkts_mask = override_hit_mask;
+	return 0;
+}
+rte_pipeline_table_action_handler_miss
+table_action_stub_miss(__attribute__((unused)) struct rte_mbuf **pkts,
+	uint64_t *pkts_mask,
+	__attribute__((unused)) struct rte_pipeline_table_entry *action,
+	__attribute__((unused)) uint32_t action_mask)
+{
+	printf("STUB Table Action Miss - setting mask to 0x%"PRIx64"\n",
+		override_miss_mask);
+	*pkts_mask = override_miss_mask;
+	return 0;
+}
+
+
+enum e_test_type {
+	e_TEST_STUB = 0,
+	e_TEST_LPM,
+	e_TEST_LPM6,
+	e_TEST_HASH_LRU_8,
+	e_TEST_HASH_LRU_16,
+	e_TEST_HASH_LRU_32,
+	e_TEST_HASH_EXT_8,
+	e_TEST_HASH_EXT_16,
+	e_TEST_HASH_EXT_32
+};
+
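+/*
+ * Human-readable test names; must be kept in the same order as
+ * enum e_test_type above.
+ */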
+char pipeline_test_names[][64] = {
+	"Stub",
+	"LPM",
+	"LPMv6",
+	"8-byte key LRU Hash",
+	"16-byte key LRU Hash",
+	"32-byte key LRU Hash",
+	"8-byte key Ext Hash",
+	"16-byte key Ext Hash",
+	"32-byte key Ext Hash",
+	""
+};
+
+
+static int
+cleanup_pipeline(void)
+{
+	rte_pipeline_free(p);
+
+	return 0;
+}
+
+
+static int check_pipeline_invalid_params(void);
+
+static int
+check_pipeline_invalid_params(void)
+{
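+	/*
+	 * Each parameter set below is invalid in a single field (NULL name,
+	 * negative socket id, overly large socket id) and must cause
+	 * rte_pipeline_create() to fail.
+	 */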
+	struct rte_pipeline_params pipeline_params_1 = {
+		.name = NULL,
+		.socket_id = 0,
+	};
+	struct rte_pipeline_params pipeline_params_2 = {
+		.name = "PIPELINE",
+		.socket_id = -1,
+	};
+	struct rte_pipeline_params pipeline_params_3 = {
+		.name = "PIPELINE",
+		.socket_id = 127,
+	};
+
+	p = rte_pipeline_create(NULL);
+	if (p != NULL) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: configured pipeline with null params\n",
+			__func__);
+		goto fail;
+	}
+	p = rte_pipeline_create(&pipeline_params_1);
+	if (p != NULL) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: configured pipeline with NULL name\n", __func__);
+		goto fail;
+	}
+
+	p = rte_pipeline_create(&pipeline_params_2);
+	if (p != NULL) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: configured pipeline with negative socket id\n",
+			__func__);
+		goto fail;
+	}
+
+	p = rte_pipeline_create(&pipeline_params_3);
+	if (p != NULL) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: configured pipeline with out-of-range socket id\n",
+			__func__);
+		goto fail;
+	}
+
+	/*
+	 * p is still NULL at this point, so the consistency check is
+	 * expected to fail; treat a success report as an error.
+	 */
+	if (!rte_pipeline_check(p)) {
+		rte_panic("Pipeline consistency reported as OK\n");
+		goto fail;
+	}
+
+	return 0;
+fail:
+	return -1;
+}
+
+
+static int
+setup_pipeline(int test_type)
+{
+	int ret;
+	int i;
+	struct rte_pipeline_params pipeline_params = {
+		.name = "PIPELINE",
+		.socket_id = 0,
+	};
+
+	RTE_LOG(INFO, PIPELINE, "%s: **** Setting up %s test\n",
+		__func__, pipeline_test_names[test_type]);
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL) {
+		RTE_LOG(INFO, PIPELINE, "%s: Failed to configure pipeline\n",
+			__func__);
+		goto fail;
+	}
+
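+	/* Exercise the free path once, then re-create the pipeline below */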
+	ret = rte_pipeline_free(p);
+	if (ret != 0) {
+		RTE_LOG(INFO, PIPELINE, "%s: Failed to free pipeline\n",
+			__func__);
+		goto fail;
+	}
+
+	/* Pipeline configuration */
+	p = rte_pipeline_create(&pipeline_params);
+	if (p == NULL) {
+		RTE_LOG(INFO, PIPELINE, "%s: Failed to configure pipeline\n",
+			__func__);
+		goto fail;
+	}
+
+	/* Input port configuration */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_port_ring_reader_params port_ring_params = {
+			.ring = rings_rx[i],
+		};
+
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = &rte_port_ring_reader_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.burst_size = BURST_SIZE,
+		};
+
+		/* Input port action handlers are deliberately left NULL
+		 * (placeholder for installing per-port input actions) */
+		if (i)
+			port_params.f_action = NULL;
+
+		ret = rte_pipeline_port_in_create(p, &port_params,
+			&port_in_id[i]);
+		if (ret) {
+			rte_panic("Unable to configure input port %d, ret:%d\n",
+				i, ret);
+			goto fail;
+		}
+	}
+
+	/* Output port configuration */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_port_ring_writer_params port_ring_params = {
+			.ring = rings_tx[i],
+			.tx_burst_sz = BURST_SIZE,
+		};
+
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = &rte_port_ring_writer_ops,
+			.arg_create = (void *) &port_ring_params,
+			.f_action = NULL,
+			.arg_ah = NULL,
+		};
+
+		if (i)
+			port_params.f_action = port_out_action;
+
+		if (rte_pipeline_port_out_create(p, &port_params,
+			&port_out_id[i])) {
+			rte_panic("Unable to configure output port %d\n", i);
+			goto fail;
+		}
+	}
+
+	/* Table configuration  */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_pipeline_table_params table_params = {
+				.ops = &rte_table_stub_ops,
+				.arg_create = NULL,
+				.f_action_hit = action_handler_hit,
+				.f_action_miss = action_handler_miss,
+				.action_data_size = 0,
+		};
+
+		if (rte_pipeline_table_create(p, &table_params, &table_id[i])) {
+			rte_panic("Unable to configure table %u\n", i);
+			goto fail;
+		}
+
+		if (connect_miss_action_to_table)
+			if (rte_pipeline_table_create(p, &table_params,
+				&table_id[i+2])) {
+				rte_panic("Unable to configure table %u\n", i);
+				goto fail;
+			}
+	}
+
+	for (i = 0; i < N_PORTS; i++)
+		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
+			table_id[i])) {
+			rte_panic("Unable to connect input port %u to "
+				"table %u\n", port_in_id[i],  table_id[i]);
+			goto fail;
+		}
+
+	/* Add entries to tables */
+	for (i = 0; i < N_PORTS; i++) {
+		struct rte_pipeline_table_entry default_entry = {
+			.action = (enum rte_pipeline_action)
+				table_entry_default_action,
+			{.port_id = port_out_id[i^1]},
+		};
+		struct rte_pipeline_table_entry *default_entry_ptr;
+
+		if (connect_miss_action_to_table) {
+			printf("Setting first table to output to next table\n");
+			default_entry.action = RTE_PIPELINE_ACTION_TABLE;
+			default_entry.table_id = table_id[i+2];
+		}
+
+		/* Add the default action for the table. */
+		ret = rte_pipeline_table_default_entry_add(p, table_id[i],
+			&default_entry, &default_entry_ptr);
+		if (ret < 0) {
+			rte_panic("Unable to add default entry to table %u "
+				"code %d\n", table_id[i], ret);
+			goto fail;
+		} else
+			printf("Added default entry to table id %d with "
+				"action %x\n",
+				table_id[i], default_entry.action);
+
+		if (connect_miss_action_to_table) {
+			/* Create a second table so the first one can pass
+			 * traffic into it */
+			struct rte_pipeline_table_entry default_entry = {
+				.action = RTE_PIPELINE_ACTION_PORT,
+				{.port_id = port_out_id[i^1]},
+			};
+			printf("Setting secont table to output to port\n");
+
+			/* Add the default action for the table. */
+			ret = rte_pipeline_table_default_entry_add(p,
+				table_id[i+2],
+				&default_entry, &default_entry_ptr);
+			if (ret < 0) {
+				rte_panic("Unable to add default entry to "
+					"table %u code %d\n",
+					table_id[i+2], ret);
+				goto fail;
+			} else
+				printf("Added default entry to table id %d "
+					"with action %x\n",
+					table_id[i+2], default_entry.action);
+		}
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < N_PORTS ; i++)
+		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
+			rte_panic("Unable to enable input port %u\n",
+				port_in_id[i]);
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p) < 0) {
+		rte_panic("Pipeline consistency check failed\n");
+		goto fail;
+	} else
+		printf("Pipeline Consistency OK!\n");
+
+	return 0;
+fail:
+
+	return -1;
+}
+
+static int
+test_pipeline_single_filter(int test_type, int expected_count)
+{
+	int i;
+	int j;
+	int ret;
+	int tx_count;
+
+	RTE_LOG(INFO, PIPELINE, "%s: **** Running %s test\n",
+		__func__, pipeline_test_names[test_type]);
+	/* Run the pipeline once while the RX rings are still empty */
+	rte_pipeline_run(p);
+
+	ret = rte_pipeline_flush(NULL);
+	if (ret != -EINVAL) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: No pipeline flush error NULL pipeline (%d)\n",
+			__func__, ret);
+		goto fail;
+	}
+
+	/* Allocate a few mbufs and manually insert them into the RX rings. */
+	for (i = 0; i < N_PORTS; i++)
+		for (j = 0; j < N_PORTS; j++) {
+			struct rte_mbuf *m;
+			uint8_t *key;
+			uint32_t *k32;
+
+			m = rte_pktmbuf_alloc(pool);
+			if (m == NULL) {
+				rte_panic("Failed to alloc mbuf from pool\n");
+				return -1;
+			}
+			key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+
+			k32 = (uint32_t *) key;
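+			/* Alternate the key on every other packet so that
+			 * table lookups see a mix of hits and misses */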
+			k32[0] = 0xadadadad >> (j % 2);
+
+			RTE_LOG(INFO, PIPELINE, "%s: Enqueue onto ring %d\n",
+				__func__, i);
+			rte_ring_enqueue(rings_rx[i], m);
+		}
+
+	/* Run pipeline once */
+	rte_pipeline_run(p);
+
+	/*
+	 * The pipeline must be flushed, as there may be fewer hits than the
+	 * burst size, and those packets will not yet have been pushed out to
+	 * the TX rings.
+	 */
+	rte_pipeline_flush(p);
+
+	/*
+	 * Now check what came back on the TX rings: we should see whatever
+	 * packets we had hits on that were destined for the output ports.
+	 */
+	tx_count = 0;
+
+	for (i = 0; i < N_PORTS; i++) {
+		void *objs[RING_TX_SIZE];
+		struct rte_mbuf *mbuf;
+
+		ret = rte_ring_sc_dequeue_burst(rings_tx[i], objs, 10);
+		if (ret <= 0)
+			printf("Got no objects from ring %d - error code %d\n",
+				i, ret);
+		else {
+			printf("Got %d object(s) from ring %d!\n", ret, i);
+			for (j = 0; j < ret; j++) {
+				mbuf = (struct rte_mbuf *)objs[j];
+				rte_hexdump(stdout, "Object:", mbuf->pkt.data,
+					mbuf->pkt.data_len);
+				rte_pktmbuf_free(mbuf);
+			}
+			tx_count += ret;
+		}
+	}
+
+	if (tx_count != expected_count) {
+		RTE_LOG(INFO, PIPELINE,
+			"%s: Unexpected packets out for %s test, expected %d, "
+			"got %d\n", __func__, pipeline_test_names[test_type],
+			expected_count, tx_count);
+		goto fail;
+	}
+
+	cleanup_pipeline();
+
+	return 0;
+fail:
+	return -1;
+
+}
+
+int
+test_table_pipeline(void)
+{
+	/* TEST - All packets dropped */
+	action_handler_hit = NULL;
+	action_handler_miss = NULL;
+	table_entry_default_action = RTE_PIPELINE_ACTION_DROP;
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 0) < 0)
+		return -1;
+
+	/* TEST - All packets passed through */
+	table_entry_default_action = RTE_PIPELINE_ACTION_PORT;
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 4) < 0)
+		return -1;
+
+	/* TEST - one packet per port */
+	action_handler_hit = NULL;
+	action_handler_miss =
+		(rte_pipeline_table_action_handler_miss) table_action_stub_miss;
+	table_entry_default_action = RTE_PIPELINE_ACTION_PORT;
+	override_miss_mask = 0x01; /* one packet per port */
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 2) < 0)
+		return -1;
+
+	/* TEST - one packet per port */
+	override_miss_mask = 0x02; /* one packet per port */
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 2) < 0)
+		return -1;
+
+	/* TEST - all packets per port */
+	override_miss_mask = 0x03; /* all packets per port */
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 4) < 0)
+		return -1;
+
+	/*
+	 * This test sets up two tables in the pipeline: the first table
+	 * forwards to another table on lookup miss, and the second table
+	 * forwards to a port.
+	 */
+	connect_miss_action_to_table = 1;
+	table_entry_default_action = RTE_PIPELINE_ACTION_TABLE;
+	action_handler_hit = NULL;  /* not for stub, hitmask always zero */
+	action_handler_miss = NULL;
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 4) < 0)
+		return -1;
+	connect_miss_action_to_table = 0;
+
+	printf("TEST - two tables, hitmask override to 0x01\n");
+	connect_miss_action_to_table = 1;
+	action_handler_miss =
+		(rte_pipeline_table_action_handler_miss)table_action_stub_miss;
+	override_miss_mask = 0x01;
+	setup_pipeline(e_TEST_STUB);
+	if (test_pipeline_single_filter(e_TEST_STUB, 2) < 0)
+		return -1;
+	connect_miss_action_to_table = 0;
+
+	if (check_pipeline_invalid_params()) {
+		RTE_LOG(INFO, PIPELINE, "%s: Check pipeline invalid params "
+			"failed.\n", __func__);
+		return -1;
+	}
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table_pipeline.h b/app/test/test_table_pipeline.h
new file mode 100644
index 0000000..b3f20ba
--- /dev/null
+++ b/app/test/test_table_pipeline.h
@@ -0,0 +1,35 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Test prototypes */
+int test_table_pipeline(void);
diff --git a/app/test/test_table_ports.c b/app/test/test_table_ports.c
new file mode 100644
index 0000000..e9d45b0
--- /dev/null
+++ b/app/test/test_table_ports.c
@@ -0,0 +1,224 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifdef RTE_LIBRTE_TABLE
+
+#include "test_table_ports.h"
+#include "test_table.h"
+
+port_test port_tests[] = {
+	test_port_ring_reader,
+	test_port_ring_writer,
+};
+
+unsigned n_port_tests = RTE_DIM(port_tests);
+
+/* Port tests */
+int
+test_port_ring_reader(void)
+{
+	int status, i;
+	struct rte_port_ring_reader_params port_ring_reader_params;
+	void *port;
+
+	/* Invalid params */
+	port = rte_port_ring_reader_ops.f_create(NULL, 0);
+	if (port != NULL)
+		return -1;
+
+	status = rte_port_ring_reader_ops.f_free(port);
+	if (status >= 0)
+		return -2;
+
+	/* Create and free */
+	port_ring_reader_params.ring = RING_RX;
+	port = rte_port_ring_reader_ops.f_create(&port_ring_reader_params, 0);
+	if (port == NULL)
+		return -3;
+
+	status = rte_port_ring_reader_ops.f_free(port);
+	if (status != 0)
+		return -4;
+
+	/* -- Traffic RX -- */
+	int expected_pkts, received_pkts;
+	struct rte_mbuf *res_mbuf[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *mbuf[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	port_ring_reader_params.ring = RING_RX;
+	port = rte_port_ring_reader_ops.f_create(&port_ring_reader_params, 0);
+
+	/* Single packet */
+	mbuf[0] = (void *)rte_pktmbuf_alloc(pool);
+
+	expected_pkts = rte_ring_sp_enqueue_burst(port_ring_reader_params.ring,
+		mbuf, 1);
+	received_pkts = rte_port_ring_reader_ops.f_rx(port, res_mbuf, 1);
+
+	if (received_pkts < expected_pkts)
+		return -5;
+
+	rte_pktmbuf_free(res_mbuf[0]);
+
+	/* Multiple packets */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		mbuf[i] = rte_pktmbuf_alloc(pool);
+
+	expected_pkts = rte_ring_sp_enqueue_burst(port_ring_reader_params.ring,
+		(void * const *) mbuf, RTE_PORT_IN_BURST_SIZE_MAX);
+	received_pkts = rte_port_ring_reader_ops.f_rx(port, res_mbuf,
+		RTE_PORT_IN_BURST_SIZE_MAX);
+
+	if (received_pkts < expected_pkts)
+		return -6;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(res_mbuf[i]);
+
+	return 0;
+}
+
+int
+test_port_ring_writer(void)
+{
+	int status, i;
+	struct rte_port_ring_writer_params port_ring_writer_params;
+	void *port;
+
+	/* Invalid params */
+	port = rte_port_ring_writer_ops.f_create(NULL, 0);
+	if (port != NULL)
+		return -1;
+
+	status = rte_port_ring_writer_ops.f_free(port);
+	if (status >= 0)
+		return -2;
+
+	port_ring_writer_params.ring = NULL;
+
+	port = rte_port_ring_writer_ops.f_create(&port_ring_writer_params, 0);
+	if (port != NULL)
+		return -3;
+
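+	/* A tx burst size above RTE_PORT_IN_BURST_SIZE_MAX must be rejected */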
+	port_ring_writer_params.ring = RING_TX;
+	port_ring_writer_params.tx_burst_sz = RTE_PORT_IN_BURST_SIZE_MAX + 1;
+
+	port = rte_port_ring_writer_ops.f_create(&port_ring_writer_params, 0);
+	if (port != NULL)
+		return -4;
+
+	/* Create and free */
+	port_ring_writer_params.ring = RING_TX;
+	port_ring_writer_params.tx_burst_sz = RTE_PORT_IN_BURST_SIZE_MAX;
+
+	port = rte_port_ring_writer_ops.f_create(&port_ring_writer_params, 0);
+	if (port == NULL)
+		return -5;
+
+	status = rte_port_ring_writer_ops.f_free(port);
+	if (status != 0)
+		return -6;
+
+	/* -- Traffic TX -- */
+	int expected_pkts, received_pkts;
+	struct rte_mbuf *mbuf[RTE_PORT_IN_BURST_SIZE_MAX];
+	struct rte_mbuf *res_mbuf[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	port_ring_writer_params.ring = RING_TX;
+	port_ring_writer_params.tx_burst_sz = RTE_PORT_IN_BURST_SIZE_MAX;
+	port = rte_port_ring_writer_ops.f_create(&port_ring_writer_params, 0);
+
+	/* Single packet */
+	mbuf[0] = rte_pktmbuf_alloc(pool);
+
+	rte_port_ring_writer_ops.f_tx(port, mbuf[0]);
+	rte_port_ring_writer_ops.f_flush(port);
+	expected_pkts = 1;
+	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+
+	if (received_pkts < expected_pkts)
+		return -7;
+
+	rte_pktmbuf_free(res_mbuf[0]);
+
+	/* Multiple packets */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++) {
+		mbuf[i] = rte_pktmbuf_alloc(pool);
+		rte_port_ring_writer_ops.f_tx(port, mbuf[i]);
+	}
+
+	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
+	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+
+	if (received_pkts < expected_pkts)
+		return -8;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(res_mbuf[i]);
+
+	/* TX Bulk */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		mbuf[i] = rte_pktmbuf_alloc(pool);
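+	/* The all-ones packet mask transmits the whole burst in one call */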
+	rte_port_ring_writer_ops.f_tx_bulk(port, mbuf, (uint64_t)-1);
+
+	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
+	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+
+	if (received_pkts < expected_pkts)
+		return -9;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(res_mbuf[i]);
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		mbuf[i] = rte_pktmbuf_alloc(pool);
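+	/* Split the burst across two calls: mask ~0x2 sends every packet
+	 * except index 1, then mask 0x2 sends the remaining one */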
+	rte_port_ring_writer_ops.f_tx_bulk(port, mbuf, (uint64_t)-3);
+	rte_port_ring_writer_ops.f_tx_bulk(port, mbuf, (uint64_t)2);
+
+	expected_pkts = RTE_PORT_IN_BURST_SIZE_MAX;
+	received_pkts = rte_ring_sc_dequeue_burst(port_ring_writer_params.ring,
+		(void **)res_mbuf, port_ring_writer_params.tx_burst_sz);
+
+	if (received_pkts < expected_pkts)
+		return -10;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(res_mbuf[i]);
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table_ports.h b/app/test/test_table_ports.h
new file mode 100644
index 0000000..512b77f
--- /dev/null
+++ b/app/test/test_table_ports.h
@@ -0,0 +1,42 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Test prototypes */
+int test_port_ring_reader(void);
+int test_port_ring_writer(void);
+
+/* Extern variables */
+typedef int (*port_test)(void);
+
+extern port_test port_tests[];
+extern unsigned n_port_tests;
diff --git a/app/test/test_table_tables.c b/app/test/test_table_tables.c
new file mode 100644
index 0000000..da8338c
--- /dev/null
+++ b/app/test/test_table_tables.c
@@ -0,0 +1,907 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifdef RTE_LIBRTE_TABLE
+
+#include <string.h>
+#include <rte_byteorder.h>
+#include <rte_table_lpm_ipv6.h>
+#include <rte_lru.h>
+#include <rte_cycles.h>
+#include "test_table_tables.h"
+#include "test_table.h"
+
+table_test table_tests[] = {
+	test_table_stub,
+	test_table_array,
+	test_table_lpm,
+	test_table_lpm_ipv6,
+	test_table_hash_lru,
+	test_table_hash_ext,
+};
+
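+/*
+ * Allocate an mbuf and write a 4-byte key value into its metadata area at
+ * offset 32, with the precomputed hash signature stored at offset 0. These
+ * offsets match the key_offset/signature_offset used by the table
+ * parameters throughout these tests.
+ */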
+#define PREPARE_PACKET(mbuf, value) do {				\
+	uint32_t *k32, *signature;					\
+	uint8_t *key;							\
+	mbuf = rte_pktmbuf_alloc(pool);					\
+	signature = RTE_MBUF_METADATA_UINT32_PTR(mbuf, 0);		\
+	key = RTE_MBUF_METADATA_UINT8_PTR(mbuf, 32);			\
+	memset(key, 0, 32);						\
+	k32 = (uint32_t *) key;						\
+	k32[0] = (value);						\
+	*signature = pipeline_test_hash(key, 0, 0);			\
+} while (0)
+
+unsigned n_table_tests = RTE_DIM(table_tests);
+
+/* Function prototypes */
+static int
+test_table_hash_lru_generic(struct rte_table_ops *ops);
+static int
+test_table_hash_ext_generic(struct rte_table_ops *ops);
+
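+/*
+ * Local mirror of the 8-byte-key hash bucket layout, so that the
+ * lru_update()/lru_pos() macros can be exercised on a standalone bucket.
+ */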
+struct rte_bucket_4_8 {
+	/* Cache line 0 */
+	uint64_t signature;
+	uint64_t lru_list;
+	struct rte_bucket_4_8 *next;
+	uint64_t next_valid;
+	uint64_t key[4];
+	/* Cache line 1 */
+	uint8_t data[0];
+};
+
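+/*
+ * Expected lru_list contents after the update sequence in
+ * test_lru_update(); the encoding depends on the LRU strategy.
+ */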
+#if RTE_TABLE_HASH_LRU_STRATEGY == 3
+uint64_t shuffles = 0xfffffffdfffbfff9ULL;
+#else
+uint64_t shuffles = 0x0003000200010000ULL;
+#endif
+
+static int
+test_lru_update(void)
+{
+	struct rte_bucket_4_8 b;
+	struct rte_bucket_4_8 *bucket;
+	uint32_t i;
+	uint64_t pos;
+	uint64_t iterations;
+	uint64_t j;
+	int poss;
+
+	printf("---------------------------\n");
+	printf("Testing lru_update macro...\n");
+	printf("---------------------------\n");
+	bucket = &b;
+	iterations = 10;
+#if RTE_TABLE_HASH_LRU_STRATEGY == 3
+	bucket->lru_list = 0xFFFFFFFFFFFFFFFFULL;
+#else
+	bucket->lru_list = 0x0000000100020003ULL;
+#endif
+	poss = 0;
+	for (j = 0; j < iterations; j++)
+		for (i = 0; i < 9; i++) {
+			uint32_t idx = i >> 1;
+			lru_update(bucket, idx);
+			pos = lru_pos(bucket);
+			poss += pos;
+			printf("%s: %d lru_list=%016"PRIx64", upd=%d, "
+				"pos=%"PRIx64"\n",
+				__func__, i, bucket->lru_list, i>>1, pos);
+		}
+
+	if (bucket->lru_list != shuffles) {
+		printf("%s: ERROR: %d lru_list=%016"PRIx64", expected %016"
+			PRIx64"\n",
+			__func__, i, bucket->lru_list, shuffles);
+		return -1;
+	}
+	printf("%s: output checksum of results =%d\n",
+		__func__, poss);
+#if 0
+	if (poss != 126) {
+		printf("%s: ERROR output checksum of results =%d expected %d\n",
+			__func__, poss, 126);
+		return -1;
+	}
+#endif
+
+	fflush(stdout);
+
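+	/* Timed run: measure TSC cycles per lru_update() call */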
+	uint64_t sc_start = rte_rdtsc();
+	iterations = 100000000;
+	poss = 0;
+	for (j = 0; j < iterations; j++) {
+		for (i = 0; i < 4; i++) {
+			lru_update(bucket, i);
+			pos |= bucket->lru_list;
+		}
+	}
+	uint64_t sc_end = rte_rdtsc();
+
+	printf("%s: output checksum of results =%llu\n",
+		__func__, (long long unsigned int)pos);
+	printf("%s: start=%016"PRIx64", end=%016"PRIx64"\n",
+		__func__, sc_start, sc_end);
+	printf("\nlru_update: %lu cycles per loop iteration.\n\n",
+		(long unsigned int)((sc_end-sc_start)/(iterations*4)));
+
+	return 0;
+}
+
+/* Table tests */
+int
+test_table_stub(void)
+{
+	int i;
+	uint64_t expected_mask = 0, result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+
+	/* Create */
+	table = rte_table_stub_ops.f_create(NULL, 0, 1);
+	if (table == NULL)
+		return -1;
+
+	/* Traffic flow */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0)
+			PREPARE_PACKET(mbufs[i], 0xadadadad);
+		else
+			PREPARE_PACKET(mbufs[i], 0xadadadab);
+
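+	/* The stub table never reports hits, so the hit mask must stay zero */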
+	expected_mask = 0;
+	rte_table_stub_ops.f_lookup(table, mbufs, -1,
+		&result_mask, (void **)entries);
+	if (result_mask != expected_mask)
+		return -2;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	return 0;
+}
+
+int
+test_table_array(void)
+{
+	int status, i;
+	uint64_t result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+	char entry1, entry2;
+	void *entry_ptr;
+	int key_found;
+
+	/* Create */
+	struct rte_table_array_params array_params;
+
+	table = rte_table_array_ops.f_create(NULL, 0, 1);
+	if (table != NULL)
+		return -1;
+
+	array_params.n_entries = 0;
+
+	table = rte_table_array_ops.f_create(&array_params, 0, 1);
+	if (table != NULL)
+		return -2;
+
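+	/* An entry count that is not a power of two must be rejected */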
+	array_params.n_entries = 7;
+
+	table = rte_table_array_ops.f_create(&array_params, 0, 1);
+	if (table != NULL)
+		return -3;
+
+	array_params.n_entries = 1 << 24;
+	array_params.offset = 1;
+
+	table = rte_table_array_ops.f_create(&array_params, 0, 1);
+	if (table != NULL)
+		return -4;
+
+	array_params.offset = 32;
+
+	table = rte_table_array_ops.f_create(&array_params, 0, 1);
+	if (table == NULL)
+		return -5;
+
+	/* Free */
+	status = rte_table_array_ops.f_free(table);
+	if (status < 0)
+		return -6;
+
+	status = rte_table_array_ops.f_free(NULL);
+	if (status == 0)
+		return -7;
+
+	/* Add */
+	struct rte_table_array_key array_key_1 = {
+		.pos = 10,
+	};
+	struct rte_table_array_key array_key_2 = {
+		.pos = 20,
+	};
+	entry1 = 'A';
+	entry2 = 'B';
+
+	table = rte_table_array_ops.f_create(&array_params, 0, 1);
+	if (table == NULL)
+		return -8;
+
+	status = rte_table_array_ops.f_add(NULL, (void *) &array_key_1, &entry1,
+		&key_found, &entry_ptr);
+	if (status == 0)
+		return -9;
+
+	status = rte_table_array_ops.f_add(table, (void *) &array_key_1, NULL,
+		&key_found, &entry_ptr);
+	if (status == 0)
+		return -10;
+
+	status = rte_table_array_ops.f_add(table, (void *) &array_key_1,
+		&entry1, &key_found, &entry_ptr);
+	if (status != 0)
+		return -11;
+
+	/* Traffic flow */
+	status = rte_table_array_ops.f_add(table, (void *) &array_key_2,
+		&entry2, &key_found, &entry_ptr);
+	if (status != 0)
+		return -12;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0)
+			PREPARE_PACKET(mbufs[i], 10);
+		else
+			PREPARE_PACKET(mbufs[i], 20);
+
+	rte_table_array_ops.f_lookup(table, mbufs, -1,
+		&result_mask, (void **)entries);
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0 && *entries[i] != 'A')
+			return -13;
+		else if (i % 2 == 1 && *entries[i] != 'B')
+			return -14;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	status = rte_table_array_ops.f_free(table);
+
+	return 0;
+}
+
+int
+test_table_lpm(void)
+{
+	int status, i;
+	uint64_t expected_mask = 0, result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+	char entry;
+	void *entry_ptr;
+	int key_found;
+	uint32_t entry_size = 1;
+
+	/* Create */
+	struct rte_table_lpm_params lpm_params;
+
+	table = rte_table_lpm_ops.f_create(NULL, 0, entry_size);
+	if (table != NULL)
+		return -1;
+
+	lpm_params.n_rules = 0;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -2;
+
+	lpm_params.n_rules = 1 << 24;
+	lpm_params.offset = 1;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -3;
+
+	lpm_params.offset = 32;
+	lpm_params.entry_unique_size = 0;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -4;
+
+	lpm_params.entry_unique_size = entry_size + 1;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -5;
+
+	lpm_params.entry_unique_size = entry_size;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, entry_size);
+	if (table == NULL)
+		return -6;
+
+	/* Free */
+	status = rte_table_lpm_ops.f_free(table);
+	if (status < 0)
+		return -7;
+
+	status = rte_table_lpm_ops.f_free(NULL);
+	if (status == 0)
+		return -8;
+
+	/* Add */
+	struct rte_table_lpm_key lpm_key;
+	lpm_key.ip = 0xadadadad;
+
+	table = rte_table_lpm_ops.f_create(&lpm_params, 0, 1);
+	if (table == NULL)
+		return -9;
+
+	status = rte_table_lpm_ops.f_add(NULL, &lpm_key, &entry, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -10;
+
+	status = rte_table_lpm_ops.f_add(table, NULL, &entry, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -11;
+
+	status = rte_table_lpm_ops.f_add(table, &lpm_key, NULL, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -12;
+
+	lpm_key.depth = 0;
+	status = rte_table_lpm_ops.f_add(table, &lpm_key, &entry, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -13;
+
+	lpm_key.depth = 33;
+	status = rte_table_lpm_ops.f_add(table, &lpm_key, &entry, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -14;
+
+	lpm_key.depth = 16;
+	status = rte_table_lpm_ops.f_add(table, &lpm_key, &entry, &key_found,
+		&entry_ptr);
+	if (status != 0)
+		return -15;
+
+	/* Delete */
+	status = rte_table_lpm_ops.f_delete(NULL, &lpm_key, &key_found, NULL);
+	if (status == 0)
+		return -16;
+
+	status = rte_table_lpm_ops.f_delete(table, NULL, &key_found, NULL);
+	if (status == 0)
+		return -17;
+
+	lpm_key.depth = 0;
+	status = rte_table_lpm_ops.f_delete(table, &lpm_key, &key_found, NULL);
+	if (status == 0)
+		return -18;
+
+	lpm_key.depth = 33;
+	status = rte_table_lpm_ops.f_delete(table, &lpm_key, &key_found, NULL);
+	if (status == 0)
+		return -19;
+
+	lpm_key.depth = 16;
+	status = rte_table_lpm_ops.f_delete(table, &lpm_key, &key_found, NULL);
+	if (status != 0)
+		return -20;
+
+	status = rte_table_lpm_ops.f_delete(table, &lpm_key, &key_found, NULL);
+	if (status != 0)
+		return -21;
+
+	/* Traffic flow */
+	entry = 'A';
+	status = rte_table_lpm_ops.f_add(table, &lpm_key, &entry, &key_found,
+		&entry_ptr);
+	if (status < 0)
+		return -22;
+
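+	/* Even-index packets carry the key installed above and are expected
+	 * to hit; odd-index packets are expected to miss */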
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0) {
+			expected_mask |= (uint64_t)1 << i;
+			PREPARE_PACKET(mbufs[i], 0xadadadad);
+		} else
+			PREPARE_PACKET(mbufs[i], 0xadadadab);
+
+	rte_table_lpm_ops.f_lookup(table, mbufs, -1,
+		&result_mask, (void **)entries);
+	if (result_mask != expected_mask)
+		return -23;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	status = rte_table_lpm_ops.f_free(table);
+
+	return 0;
+}
+
+int
+test_table_lpm_ipv6(void)
+{
+	int status, i;
+	uint64_t expected_mask = 0, result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+	char entry;
+	void *entry_ptr;
+	int key_found;
+	uint32_t entry_size = 1;
+
+	/* Create */
+	struct rte_table_lpm_ipv6_params lpm_params;
+
+	table = rte_table_lpm_ipv6_ops.f_create(NULL, 0, entry_size);
+	if (table != NULL)
+		return -1;
+
+	lpm_params.n_rules = 0;
+
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -2;
+
+	lpm_params.n_rules = 1 << 24;
+	lpm_params.number_tbl8s = 0;
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -2;
+
+	lpm_params.number_tbl8s = 1 << 21;
+	lpm_params.entry_unique_size = 0;
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -2;
+
+	lpm_params.entry_unique_size = entry_size + 1;
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table != NULL)
+		return -2;
+
+	lpm_params.entry_unique_size = entry_size;
+	lpm_params.offset = 32;
+
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table == NULL)
+		return -3;
+
+	/* Free */
+	status = rte_table_lpm_ipv6_ops.f_free(table);
+	if (status < 0)
+		return -4;
+
+	status = rte_table_lpm_ipv6_ops.f_free(NULL);
+	if (status == 0)
+		return -5;
+
+	/* Add */
+	struct rte_table_lpm_ipv6_key lpm_key;
+
+	lpm_key.ip[0] = 0xad;
+	lpm_key.ip[1] = 0xad;
+	lpm_key.ip[2] = 0xad;
+	lpm_key.ip[3] = 0xad;
+
+	table = rte_table_lpm_ipv6_ops.f_create(&lpm_params, 0, entry_size);
+	if (table == NULL)
+		return -6;
+
+	status = rte_table_lpm_ipv6_ops.f_add(NULL, &lpm_key, &entry,
+		&key_found, &entry_ptr);
+	if (status == 0)
+		return -7;
+
+	status = rte_table_lpm_ipv6_ops.f_add(table, NULL, &entry, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -8;
+
+	status = rte_table_lpm_ipv6_ops.f_add(table, &lpm_key, NULL, &key_found,
+		&entry_ptr);
+	if (status == 0)
+		return -9;
+
+	lpm_key.depth = 0;
+	status = rte_table_lpm_ipv6_ops.f_add(table, &lpm_key, &entry,
+		&key_found, &entry_ptr);
+	if (status == 0)
+		return -10;
+
+	lpm_key.depth = 129;
+	status = rte_table_lpm_ipv6_ops.f_add(table, &lpm_key, &entry,
+		&key_found, &entry_ptr);
+	if (status == 0)
+		return -11;
+
+	lpm_key.depth = 16;
+	status = rte_table_lpm_ipv6_ops.f_add(table, &lpm_key, &entry,
+		&key_found, &entry_ptr);
+	if (status != 0)
+		return -12;
+
+	/* Delete */
+	status = rte_table_lpm_ipv6_ops.f_delete(NULL, &lpm_key, &key_found,
+		NULL);
+	if (status == 0)
+		return -13;
+
+	status = rte_table_lpm_ipv6_ops.f_delete(table, NULL, &key_found, NULL);
+	if (status == 0)
+		return -14;
+
+	lpm_key.depth = 0;
+	status = rte_table_lpm_ipv6_ops.f_delete(table, &lpm_key, &key_found,
+		NULL);
+	if (status == 0)
+		return -15;
+
+	lpm_key.depth = 129;
+	status = rte_table_lpm_ipv6_ops.f_delete(table, &lpm_key, &key_found,
+		NULL);
+	if (status == 0)
+		return -16;
+
+	lpm_key.depth = 16;
+	status = rte_table_lpm_ipv6_ops.f_delete(table, &lpm_key, &key_found,
+		NULL);
+	if (status != 0)
+		return -17;
+
+	status = rte_table_lpm_ipv6_ops.f_delete(table, &lpm_key, &key_found,
+		NULL);
+	if (status != 0)
+		return -18;
+
+	/* Traffic flow */
+	entry = 'A';
+	status = rte_table_lpm_ipv6_ops.f_add(table, &lpm_key, &entry,
+		&key_found, &entry_ptr);
+	if (status < 0)
+		return -19;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0) {
+			expected_mask |= (uint64_t)1 << i;
+			PREPARE_PACKET(mbufs[i], 0xadadadad);
+		} else
+			PREPARE_PACKET(mbufs[i], 0xadadadab);
+
+	rte_table_lpm_ipv6_ops.f_lookup(table, mbufs, -1,
+		&result_mask, (void **)entries);
+	if (result_mask != expected_mask)
+		return -20;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	status = rte_table_lpm_ipv6_ops.f_free(table);
+
+	return 0;
+}
+
+static int
+test_table_hash_lru_generic(struct rte_table_ops *ops)
+{
+	int status, i;
+	uint64_t expected_mask = 0, result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+	char entry;
+	void *entry_ptr;
+	int key_found;
+
+	/* Create */
+	struct rte_table_hash_key8_lru_params hash_params;
+
+	hash_params.n_entries = 0;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -1;
+
+	hash_params.n_entries = 1 << 10;
+	hash_params.signature_offset = 1;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -2;
+
+	hash_params.signature_offset = 0;
+	hash_params.key_offset = 1;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -3;
+
+	hash_params.key_offset = 32;
+	hash_params.f_hash = NULL;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -4;
+
+	hash_params.f_hash = pipeline_test_hash;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table == NULL)
+		return -5;
+
+	/* Free */
+	status = ops->f_free(table);
+	if (status < 0)
+		return -6;
+
+	status = ops->f_free(NULL);
+	if (status == 0)
+		return -7;
+
+	/* Add */
+	uint8_t key[32];
+	uint32_t *k32 = (uint32_t *) &key;
+
+	memset(key, 0, 32);
+	k32[0] = rte_be_to_cpu_32(0xadadadad);
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table == NULL)
+		return -8;
+
+	entry = 'A';
+	status = ops->f_add(table, &key, &entry, &key_found, &entry_ptr);
+	if (status != 0)
+		return -9;
+
+	/* Delete */
+	status = ops->f_delete(table, &key, &key_found, NULL);
+	if (status != 0)
+		return -10;
+
+	status = ops->f_delete(table, &key, &key_found, NULL);
+	if (status != 0)
+		return -11;
+
+	/* Traffic flow */
+	entry = 'A';
+	status = ops->f_add(table, &key, &entry, &key_found, &entry_ptr);
+	if (status < 0)
+		return -12;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0) {
+			expected_mask |= (uint64_t)1 << i;
+			PREPARE_PACKET(mbufs[i], 0xadadadad);
+		} else
+			PREPARE_PACKET(mbufs[i], 0xadadadab);
+
+	ops->f_lookup(table, mbufs, -1, &result_mask, (void **)entries);
+	if (result_mask != expected_mask)
+		return -13;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	status = ops->f_free(table);
+	if (status < 0)
+		return -14;
+
+	return 0;
+}
+
+static int
+test_table_hash_ext_generic(struct rte_table_ops *ops)
+{
+	int status, i;
+	uint64_t expected_mask = 0, result_mask;
+	struct rte_mbuf *mbufs[RTE_PORT_IN_BURST_SIZE_MAX];
+	void *table;
+	char *entries[RTE_PORT_IN_BURST_SIZE_MAX];
+	char entry;
+	int key_found;
+	void *entry_ptr;
+
+	/* Create */
+	struct rte_table_hash_key8_ext_params hash_params;
+
+	hash_params.n_entries = 0;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -1;
+
+	hash_params.n_entries = 1 << 10;
+	hash_params.n_entries_ext = 0;
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -2;
+
+	hash_params.n_entries_ext = 1 << 4;
+	hash_params.signature_offset = 1;
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -3;
+
+	hash_params.signature_offset = 0;
+	hash_params.key_offset = 1;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -4;
+
+	hash_params.key_offset = 32;
+	hash_params.f_hash = NULL;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table != NULL)
+		return -5;
+
+	hash_params.f_hash = pipeline_test_hash;
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table == NULL)
+		return -6;
+
+	/* Free */
+	status = ops->f_free(table);
+	if (status < 0)
+		return -7;
+
+	status = ops->f_free(NULL);
+	if (status == 0)
+		return -8;
+
+	/* Add */
+	uint8_t key[32];
+	uint32_t *k32 = (uint32_t *) &key;
+
+	memset(key, 0, 32);
+	k32[0] = rte_be_to_cpu_32(0xadadadad);
+
+	table = ops->f_create(&hash_params, 0, 1);
+	if (table == NULL)
+		return -9;
+
+	entry = 'A';
+	status = ops->f_add(table, &key, &entry, &key_found, &entry_ptr);
+	if (status != 0)
+		return -10;
+
+	/* Delete */
+	status = ops->f_delete(table, &key, &key_found, NULL);
+	if (status != 0)
+		return -11;
+
+	status = ops->f_delete(table, &key, &key_found, NULL);
+	if (status != 0)
+		return -12;
+
+	/* Traffic flow */
+	entry = 'A';
+	status = ops->f_add(table, &key, &entry, &key_found, &entry_ptr);
+	if (status < 0)
+		return -13;
+
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		if (i % 2 == 0) {
+			expected_mask |= (uint64_t)1 << i;
+			PREPARE_PACKET(mbufs[i], 0xadadadad);
+		} else
+			PREPARE_PACKET(mbufs[i], 0xadadadab);
+
+	ops->f_lookup(table, mbufs, -1, &result_mask, (void **)entries);
+	if (result_mask != expected_mask)
+		return -14;
+
+	/* Free resources */
+	for (i = 0; i < RTE_PORT_IN_BURST_SIZE_MAX; i++)
+		rte_pktmbuf_free(mbufs[i]);
+
+	status = ops->f_free(table);
+	if (status < 0)
+		return -15;
+
+	return 0;
+}
+
+int
+test_table_hash_lru(void)
+{
+	int status;
+
+	status = test_table_hash_lru_generic(&rte_table_hash_key8_lru_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_lru_generic(
+		&rte_table_hash_key8_lru_dosig_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_lru_generic(&rte_table_hash_key16_lru_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_lru_generic(&rte_table_hash_key32_lru_ops);
+	if (status < 0)
+		return status;
+
+	status = test_lru_update();
+	if (status < 0)
+		return status;
+
+	return 0;
+}
+
+int
+test_table_hash_ext(void)
+{
+	int status;
+
+	status = test_table_hash_ext_generic(&rte_table_hash_key8_ext_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_ext_generic(
+		&rte_table_hash_key8_ext_dosig_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_ext_generic(&rte_table_hash_key16_ext_ops);
+	if (status < 0)
+		return status;
+
+	status = test_table_hash_ext_generic(&rte_table_hash_key32_ext_ops);
+	if (status < 0)
+		return status;
+
+	return 0;
+}
+
+#endif
diff --git a/app/test/test_table_tables.h b/app/test/test_table_tables.h
new file mode 100644
index 0000000..b368623
--- /dev/null
+++ b/app/test/test_table_tables.h
@@ -0,0 +1,50 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Test prototypes */
+int test_table_lpm(void);
+int test_table_lpm_ipv6(void);
+int test_table_array(void);
+#ifdef RTE_LIBRTE_ACL
+int test_table_acl(void);
+#endif
+int test_table_hash_unoptimized(void);
+int test_table_hash_lru(void);
+int test_table_hash_ext(void);
+int test_table_stub(void);
+
+/* Extern variables */
+typedef int (*table_test)(void);
+
+extern table_test table_tests[];
+extern unsigned n_table_tests;
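
For orientation, a minimal driver walking the table_test array declared above could look as follows (a sketch only; the actual definitions of table_tests[] and n_table_tests live in app/test/test_table.c, and the array contents shown here are illustrative):

  /* Sketch of a driver walking table_tests[]; the array contents are
   * illustrative, not the actual registration list from test_table.c. */
  #include <stdio.h>
  #include "test_table_tables.h"

  table_test table_tests[] = {
          test_table_lpm,
          test_table_lpm_ipv6,
          test_table_stub,
  };
  unsigned n_table_tests = sizeof(table_tests) / sizeof(table_tests[0]);

  int
  run_table_tests(void)
  {
          unsigned i;

          for (i = 0; i < n_table_tests; i++) {
                  int status = table_tests[i]();

                  if (status < 0) {
                          printf("table test %u failed (%d)\n", i, status);
                          return status;
                  }
          }
          return 0;
  }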
diff --git a/lib/librte_eal/common/include/rte_hexdump.h b/lib/librte_eal/common/include/rte_hexdump.h
index db08d30..b4fdf7f 100644
--- a/lib/librte_eal/common/include/rte_hexdump.h
+++ b/lib/librte_eal/common/include/rte_hexdump.h
@@ -39,6 +39,8 @@
  * Simple API to dump out memory in a special hex format.
  */
 
+#include <stdio.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
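
The new include is needed because the hexdump API writes to a caller-supplied stream; a usage sketch (assuming the FILE *-based prototype, void rte_hexdump(FILE *f, const char *title, const void *buf, unsigned int len), which is what pulls stdio.h into this header):

  /* Illustrative only: dump a 32-byte lookup key to stdout. */
  #include <stdint.h>
  #include <stdio.h>
  #include <rte_hexdump.h>

  static void
  dump_key(const uint8_t key[32])
  {
          rte_hexdump(stdout, "lookup key", key, 32);
  }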
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 00/23] Packet Framework
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (22 preceding siblings ...)
  2014-06-04 18:08 ` [dpdk-dev] [v2 23/23] Packet Framework unit tests Cristian Dumitrescu
@ 2014-06-05 11:01 ` De Lara Guarch, Pablo
  2014-06-05 11:43 ` Cao, Waterman
  2014-06-05 14:40 ` Ivan Boule
  25 siblings, 0 replies; 36+ messages in thread
From: De Lara Guarch, Pablo @ 2014-06-05 11:01 UTC (permalink / raw)
  To: Dumitrescu, Cristian, dev

Acked-by: Pablo de Lara Guarch <pablo.de.lara.guarch@intel.com>

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Cristian Dumitrescu
> Sent: Wednesday, June 04, 2014 7:08 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [v2 00/23] Packet Framework
> 
> [full cover letter and diffstat snipped]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 00/23] Packet Framework
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (23 preceding siblings ...)
  2014-06-05 11:01 ` [dpdk-dev] [v2 00/23] Packet Framework De Lara Guarch, Pablo
@ 2014-06-05 11:43 ` Cao, Waterman
  2014-06-05 14:40 ` Ivan Boule
  25 siblings, 0 replies; 36+ messages in thread
From: Cao, Waterman @ 2014-06-05 11:43 UTC (permalink / raw)
  To: Dumitrescu, Cristian, dev

Tested-by: Waterman Cao <waterman.cao@intel.com>

In total, this patch series is composed of 24 emails including the cover letter, and has been tested by Intel.
We verified the Packet Framework patches with the ip_pipeline example and the unit tests; all cases passed.
Please see the test results below:
  test_flow_management   Passed
  test_frame_sizes       Passed
  test_incremental_ip    Passed
  test_route_management  Passed   
  test_hash_tables       Passed
  test_lpm_table         Passed
  test_none_table        Passed
Test environment: Fedora 20, Linux Kernel 3.13.6-200, GCC 4.8.2, Intel Xeon processor E5-2680 v2, with Intel Niantic 82599.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 00/23] Packet Framework
  2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
                   ` (24 preceding siblings ...)
  2014-06-05 11:43 ` Cao, Waterman
@ 2014-06-05 14:40 ` Ivan Boule
  2014-06-17  1:27   ` Thomas Monjalon
  25 siblings, 1 reply; 36+ messages in thread
From: Ivan Boule @ 2014-06-05 14:40 UTC (permalink / raw)
  To: Cristian Dumitrescu, dev

On 06/04/2014 08:08 PM, Cristian Dumitrescu wrote:
> [full cover letter snipped]

Acked-by: Ivan Boule <ivan.boule@6wind.com>

-- 
Ivan Boule
6WIND Development Engineer

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-04 18:08 ` [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app Cristian Dumitrescu
@ 2014-06-09  9:11   ` Olivier MATZ
  2014-06-09 10:49     ` Dumitrescu, Cristian
  0 siblings, 1 reply; 36+ messages in thread
From: Olivier MATZ @ 2014-06-09  9:11 UTC (permalink / raw)
  To: Cristian Dumitrescu, dev

Hi Cristian,

On 06/04/2014 08:08 PM, Cristian Dumitrescu wrote:
> This Packet Framework sample application illustrates the capabilities of the Intel DPDK Packet Framework toolbox.
>
> It creates different functional blocks used by a typical IPv4 framework like: flow classification, firewall, routing, etc.
>
> CPU cores are connected together through standard interfaces built on SW rings, with each CPU core running a separate pipeline instance.
>
> Please refer to Intel DPDK Sample App Guide for full description.
>
> Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

Would it be possible to replace the ctrlmbuf by something else (a
pktmbuf for instance)?

As you know this would conflict if we want to remove the ctrlmbuf from
the rte_mbuf structure.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-09  9:11   ` Olivier MATZ
@ 2014-06-09 10:49     ` Dumitrescu, Cristian
  2014-06-09 12:13       ` Olivier MATZ
  0 siblings, 1 reply; 36+ messages in thread
From: Dumitrescu, Cristian @ 2014-06-09 10:49 UTC (permalink / raw)
  To: Olivier MATZ, dev

Hi Olivier,

We could remove the ctrlmbuf from this app and replace it with something else, but I am afraid we do not have that something else defined and agreed yet. And I would like to avoid doing the same work twice: change this app now to replace the ctrlmbuf with something else, and then replace that something else with whatever we decide to use for message passing as part of the 1.8 mbuf refresh discussion.

We need a message type defined for message passing between cores, and pktmbuf is definitely not the right approach. I can also invent something new, but it is unlikely people will accept it now without a debate, so it will only make this problem worse. Not to mention that we do not even have consensus to remove ctrlmbuf :(. 

My proposal (as also discussed with Ivan on a different thread) is to take the mbuf refresh discussion during the 1.8 timeframe, which should include the decision on what to use for message passing. I can commit now to send a patch for this app at that time to do these changes; would this work?

Thanks,
Cristian

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-09 10:49     ` Dumitrescu, Cristian
@ 2014-06-09 12:13       ` Olivier MATZ
  2014-06-09 13:25         ` Dumitrescu, Cristian
  0 siblings, 1 reply; 36+ messages in thread
From: Olivier MATZ @ 2014-06-09 12:13 UTC (permalink / raw)
  To: Cristian Dumitrescu; +Cc: dev

Hi Cristian,

> We need a message type defined for message passing between cores, and
> pktmbuf is definitely not the right approach.

Could you please explain why a pktmbuf is not the right approach?

As proposed in http://dpdk.org/ml/archives/dev/2014-May/002759.html
I think the control mbuf could be replaced by a packet mbuf or an
application private structure.


Regards,
Olivier

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-09 12:13       ` Olivier MATZ
@ 2014-06-09 13:25         ` Dumitrescu, Cristian
  2014-06-09 15:51           ` Olivier MATZ
  0 siblings, 1 reply; 36+ messages in thread
From: Dumitrescu, Cristian @ 2014-06-09 13:25 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev

Hi Olivier,

A few notes on using pktmbuf here:
1. As the name implies, pktmbuf should be used for packets and ctrlmbuf should be used for control messages :). IMHO using pktmbuf for control messages is a confusing workaround.
2. Pktmbuf has a lot of overhead that is not needed in order to send short messages between cores. Pktmbuf has a lot of pointers and other fields that do not make sense for messages. I don't think we want people to say DPDK is difficult to use because e.g. sending 2 bytes from core A to core B requires initializing a bunch of pointers and other fields that do not make sense.
3. Once we start using pktmbuf to send messages, it is likely that other people will follow this example, and they might do it incorrectly. I don't think we want to see emails on this list from people asking e.g.:
	i) Why does my app segfault, when all I want to do is send 2 bytes from core A to core B?
	ii) Why does my app segfault when core A writes a message to a NIC TX queue? :)

Using an app-dependent structure requires duplicating the work to create/free the pool of such structures, and the alloc/free mechanism. And then some people will ask why we are not using ctrlmbuf, as long as ctrlmbuf exists in DPDK.

I think that, as long as we have ctrlmbuf and pktmbuf in DPDK, we should follow the existing model. We should not look for workarounds that we know we plan to change anyway; we should look for the right solution. We both agree we need to refresh pktmbuf and ctrlmbuf, but my point is that we should not make changes as long as we don't know what the agreed solution will look like.

Thanks,
Cristian

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app
  2014-06-09 13:25         ` Dumitrescu, Cristian
@ 2014-06-09 15:51           ` Olivier MATZ
  0 siblings, 0 replies; 36+ messages in thread
From: Olivier MATZ @ 2014-06-09 15:51 UTC (permalink / raw)
  To: Cristian Dumitrescu; +Cc: dev

Cristian,

Please see some comments below.

On 06/09/2014 03:25 PM, Dumitrescu, Cristian wrote:
> 1. As the name implies, pktmbuf should be used for packets and ctrlmbuf
> should be used for control messages :). IMHO using pktmbuf for control
> messages is a confusing workaround.

If ctrlmbuf is removed, the name pktmbuf would change to mbuf.
But anyway, to me it's not confusing at all to store data in a packet,
even if it's data going from one core to another.

> 2. Pktmbuf has a lot of overhead that is not needed in order to send
> short messages between cores. Pktmbuf has a lot of pointers and other
> fields that do not make sense for messages. I don't think we want people
> to say DPDK is difficult to use because e.g. sending 2 bytes from core A
> to core B requires initializing a bunch of pointers and other fields
> that do not make sense.

All the fields that should be initialized in a packet mbuf are
reset in rte_pktmbuf_reset(), so the user won't have anything to do.
But using pktmbuf is not the only solution; you can use a private
application structure without duplicating code (see below).
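
To illustrate the pktmbuf side, a short message can be carried like this (a sketch only; mp is whatever pktmbuf pool the application has already created):

   #include <stdint.h>
   #include <string.h>
   #include <rte_mbuf.h>

   /* Sketch: carry a short inter-core message in a pktmbuf;
    * rte_pktmbuf_alloc() resets all mbuf fields for the user. */
   static struct rte_mbuf *
   msg_to_pktmbuf(struct rte_mempool *mp, const void *data, uint16_t len)
   {
           struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
           char *p;

           if (m == NULL)
                   return NULL;
           p = rte_pktmbuf_append(m, len);
           if (p == NULL) { /* not enough tailroom */
                   rte_pktmbuf_free(m);
                   return NULL;
           }
           memcpy(p, data, len);
           return m;
   }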

> 3. Once we start using pktmbuf to send messages, it is likely that other
> people will follow this example, and they might do it incorrectly. I
> don't think we want to see emails on this list from people asking e.g:
>
> i) Why does my app segfault, when all I want to do is send 2 bytes from
> core A to core B?
>
> ii) Why does my app segfault when core A writes a message to a NIC TX
> queue?

Why would the application segfault? Indeed, if you misuse any function,
it could segfault, but is that a reason for not implementing the feature?

> Using an app-dependent structure requires duplicating the work to
> create/free the pool of such structures, and the alloc/free mechanism. And
> then some people will ask why we are not using ctrlmbuf, as long as
> ctrlmbuf exists in DPDK.

In this case, I would say that rte_mempool functions are enough to
allocate/free. If the ctrlmbuf structure is composed of a data array
and a length field, you only need:

   rte_mempool_get(mp, &ctrlmbuf);
   memcpy(ctrlmbuf->buf, my_data, my_data_len);
   ctrlmbuf->len = my_data_len;
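
Fleshed out, that amounts to something like the sketch below (struct app_msg, msg_alloc() and the 60-byte payload are illustrative choices, not an existing DPDK API):

   #include <stdint.h>
   #include <string.h>
   #include <rte_mempool.h>

   /* Application-private control message; the layout is entirely up
    * to the application. */
   struct app_msg {
           uint32_t len;
           uint8_t buf[60];
   };

   static struct app_msg *
   msg_alloc(struct rte_mempool *msg_pool, const void *data,
           uint32_t data_len)
   {
           struct app_msg *msg;

           if (data_len > sizeof(msg->buf))
                   return NULL;
           if (rte_mempool_get(msg_pool, (void **)&msg) != 0)
                   return NULL; /* pool exhausted */
           memcpy(msg->buf, data, data_len);
           msg->len = data_len;
           /* enqueue msg on a ring; the consumer calls
            * rte_mempool_put() when done */
           return msg;
   }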

> I think that, as long as we have ctrlmbuf and pktmbuf in DPDK, we should
> follow the existing model. We should not look for workarounds that we
> know we plan to change anyway; we should look for the right solution. We
> both agree we need to refresh pktmbuf and ctrlmbuf, but my point is that
> we should not make changes as long as we don't know what the agreed
> solution will look like.

I agree that we should debate what the right solution is; that's
precisely what I'm doing. To decide if ctrlmbuf should be kept
or changed, we should:
- understand its use-case, if any, and see what ctrlmbuf features
   are required
- understand why it should be included in rte_mbuf or not: in my opinion
   there is no reason to do it, and this has a cost (e.g. 1 byte lost
   in the mbuf, mbuf fields badly organized)


Regards,
Olivier

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 00/23] Packet Framework
  2014-06-05 14:40 ` Ivan Boule
@ 2014-06-17  1:27   ` Thomas Monjalon
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Monjalon @ 2014-06-17  1:27 UTC (permalink / raw)
  To: Cristian Dumitrescu; +Cc: dev

2014-06-04 19:08, Cristian Dumitrescu:
> > Intel DPDK Packet Framework provides a standard methodology (logically
> > similar to OpenFlow) for rapid development of complex packet processing
> > pipelines out of ports, tables and actions.
> >
> > A pipeline is constructed by connecting its input ports to its output
> > ports through a chain of lookup tables. As result of lookup operation
> > into the current table, one of the table entries (or the default table
> > entry, in case of lookup miss) is identified to provide the actions to
> > be executed on the current packet and the associated action meta-data.
> > The behavior of user actions is defined through the configurable table
> > action handler, while the reserved actions define the next hop for the
> > current packet (either another table, an output port or packet drop)
> > and are handled transparently by the framework.
> >
> > Three new Intel DPDK libraries are introduced for Packet Framework:
> > librte_port, librte_table, librte_pipeline.
> > Please check the Intel DPDK Programmer's Guide for full description
> > of the Packet Framework design.
> >
> > Two sample applications are provided for Packet Framework:
> > app/test-pipeline and examples/ip_pipeline.
> > Please check the Intel Sample Apps Guide for a detailed description
> > of how to use these sample apps.
> 
> Acked-by: Ivan Boule <ivan.boule@6wind.com>

It was conflicting with the vhost examples because of a new logtype:
	http://dpdk.org/browse/dpdk/commit/?id=7b79b2718f0d028cc0

I've ported fragmentation and reassembly ports to the new ip_frag library
instead of the duplicated code from the old example.

I've removed CONFIG_RTE_TEST_PIPELINE option. CONFIG_RTE_LIBRTE_PIPELINE
should be sufficient.
By the way, more build options conditioning could be needed in order to
disable some features (e.g. disabling LPM lib should silently skip LPM port).
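
One way to express that conditioning, sketched here only as an assumption about how it could look, is the preprocessor guard pattern already used for the ACL table (RTE_LIBRTE_ACL in the unit tests):

	#include <rte_table.h>
	#ifdef RTE_LIBRTE_LPM
	#include <rte_table_lpm.h>
	#endif

	/* Hypothetical helper: resolve the LPM table ops only when the
	 * LPM library is compiled in; callers treat NULL as "feature
	 * absent" and skip the LPM port. */
	struct rte_table_ops *
	app_lpm_table_ops(void)
	{
	#ifdef RTE_LIBRTE_LPM
		return &rte_table_lpm_ops;
	#else
		return NULL;
	#endif
	}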

Commit splitting has been reworked for atomicity, especially for makefiles
and doxygen files.

Packet Framework is a big piece of code which is now applied to the master branch
and should be ready for version 1.7.0.

Thanks a lot
-- 
Thomas

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files
  2014-06-04 18:08 ` [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files Cristian Dumitrescu
@ 2014-10-16 16:46   ` Thomas Monjalon
  2014-10-17 18:16     ` Dumitrescu, Cristian
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2014-10-16 16:46 UTC (permalink / raw)
  To: Cristian Dumitrescu; +Cc: dev

Hi Cristian,

2014-06-04 19:08, Cristian Dumitrescu:
> This library provides a tool to interpret config files that have standard
> structure.
> 
> It is used by the Packet Framework examples/ip_pipeline sample application.
> 
> It originates from examples/qos_sched sample application and now it makes
> this code available as a library for other sample applications to use.
> The code duplication with qos_sched sample app to be addressed later.

4 months ago, you said that this duplication would be addressed later.
Neither you nor anyone at Intel has submitted a patch to clean that up.
I just want to be sure that "later" doesn't mean "never", because
I'm accepting another "later" for cleaning up the old filtering API.

Maybe you just forgot, so please prove to me that I'm right to accept
"later" clean-ups in general.

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files
  2014-10-16 16:46   ` Thomas Monjalon
@ 2014-10-17 18:16     ` Dumitrescu, Cristian
  2014-10-17 18:50       ` Thomas Monjalon
  0 siblings, 1 reply; 36+ messages in thread
From: Dumitrescu, Cristian @ 2014-10-17 18:16 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Hi Thomas,

Yes, you're right, we need to close on this pending item. Thanks for bringing it up.

I am currently working on a patch series; once I send it out I will come back and look into qos_sched. Is this OK with you?

Regards,
Cristian

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files
  2014-10-17 18:16     ` Dumitrescu, Cristian
@ 2014-10-17 18:50       ` Thomas Monjalon
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Monjalon @ 2014-10-17 18:50 UTC (permalink / raw)
  To: Dumitrescu, Cristian; +Cc: dev

2014-10-17 18:16, Dumitrescu, Cristian:
> Hi Thomas,
> 
> Yes, you're right, we need to close on this pending item.
> Thanks for bringing it up.
> 
> I am currently working on a patch series; once I send it out
> I will come back and look into qos_sched. Is this OK with you?

Yes, thank you.

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2014-10-17 18:42 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-04 18:08 [dpdk-dev] [v2 00/23] Packet Framework Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 01/23] librte_lpm: rule_is_present Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 02/23] mbuf: meta-data Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 03/23] Packet Framework librte_port: Port API Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 04/23] Packet Framework librte_port: ethdev ports Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 05/23] Packet Framework librte_port: ring ports Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 06/23] Packet Framework librte_port: IPv4 frag port Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 07/23] Packet Framework librte_port: IPv4 reassembly Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 08/23] Packet Framework librte_port: hierarchical scheduler port Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 09/23] Packet Framework librte_port: Source/Sink ports Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 10/23] Packet Framework librte_port: Build infrastructure Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 11/23] Packet Framework librte_table: Table API Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 12/23] Packet Framework librte_table: LPM IPv4 table Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 13/23] Packet Framework librte_table: LPM IPv6 table Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 14/23] Packet Framework librte_table: ACL table Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 15/23] Packet Framework librte_table: Hash tables Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 16/23] Packet Framework librte_table: array table Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 17/23] Packet Framework librte_table: Stub table Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 18/23] Packet Framework librte_table: Build infrastructure Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 19/23] Packet Framework librte_pipeline: Pipeline Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 20/23] librte_cfgfile: interpret config files Cristian Dumitrescu
2014-10-16 16:46   ` Thomas Monjalon
2014-10-17 18:16     ` Dumitrescu, Cristian
2014-10-17 18:50       ` Thomas Monjalon
2014-06-04 18:08 ` [dpdk-dev] [v2 21/23] Packet Framework performance application Cristian Dumitrescu
2014-06-04 18:08 ` [dpdk-dev] [v2 22/23] Packet Framework IPv4 pipeline sample app Cristian Dumitrescu
2014-06-09  9:11   ` Olivier MATZ
2014-06-09 10:49     ` Dumitrescu, Cristian
2014-06-09 12:13       ` Olivier MATZ
2014-06-09 13:25         ` Dumitrescu, Cristian
2014-06-09 15:51           ` Olivier MATZ
2014-06-04 18:08 ` [dpdk-dev] [v2 23/23] Packet Framework unit tests Cristian Dumitrescu
2014-06-05 11:01 ` [dpdk-dev] [v2 00/23] Packet Framework De Lara Guarch, Pablo
2014-06-05 11:43 ` Cao, Waterman
2014-06-05 14:40 ` Ivan Boule
2014-06-17  1:27   ` Thomas Monjalon
