From: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2 11/11] ip_pipeline: added new implementation of flow classification pipeline
Date: Thu, 25 Jun 2015 13:15:14 +0200
Message-ID: <1435230914-8174-12-git-send-email-maciejx.t.gajdzica@intel.com>
In-Reply-To: <1435230914-8174-1-git-send-email-maciejx.t.gajdzica@intel.com>

Flow classification pipeline implementation is split into two files.
pipeline_flow_classification.c handles the front-end functions (CLI
command parsing), while pipeline_flow_classification_be.c contains the
implementation of the functions performed by the pipeline (back-end).

Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
---
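Note (illustrative sketch, not part of the patch): the front-end/back-end
split works over the existing pipeline message queues. The hypothetical
helper below is assembled from the request/response types and helpers added
by this patch (pipeline_fc_add_msg_req/rsp, app_msg_alloc(),
app_msg_send_recv(), app_msg_free()); error paths and the front-end flow
bookkeeping are trimmed. It shows the round trip a CLI "flow add" command
performs:

#include "app.h"
#include "pipeline_common_fe.h"
#include "pipeline_flow_classification.h"

static int
fc_flow_add_sketch(struct app_params *app, uint32_t pipeline_id,
	struct pipeline_fc_key *key, uint32_t port_id)
{
	struct pipeline_fc_add_msg_req *req;
	struct pipeline_fc_add_msg_rsp *rsp;
	uint32_t signature;
	int status;

	/* Front-end (CLI) thread: build a custom request */
	req = app_msg_alloc(app);
	if (req == NULL)
		return -1;

	req->type = PIPELINE_MSG_REQ_CUSTOM;
	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_ADD;
	/* key packing + signature, as done by the (static) helper in
	 * pipeline_flow_classification.c */
	app_pipeline_fc_key_convert(key, req->key, &signature);
	req->port_id = port_id;

	/* The back-end handler (pipeline_flow_classification_be.c) services
	 * the request on the pipeline core and fills in status, key_found
	 * and entry_ptr */
	rsp = app_msg_send_recv(app, pipeline_id, req, MSG_TIMEOUT_DEFAULT);
	if (rsp == NULL)
		return -1;

	status = rsp->status ? -1 : 0;
	app_msg_free(app, rsp);
	return status;
}
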
 examples/ip_pipeline/Makefile                      |    2 +
 examples/ip_pipeline/config/fc_ipv4_5tuple.cfg     |   23 +
 examples/ip_pipeline/config/fc_ipv4_5tuple.sh      |    9 +
 examples/ip_pipeline/config/fc_ipv6_5tuple.cfg     |   23 +
 examples/ip_pipeline/config/fc_ipv6_5tuple.sh      |    8 +
 examples/ip_pipeline/config/fc_qinq.cfg            |   23 +
 examples/ip_pipeline/config/fc_qinq.sh             |    8 +
 examples/ip_pipeline/init.c                        |    2 +
 .../pipeline/pipeline_flow_classification.c        | 2063 +++++++++++++++++---
 .../pipeline/pipeline_flow_classification.h        |  106 +
 .../pipeline/pipeline_flow_classification_be.c     |  569 ++++++
 .../pipeline/pipeline_flow_classification_be.h     |  140 ++
 12 files changed, 2755 insertions(+), 221 deletions(-)
 create mode 100644 examples/ip_pipeline/config/fc_ipv4_5tuple.cfg
 create mode 100644 examples/ip_pipeline/config/fc_ipv4_5tuple.sh
 create mode 100644 examples/ip_pipeline/config/fc_ipv6_5tuple.cfg
 create mode 100644 examples/ip_pipeline/config/fc_ipv6_5tuple.sh
 create mode 100644 examples/ip_pipeline/config/fc_qinq.cfg
 create mode 100644 examples/ip_pipeline/config/fc_qinq.sh
 create mode 100644 examples/ip_pipeline/pipeline/pipeline_flow_classification.h
 create mode 100644 examples/ip_pipeline/pipeline/pipeline_flow_classification_be.c
 create mode 100644 examples/ip_pipeline/pipeline/pipeline_flow_classification_be.h

diff --git a/examples/ip_pipeline/Makefile b/examples/ip_pipeline/Makefile
index a2881a6..f3ff1ec 100644
--- a/examples/ip_pipeline/Makefile
+++ b/examples/ip_pipeline/Makefile
@@ -64,6 +64,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_passthrough_be.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_passthrough.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_firewall_be.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_firewall.c
+SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_flow_classification_be.c
+SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_flow_classification.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_routing_be.c
 SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) += pipeline_routing.c
 
diff --git a/examples/ip_pipeline/config/fc_ipv4_5tuple.cfg b/examples/ip_pipeline/config/fc_ipv4_5tuple.cfg
new file mode 100644
index 0000000..246df5f
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_ipv4_5tuple.cfg
@@ -0,0 +1,23 @@
+[PIPELINE0]
+type = MASTER
+core = 0
+
+[PIPELINE1]
+type = PASS-THROUGH
+core = s0c1
+pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
+pktq_out = SWQ0 SWQ1 SWQ2 SWQ3
+key_type = ipv4_5tuple
+key_offset_rd = 150; key_offset_rd = headroom (128) + ethernet (14) + ttl offset (8)
+key_offset_wr = 64
+hash_offset = 80
+
+[PIPELINE2]
+type = FLOW_CLASSIFICATION
+core = s0c2
+pktq_in = SWQ0 SWQ1 SWQ2 SWQ3
+pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
+n_flows = 16777216
+key_offset = 64
+key_size = 16
+hash_offset = 80
diff --git a/examples/ip_pipeline/config/fc_ipv4_5tuple.sh b/examples/ip_pipeline/config/fc_ipv4_5tuple.sh
new file mode 100644
index 0000000..29c77f9
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_ipv4_5tuple.sh
@@ -0,0 +1,9 @@
+#run config/fc_ipv4_5tuple.sh
+
+p 1 ping
+p 2 ping
+
+p 2 flow add default 3
+p 2 flow add ipv4_5tuple 1.2.3.4 5.6.7.8 256 257 6 2
+p 2 flow ls
+
diff --git a/examples/ip_pipeline/config/fc_ipv6_5tuple.cfg b/examples/ip_pipeline/config/fc_ipv6_5tuple.cfg
new file mode 100644
index 0000000..4b2b0da
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_ipv6_5tuple.cfg
@@ -0,0 +1,23 @@
+[PIPELINE0]
+type = MASTER
+core = 0
+
+[PIPELINE1]
+type = PASS-THROUGH
+core = s0c1
+pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
+pktq_out = SWQ0 SWQ1 SWQ2 SWQ3
+key_type = ipv6_5tuple; key_size = 64
+key_offset_rd = 146; key_offset_rd = headroom (128) + ethernet (14) + payload length offset (4)
+key_offset_wr = 0
+hash_offset = 64
+
+[PIPELINE2]
+type = FLOW_CLASSIFICATION
+core = s0c2
+pktq_in = SWQ0 SWQ1 SWQ2 SWQ3
+pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
+n_flows = 16777216
+key_offset = 0
+key_size = 64
+hash_offset = 64
diff --git a/examples/ip_pipeline/config/fc_ipv6_5tuple.sh b/examples/ip_pipeline/config/fc_ipv6_5tuple.sh
new file mode 100644
index 0000000..b3724ee
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_ipv6_5tuple.sh
@@ -0,0 +1,8 @@
+#run config/fc_ipv6_5tuple.sh
+
+p 1 ping
+p 2 ping
+
+p 2 flow add default 3
+p 2 flow add ipv6_5tuple 0001:0203:0405:0607:0809:0a0b:0c0d:0e0f 1011:1213:1415:1617:1819:1a1b:1c1d:1e1f 256 257 6 2
+p 2 flow ls
diff --git a/examples/ip_pipeline/config/fc_qinq.cfg b/examples/ip_pipeline/config/fc_qinq.cfg
new file mode 100644
index 0000000..a502d7a
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_qinq.cfg
@@ -0,0 +1,23 @@
+[PIPELINE0]
+type = MASTER
+core = 0
+
+[PIPELINE1]
+type = PASS-THROUGH
+core = s0c1
+pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
+pktq_out = SWQ0 SWQ1 SWQ2 SWQ3
+key_type = qinq
+key_offset_rd = 140; key_offset_rd = headroom (128) + 1st ethertype offset (12)
+key_offset_wr = 64
+hash_offset = 72
+
+[PIPELINE2]
+type = FLOW_CLASSIFICATION
+core = s0c2
+pktq_in = SWQ0 SWQ1 SWQ2 SWQ3
+pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
+n_flows = 16777216
+key_offset = 64
+key_size = 8
+hash_offset = 72
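
[Note, not part of the diff: the key_offset_rd values in the three fc_*.cfg
files above all follow the same rule, sketched here with a hypothetical
helper; the 128-byte headroom and the per-field offsets are the ones quoted
in the config comments.]

#define PKT_HEADROOM	128	/* mbuf headroom assumed by these configs */

/* key_offset_rd = headroom + offset of the key within the Ethernet frame */
static inline uint32_t
fc_key_offset_rd(uint32_t frame_offset)
{
	return PKT_HEADROOM + frame_offset;
}

/*
 * fc_ipv4_5tuple.cfg: 128 + (14 + 8) = 150   Ethernet hdr + IPv4 TTL offset
 * fc_ipv6_5tuple.cfg: 128 + (14 + 4) = 146   Ethernet hdr + payload length offset
 * fc_qinq.cfg:        128 + 12       = 140   offset of the first Ethertype
 */
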
diff --git a/examples/ip_pipeline/config/fc_qinq.sh b/examples/ip_pipeline/config/fc_qinq.sh
new file mode 100644
index 0000000..71dc350
--- /dev/null
+++ b/examples/ip_pipeline/config/fc_qinq.sh
@@ -0,0 +1,8 @@
+#run config/fc_qinq.sh
+
+p 1 ping
+p 2 ping
+
+p 2 flow add default 3
+p 2 flow add qinq 256 257 2
+p 2 flow ls
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index 840bc60..c4edae8 100644
--- a/examples/ip_pipeline/init.c
+++ b/examples/ip_pipeline/init.c
@@ -47,6 +47,7 @@
 #include "pipeline_master.h"
 #include "pipeline_passthrough.h"
 #include "pipeline_firewall.h"
+#include "pipeline_flow_classification.h"
 #include "pipeline_routing.h"
 
 #define APP_NAME_SIZE	32
@@ -1193,6 +1194,7 @@ int app_init(struct app_params *app)
 	app_pipeline_common_cmd_push(app);
 	app_pipeline_type_register(app, &pipeline_master);
 	app_pipeline_type_register(app, &pipeline_passthrough);
+	app_pipeline_type_register(app, &pipeline_flow_classification);
 	app_pipeline_type_register(app, &pipeline_firewall);
 	app_pipeline_type_register(app, &pipeline_routing);
 
diff --git a/examples/ip_pipeline/pipeline/pipeline_flow_classification.c b/examples/ip_pipeline/pipeline/pipeline_flow_classification.c
index cc0cbf1..f578f54 100644
--- a/examples/ip_pipeline/pipeline/pipeline_flow_classification.c
+++ b/examples/ip_pipeline/pipeline/pipeline_flow_classification.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -32,275 +32,1896 @@
  */
 
 #include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
 
+#include <rte_common.h>
+#include <rte_hexdump.h>
 #include <rte_malloc.h>
-#include <rte_log.h>
-#include <rte_ethdev.h>
-#include <rte_ether.h>
-#include <rte_ip.h>
-#include <rte_byteorder.h>
+#include <cmdline_rdline.h>
+#include <cmdline_parse.h>
+#include <cmdline_parse_num.h>
+#include <cmdline_parse_string.h>
+#include <cmdline_parse_ipaddr.h>
+#include <cmdline_parse_etheraddr.h>
 
-#include <rte_port_ring.h>
-#include <rte_table_hash.h>
-#include <rte_pipeline.h>
+#include "app.h"
+#include "pipeline_common_fe.h"
+#include "pipeline_flow_classification.h"
+#include "hash_func.h"
 
-#include "main.h"
+/*
+ * Key conversion
+ */
+
+struct pkt_key_qinq {
+	uint16_t ethertype_svlan;
+	uint16_t svlan;
+	uint16_t ethertype_cvlan;
+	uint16_t cvlan;
+} __attribute__((__packed__));
+
+struct pkt_key_ipv4_5tuple {
+	uint8_t ttl;
+	uint8_t proto;
+	uint16_t checksum;
+	uint32_t ip_src;
+	uint32_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+} __attribute__((__packed__));
+
+struct pkt_key_ipv6_5tuple {
+	uint16_t payload_length;
+	uint8_t proto;
+	uint8_t hop_limit;
+	uint8_t ip_src[16];
+	uint8_t ip_dst[16];
+	uint16_t port_src;
+	uint16_t port_dst;
+} __attribute__((__packed__));
+
+static int
+app_pipeline_fc_key_convert(struct pipeline_fc_key *key_in,
+	uint8_t *key_out,
+	uint32_t *signature)
+{
+	uint8_t buffer[PIPELINE_FC_FLOW_KEY_MAX_SIZE];
+	void *key_buffer = (key_out)? key_out : buffer;
+
+	switch (key_in->type) {
+	case FLOW_KEY_QINQ:
+	{
+		struct pkt_key_qinq *qinq = key_buffer;
 
-struct app_core_fc_message_handle_params {
-	struct rte_ring *ring_req;
-	struct rte_ring *ring_resp;
+		qinq->ethertype_svlan = 0;
+		qinq->svlan = rte_bswap16(key_in->key.qinq.svlan);
+		qinq->ethertype_cvlan = 0;
+		qinq->cvlan = rte_bswap16(key_in->key.qinq.cvlan);
 
-	struct rte_pipeline *p;
-	uint32_t *port_out_id;
-	uint32_t table_id;
+		if (signature)
+			*signature = (uint32_t) hash_default_key8(qinq, 8, 0);
+		return 0;
+	}
+
+	case FLOW_KEY_IPV4_5TUPLE:
+	{
+		struct pkt_key_ipv4_5tuple *ipv4 = key_buffer;
+
+		ipv4->ttl = 0;
+		ipv4->proto = key_in->key.ipv4_5tuple.proto;
+		ipv4->checksum = 0;
+		ipv4->ip_src = rte_bswap32(key_in->key.ipv4_5tuple.ip_src);
+		ipv4->ip_dst = rte_bswap32(key_in->key.ipv4_5tuple.ip_dst);
+		ipv4->port_src = rte_bswap16(key_in->key.ipv4_5tuple.port_src);
+		ipv4->port_dst = rte_bswap16(key_in->key.ipv4_5tuple.port_dst);
+
+		if (signature)
+			*signature = (uint32_t) hash_default_key16(ipv4, 16, 0);
+		return 0;
+	}
+
+	case FLOW_KEY_IPV6_5TUPLE:
+	{
+		struct pkt_key_ipv6_5tuple *ipv6 = key_buffer;
+
+		memset(ipv6, 0, 64);
+		ipv6->payload_length = 0;
+		ipv6->proto = key_in->key.ipv6_5tuple.proto;
+		ipv6->hop_limit = 0;
+		memcpy(&ipv6->ip_src, &key_in->key.ipv6_5tuple.ip_src, 16);
+		memcpy(&ipv6->ip_dst, &key_in->key.ipv6_5tuple.ip_dst, 16);
+		ipv6->port_src = rte_bswap16(key_in->key.ipv6_5tuple.port_src);
+		ipv6->port_dst = rte_bswap16(key_in->key.ipv6_5tuple.port_dst);
+
+		if (signature)
+			*signature = (uint32_t) hash_default_key64(ipv6, 64, 0);
+		return 0;
+	}
+
+	default:
+		return -1;
+	}
+}
+
+/*
+ * Flow classification pipeline
+ */
+
+struct app_pipeline_fc_flow {
+	struct pipeline_fc_key key;
+	uint32_t port_id;
+	uint32_t signature;
+	void *entry_ptr;
+
+	TAILQ_ENTRY(app_pipeline_fc_flow) node;
 };
 
-static void
-app_message_handle(struct app_core_fc_message_handle_params *params);
+#define N_BUCKETS                                65536
+
+struct app_pipeline_fc {
+	/* Parameters */
+	uint32_t n_ports_in;
+	uint32_t n_ports_out;
+
+	/* Flows */
+	TAILQ_HEAD(, app_pipeline_fc_flow) flows[N_BUCKETS];
+	uint32_t n_flows;
+
+	/* Default flow */
+	uint32_t default_flow_present;
+	uint32_t default_flow_port_id;
+	void *default_flow_entry_ptr;
+};
+
+static struct app_pipeline_fc_flow *
+app_pipeline_fc_flow_find(struct app_pipeline_fc *p,
+	struct pipeline_fc_key *key)
+{
+	struct app_pipeline_fc_flow *f;
+	uint32_t signature, bucket_id;
+
+	app_pipeline_fc_key_convert(key, NULL, &signature);
+	bucket_id = signature & (N_BUCKETS - 1);
+
+	TAILQ_FOREACH(f, &p->flows[bucket_id], node)
+		if ((signature == f->signature) &&
+			(memcmp(key, &f->key, sizeof(struct pipeline_fc_key)) == 0))
+			return f;
+
+	return NULL;
+}
+
+static void*
+app_pipeline_fc_init(struct pipeline_params *params,
+	__rte_unused void *arg)
+{
+	struct app_pipeline_fc *p;
+	uint32_t size, i;
+
+	/* Check input arguments */
+	if ((params == NULL) ||
+		(params->n_ports_in == 0) ||
+		(params->n_ports_out == 0))
+		return NULL;
+
+	/* Memory allocation */
+	size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct app_pipeline_fc));
+	p = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+	if (p == NULL)
+		return NULL;
+
+	/* Initialization */
+	p->n_ports_in = params->n_ports_in;
+	p->n_ports_out = params->n_ports_out;
+
+	for (i = 0; i < N_BUCKETS; i++)
+		TAILQ_INIT(&p->flows[i]);
+	p->n_flows = 0;
+
+	return (void *) p;
+}
 
-static int app_flow_classification_table_init(
-	struct rte_pipeline *p,
-	uint32_t *port_out_id,
-	uint32_t table_id)
+static int
+app_pipeline_fc_free(void *pipeline)
 {
-	struct app_flow_key flow_key;
+	struct app_pipeline_fc *p = pipeline;
 	uint32_t i;
 
-	/* Add entries to tables */
-	for (i = 0; i < (1 << 24); i++) {
-		struct rte_pipeline_table_entry entry = {
-			.action = RTE_PIPELINE_ACTION_PORT,
-			{.port_id = port_out_id[i & (app.n_ports - 1)]},
-		};
-		struct rte_pipeline_table_entry *entry_ptr;
-		int key_found, status;
-
-		flow_key.ttl = 0;
-		flow_key.proto = 6; /* TCP */
-		flow_key.header_checksum = 0;
-		flow_key.ip_src = 0;
-		flow_key.ip_dst = rte_bswap32(i);
-		flow_key.port_src = 0;
-		flow_key.port_dst = 0;
-
-		status = rte_pipeline_table_entry_add(p, table_id,
-			(void *) &flow_key, &entry, &key_found, &entry_ptr);
-		if (status < 0)
-			rte_panic("Unable to add entry to table %u (%d)\n",
-				table_id, status);
+	/* Check input arguments */
+	if (p == NULL)
+		return -1;
+
+	/* Free resources */
+	for (i = 0; i < N_BUCKETS; i++)
+		while (!TAILQ_EMPTY(&p->flows[i])) {
+			struct app_pipeline_fc_flow *flow;
+
+			flow = TAILQ_FIRST(&p->flows[i]);
+			TAILQ_REMOVE(&p->flows[i], flow, node);
+			rte_free(flow);
+		}
+
+	rte_free(p);
+	return 0;
+}
+
+static int
+app_pipeline_fc_key_check(struct pipeline_fc_key *key)
+{
+	switch (key->type) {
+		case FLOW_KEY_QINQ:
+		{
+			uint16_t svlan = key->key.qinq.svlan;
+			uint16_t cvlan = key->key.qinq.cvlan;
+
+			if ((svlan & 0xF000) ||
+				(cvlan & 0xF000))
+				return -1;
+
+			return 0;
+		}
+
+		case FLOW_KEY_IPV4_5TUPLE:
+			return 0;
+
+		case FLOW_KEY_IPV6_5TUPLE:
+			return 0;
+
+		default:
+			return -1;
 	}
+}
+
+int
+app_pipeline_fc_add(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key,
+	uint32_t port_id)
+{
+	struct app_pipeline_fc *p;
+	struct app_pipeline_fc_flow *flow;
+
+	struct pipeline_fc_add_msg_req *req;
+	struct pipeline_fc_add_msg_rsp *rsp;
+
+	uint32_t signature;
+	int new_flow;
+
+	/* Check input arguments */
+	if ((app == NULL) ||
+		(key == NULL))
+		return -1;
+
+	p = app_pipeline_data_fe(app, pipeline_id);
+	if (p == NULL)
+		return -1;
+
+	if (port_id >= p->n_ports_in)
+		return -1;
+
+	if (app_pipeline_fc_key_check(key) != 0)
+		return -1;
+
+	/* Find existing flow or allocate new flow */
+	flow = app_pipeline_fc_flow_find(p, key);
+	new_flow = (flow == NULL);
+	if (flow == NULL) {
+		flow = rte_malloc(NULL, sizeof(*flow), RTE_CACHE_LINE_SIZE);
+
+		if (flow == NULL)
+			return -1;
+	}
+
+	/* Allocate and write request */
+	req = app_msg_alloc(app);
+	if (req == NULL)
+		return -1;
+
+	req->type = PIPELINE_MSG_REQ_CUSTOM;
+	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_ADD;
+	app_pipeline_fc_key_convert(key, req->key, &signature);
+	req->port_id = port_id;
+
+	/* Send request and wait for response */
+	rsp = app_msg_send_recv(app, pipeline_id, req, MSG_TIMEOUT_DEFAULT);
+	if (rsp == NULL) {
+		if (new_flow)
+			rte_free(flow);
+		return -1;
+	}
+
+	/* Read response and write flow */
+	if (rsp->status ||
+		(rsp->entry_ptr == NULL) ||
+		((new_flow == 0) && (rsp->key_found == 0)) ||
+		((new_flow == 1) && (rsp->key_found == 1))) {
+		app_msg_free(app, rsp);
+		if (new_flow)
+			rte_free(flow);
+		return -1;
+	}
+
+	memset(&flow->key, 0, sizeof(flow->key));
+	memcpy(&flow->key, key, sizeof(flow->key));
+	flow->port_id = port_id;
+	flow->signature = signature;
+	flow->entry_ptr = rsp->entry_ptr;
+
+	/* Commit rule */
+	if (new_flow) {
+		uint32_t bucket_id = signature & (N_BUCKETS - 1);
+		TAILQ_INSERT_TAIL(&p->flows[bucket_id], flow, node);
+		p->n_flows++;
+	}
+
+	/* Free response */
+	app_msg_free(app, rsp);
 
 	return 0;
 }
 
-void
-app_main_loop_pipeline_flow_classification(void) {
-	struct rte_pipeline_params pipeline_params = {
-		.name = "pipeline",
-		.socket_id = rte_socket_id(),
-	};
-
-	struct rte_pipeline *p;
-	uint32_t port_in_id[APP_MAX_PORTS];
-	uint32_t port_out_id[APP_MAX_PORTS];
-	uint32_t table_id;
-	uint32_t i;
+int
+app_pipeline_fc_add_bulk(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key,
+	uint32_t *port_id,
+	uint32_t n_keys)
+{
+	struct app_pipeline_fc *p;
+	struct pipeline_fc_add_bulk_msg_req *req;
+	struct pipeline_fc_add_bulk_msg_rsp *rsp;
 
-	uint32_t core_id = rte_lcore_id();
-	struct app_core_params *core_params = app_get_core_params(core_id);
-	struct app_core_fc_message_handle_params mh_params;
+	struct app_pipeline_fc_flow **flow;
+	uint32_t *signature;
+	int *new_flow;
+	struct pipeline_fc_add_bulk_flow_req *flow_req;
+	struct pipeline_fc_add_bulk_flow_rsp *flow_rsp;
 
-	if ((core_params == NULL) || (core_params->core_type != APP_CORE_FC))
-		rte_panic("Core %u misconfiguration\n", core_id);
+	uint32_t i;
+	int status;
 
-	RTE_LOG(INFO, USER1, "Core %u is doing flow classification "
-		"(pipeline with hash table, 16-byte key, LRU)\n", core_id);
+	/* Check input arguments */
+	if ((app == NULL) ||
+		(key == NULL) ||
+		(port_id == NULL) ||
+		(n_keys == 0))
+		return -1;
 
-	/* Pipeline configuration */
-	p = rte_pipeline_create(&pipeline_params);
+	p = app_pipeline_data_fe(app, pipeline_id);
 	if (p == NULL)
-		rte_panic("Unable to configure the pipeline\n");
-
-	/* Input port configuration */
-	for (i = 0; i < app.n_ports; i++) {
-		struct rte_port_ring_reader_params port_ring_params = {
-			.ring = app.rings[core_params->swq_in[i]],
-		};
-
-		struct rte_pipeline_port_in_params port_params = {
-			.ops = &rte_port_ring_reader_ops,
-			.arg_create = (void *) &port_ring_params,
-			.f_action = NULL,
-			.arg_ah = NULL,
-			.burst_size = app.bsz_swq_rd,
-		};
-
-		if (rte_pipeline_port_in_create(p, &port_params,
-			&port_in_id[i]))
-			rte_panic("Unable to configure input port for "
-				"ring %d\n", i);
+		return -1;
+
+	for (i = 0; i < n_keys; i++)
+		if (port_id[i] >= p->n_ports_in)
+			return -1;
+
+	for (i = 0; i < n_keys; i++)
+		if (app_pipeline_fc_key_check(&key[i]) != 0)
+			return -1;
+
+	/* Memory allocation */
+	flow = rte_malloc(NULL, n_keys * sizeof(struct app_pipeline_fc_flow *), RTE_CACHE_LINE_SIZE);
+	if (flow == NULL)
+		return -1;
+
+	signature = rte_malloc(NULL, n_keys * sizeof(uint32_t), RTE_CACHE_LINE_SIZE);
+	if (signature == NULL) {
+		rte_free(flow);
+		return -1;
 	}
 
-	/* Output port configuration */
-	for (i = 0; i < app.n_ports; i++) {
-		struct rte_port_ring_writer_params port_ring_params = {
-			.ring = app.rings[core_params->swq_out[i]],
-			.tx_burst_sz = app.bsz_swq_wr,
-		};
-
-		struct rte_pipeline_port_out_params port_params = {
-			.ops = &rte_port_ring_writer_ops,
-			.arg_create = (void *) &port_ring_params,
-			.f_action = NULL,
-			.f_action_bulk = NULL,
-			.arg_ah = NULL,
-		};
-
-		if (rte_pipeline_port_out_create(p, &port_params,
-			&port_out_id[i]))
-			rte_panic("Unable to configure output port for "
-				"ring %d\n", i);
+	new_flow = rte_malloc(NULL, n_keys * sizeof(int), RTE_CACHE_LINE_SIZE);
+	if (new_flow == NULL) {
+		rte_free(signature);
+		rte_free(flow);
+		return -1;
 	}
 
-	/* Table configuration */
-	{
-		struct rte_table_hash_key16_lru_params table_hash_params = {
-			.n_entries = 1 << 24,
-			.signature_offset = __builtin_offsetof(
-				struct app_pkt_metadata, signature),
-			.key_offset = __builtin_offsetof(
-				struct app_pkt_metadata, flow_key),
-			.f_hash = test_hash,
-			.seed = 0,
-		};
-
-		struct rte_pipeline_table_params table_params = {
-			.ops = &rte_table_hash_key16_lru_ops,
-			.arg_create = &table_hash_params,
-			.f_action_hit = NULL,
-			.f_action_miss = NULL,
-			.arg_ah = NULL,
-			.action_data_size = 0,
-		};
-
-		if (rte_pipeline_table_create(p, &table_params, &table_id))
-			rte_panic("Unable to configure the hash table\n");
+	flow_req = rte_malloc(NULL, n_keys * sizeof(struct pipeline_fc_add_bulk_flow_req), RTE_CACHE_LINE_SIZE);
+	if (flow_req == NULL) {
+		rte_free(new_flow);
+		rte_free(signature);
+		rte_free(flow);
+		return -1;
 	}
 
-	/* Interconnecting ports and tables */
-	for (i = 0; i < app.n_ports; i++)
-		if (rte_pipeline_port_in_connect_to_table(p, port_in_id[i],
-			table_id))
-			rte_panic("Unable to connect input port %u to "
-				"table %u\n", port_in_id[i],  table_id);
-
-	/* Enable input ports */
-	for (i = 0; i < app.n_ports; i++)
-		if (rte_pipeline_port_in_enable(p, port_in_id[i]))
-			rte_panic("Unable to enable input port %u\n",
-				port_in_id[i]);
-
-	/* Check pipeline consistency */
-	if (rte_pipeline_check(p) < 0)
-		rte_panic("Pipeline consistency check failed\n");
-
-	/* Message handling */
-	mh_params.ring_req = app_get_ring_req(
-		app_get_first_core_id(APP_CORE_FC));
-	mh_params.ring_resp = app_get_ring_resp(
-		app_get_first_core_id(APP_CORE_FC));
-	mh_params.p = p;
-	mh_params.port_out_id = port_out_id;
-	mh_params.table_id = table_id;
-
-	/* Run-time */
-	for (i = 0; ; i++) {
-		rte_pipeline_run(p);
-
-		if ((i & APP_FLUSH) == 0) {
-			rte_pipeline_flush(p);
-			app_message_handle(&mh_params);
+	flow_rsp = rte_malloc(NULL, n_keys * sizeof(struct pipeline_fc_add_bulk_flow_rsp), RTE_CACHE_LINE_SIZE);
+	if (flow_rsp == NULL) {
+		rte_free(flow_req);
+		rte_free(new_flow);
+		rte_free(signature);
+		rte_free(flow);
+		return -1;
+	}
+
+	/* Find existing flow or allocate new flow */
+	for (i = 0; i < n_keys; i++) {
+		flow[i] = app_pipeline_fc_flow_find(p, &key[i]);
+		new_flow[i] = (flow[i] == NULL);
+		if (flow[i] == NULL) {
+			flow[i] = rte_zmalloc(NULL, sizeof(struct app_pipeline_fc_flow), RTE_CACHE_LINE_SIZE);
+
+			if (flow[i] == NULL) {
+				uint32_t j;
+
+				for (j = 0; j < i; j++)
+					if (new_flow[j])
+						rte_free(flow[j]);
+
+				rte_free(flow_rsp);
+				rte_free(flow_req);
+				rte_free(new_flow);
+				rte_free(signature);
+				rte_free(flow);
+				return -1;
+			}
 		}
 	}
+
+	/* Allocate and write request */
+	req = app_msg_alloc(app);
+	if (req == NULL) {
+		for (i = 0; i < n_keys; i++)
+			if (new_flow[i])
+				rte_free(flow[i]);
+
+		rte_free(flow_rsp);
+		rte_free(flow_req);
+		rte_free(new_flow);
+		rte_free(signature);
+		rte_free(flow);
+		return -1;
+	}
+
+	for (i = 0; i < n_keys; i++) {
+		app_pipeline_fc_key_convert(&key[i], flow_req[i].key, &signature[i]);
+		flow_req[i].port_id = port_id[i];
+	}
+
+	req->type = PIPELINE_MSG_REQ_CUSTOM;
+	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_ADD_BULK;
+	req->req = flow_req;
+	req->rsp = flow_rsp;
+	req->n_keys = n_keys;
+
+	/* Send request and wait for response */
+	rsp = app_msg_send_recv(app, pipeline_id, req, 10000);
+	if (rsp == NULL) {
+		for (i = 0; i < n_keys; i++)
+			if (new_flow[i])
+				rte_free(flow[i]);
+
+		rte_free(flow_rsp);
+		rte_free(flow_req);
+		rte_free(new_flow);
+		rte_free(signature);
+		rte_free(flow);
+		return -1;
+	}
+
+	/* Read response */
+	status = 0;
+
+	for (i = 0; i < rsp->n_keys; i++)
+		if ((flow_rsp[i].entry_ptr == NULL) ||
+			((new_flow[i] == 0) && (flow_rsp[i].key_found == 0)) ||
+			((new_flow[i] == 1) && (flow_rsp[i].key_found == 1)))
+			status = -1;
+
+	if (rsp->n_keys < n_keys)
+		status = -1;
+
+	/* Commit flows */
+	for (i = 0; i < rsp->n_keys; i++) {
+		memcpy(&flow[i]->key, &key[i], sizeof(flow[i]->key));
+		flow[i]->port_id = port_id[i];
+		flow[i]->signature = signature[i];
+		flow[i]->entry_ptr = flow_rsp[i].entry_ptr;
+
+		if (new_flow[i]) {
+			uint32_t bucket_id = signature[i] & (N_BUCKETS - 1);
+			TAILQ_INSERT_TAIL(&p->flows[bucket_id], flow[i], node);
+			p->n_flows++;
+		}
+	}
+
+	/* Free resources */
+	app_msg_free(app, rsp);
+
+	for (i = rsp->n_keys; i < n_keys; i++)
+		if (new_flow[i])
+			rte_free(flow[i]);
+
+	rte_free(flow_rsp);
+	rte_free(flow_req);
+	rte_free(new_flow);
+	rte_free(signature);
+	rte_free(flow);
+
+	return status;
 }
 
-void
-app_message_handle(struct app_core_fc_message_handle_params *params)
+int
+app_pipeline_fc_del(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key)
 {
-	struct rte_ring *ring_req = params->ring_req;
-	struct rte_ring *ring_resp;
-	void *msg;
-	struct app_msg_req *req;
-	struct app_msg_resp *resp;
-	struct rte_pipeline *p;
-	uint32_t *port_out_id;
-	uint32_t table_id;
-	int result;
-
-	/* Read request message */
-	result = rte_ring_sc_dequeue(ring_req, &msg);
-	if (result != 0)
-		return;
+	struct app_pipeline_fc *p;
+	struct app_pipeline_fc_flow *flow;
 
-	ring_resp = params->ring_resp;
-	p = params->p;
-	port_out_id = params->port_out_id;
-	table_id = params->table_id;
+	struct pipeline_fc_del_msg_req *req;
+	struct pipeline_fc_del_msg_rsp *rsp;
 
-	/* Handle request */
-	req = (struct app_msg_req *)rte_ctrlmbuf_data((struct rte_mbuf *)msg);
-	switch (req->type) {
-	case APP_MSG_REQ_PING:
-	{
-		result = 0;
-		break;
+	uint32_t signature, bucket_id;
+
+	/* Check input arguments */
+	if ((app == NULL) ||
+		(key == NULL))
+		return -1;
+
+	p = app_pipeline_data_fe(app, pipeline_id);
+	if (p == NULL)
+		return -1;
+
+	if (app_pipeline_fc_key_check(key) != 0)
+		return -1;
+
+	/* Find rule */
+	flow = app_pipeline_fc_flow_find(p, key);
+	if (flow == NULL)
+		return 0;
+
+	/* Allocate and write request */
+	req = app_msg_alloc(app);
+	if (req == NULL)
+		return -1;
+
+	req->type = PIPELINE_MSG_REQ_CUSTOM;
+	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_DEL;
+	app_pipeline_fc_key_convert(key, req->key, &signature);
+
+	/* Send request and wait for response */
+	rsp = app_msg_send_recv(app, pipeline_id, req, MSG_TIMEOUT_DEFAULT);
+	if (rsp == NULL)
+		return -1;
+
+	/* Read response */
+	if (rsp->status || !rsp->key_found) {
+		app_msg_free(app, rsp);
+		return -1;
 	}
 
-	case APP_MSG_REQ_FC_ADD_ALL:
-	{
-		result = app_flow_classification_table_init(p, port_out_id,
-			table_id);
-		break;
+	/* Remove rule */
+	bucket_id = signature & (N_BUCKETS - 1);
+	TAILQ_REMOVE(&p->flows[bucket_id], flow, node);
+	p->n_flows--;
+	rte_free(flow);
+
+	/* Free response */
+	app_msg_free(app, rsp);
+
+	return 0;
+}
+
+int
+app_pipeline_fc_add_default(struct app_params *app,
+	uint32_t pipeline_id,
+	uint32_t port_id)
+{
+	struct app_pipeline_fc *p;
+
+	struct pipeline_fc_add_default_msg_req *req;
+	struct pipeline_fc_add_default_msg_rsp *rsp;
+
+	/* Check input arguments */
+	if (app == NULL)
+		return -1;
+
+	p = app_pipeline_data_fe(app, pipeline_id);
+	if (p == NULL)
+		return -1;
+
+	if (port_id >= p->n_ports_in)
+		return -1;
+
+	/* Allocate and write request */
+	req = app_msg_alloc(app);
+	if (req == NULL)
+		return -1;
+
+	req->type = PIPELINE_MSG_REQ_CUSTOM;
+	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_ADD_DEFAULT;
+	req->port_id = port_id;
+
+	/* Send request and wait for response */
+	rsp = app_msg_send_recv(app, pipeline_id, req, MSG_TIMEOUT_DEFAULT);
+	if (rsp == NULL)
+		return -1;
+
+	/* Read response and write flow */
+	if (rsp->status || (rsp->entry_ptr == NULL)) {
+		app_msg_free(app, rsp);
+		return -1;
 	}
 
-	case APP_MSG_REQ_FC_ADD:
-	{
-		struct rte_pipeline_table_entry entry = {
-			.action = RTE_PIPELINE_ACTION_PORT,
-			{.port_id = port_out_id[req->flow_classif_add.port]},
-		};
+	p->default_flow_port_id = port_id;
+	p->default_flow_entry_ptr = rsp->entry_ptr;
 
-		struct rte_pipeline_table_entry *entry_ptr;
+	/* Commit default flow */
+	p->default_flow_present = 1;
 
-		int key_found;
+	/* Free response */
+	app_msg_free(app, rsp);
 
-		result = rte_pipeline_table_entry_add(p, table_id,
-			req->flow_classif_add.key_raw, &entry, &key_found,
-			&entry_ptr);
-		break;
+	return 0;
+}
+
+int
+app_pipeline_fc_del_default(struct app_params *app,
+	uint32_t pipeline_id)
+{
+	struct app_pipeline_fc *p;
+
+	struct pipeline_fc_del_default_msg_req *req;
+	struct pipeline_fc_del_default_msg_rsp *rsp;
+
+	/* Check input arguments */
+	if (app == NULL)
+		return -1;
+
+	p = app_pipeline_data_fe(app, pipeline_id);
+	if (p == NULL)
+		return -EINVAL;
+
+	/* Allocate and write request */
+	req = app_msg_alloc(app);
+	if (req == NULL)
+		return -1;
+
+	req->type = PIPELINE_MSG_REQ_CUSTOM;
+	req->subtype = PIPELINE_FC_MSG_REQ_FLOW_DEL_DEFAULT;
+
+	/* Send request and wait for response */
+	rsp = app_msg_send_recv(app, pipeline_id, req, MSG_TIMEOUT_DEFAULT);
+	if (rsp == NULL)
+		return -1;
+
+	/* Read response */
+	if (rsp->status) {
+		app_msg_free(app, rsp);
+		return -1;
 	}
 
-	case APP_MSG_REQ_FC_DEL:
-	{
-		int key_found;
+	/* Commit default flow removal */
+	p->default_flow_present = 0;
+
+	/* Free response */
+	app_msg_free(app, rsp);
+
+	return 0;
+}
+
+/*
+ * Flow ls
+ */
+
+static void
+print_fc_qinq_flow(struct app_pipeline_fc_flow *flow)
+{
+	printf("(SVLAN = %u, "
+		"CVLAN = %u) => "
+		"Port = %u "
+		"(signature = 0x%08x, "
+		"entry_ptr = %p)\n",
+
+		flow->key.key.qinq.svlan,
+		flow->key.key.qinq.cvlan,
+		flow->port_id,
+		flow->signature,
+		flow->entry_ptr);
+}
+
+static void
+print_fc_ipv4_5tuple_flow(struct app_pipeline_fc_flow *flow)
+{
+	printf("(SA = %u.%u.%u.%u, "
+		   "DA = %u.%u.%u.%u, "
+		   "SP = %u, "
+		   "DP = %u, "
+		   "Proto = %u) => "
+		   "Port = %u "
+		   "(signature = 0x%08x, "
+		   "entry_ptr = %p)\n",
+
+		   (flow->key.key.ipv4_5tuple.ip_src >> 24) & 0xFF,
+		   (flow->key.key.ipv4_5tuple.ip_src >> 16) & 0xFF,
+		   (flow->key.key.ipv4_5tuple.ip_src >> 8) & 0xFF,
+		   flow->key.key.ipv4_5tuple.ip_src & 0xFF,
+
+		   (flow->key.key.ipv4_5tuple.ip_dst >> 24) & 0xFF,
+		   (flow->key.key.ipv4_5tuple.ip_dst >> 16) & 0xFF,
+		   (flow->key.key.ipv4_5tuple.ip_dst >> 8) & 0xFF,
+		   flow->key.key.ipv4_5tuple.ip_dst & 0xFF,
+
+		   flow->key.key.ipv4_5tuple.port_src,
+		   flow->key.key.ipv4_5tuple.port_dst,
+
+		   flow->key.key.ipv4_5tuple.proto,
+
+		   flow->port_id,
+		   flow->signature,
+		   flow->entry_ptr);
+}
+
+static void
+print_fc_ipv6_5tuple_flow(struct app_pipeline_fc_flow *flow)
+{
+	printf("(SA = %02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x, "
+		"DA = %02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x, "
+		"SP = %u, "
+		"DP = %u "
+		"Proto = %u "
+		"=> Port = %u "
+		"(signature = 0x%08x, "
+		"entry_ptr = %p)\n",
+
+		flow->key.key.ipv6_5tuple.ip_src[0],
+		flow->key.key.ipv6_5tuple.ip_src[1],
+		flow->key.key.ipv6_5tuple.ip_src[2],
+		flow->key.key.ipv6_5tuple.ip_src[3],
+		flow->key.key.ipv6_5tuple.ip_src[4],
+		flow->key.key.ipv6_5tuple.ip_src[5],
+		flow->key.key.ipv6_5tuple.ip_src[6],
+		flow->key.key.ipv6_5tuple.ip_src[7],
+		flow->key.key.ipv6_5tuple.ip_src[8],
+		flow->key.key.ipv6_5tuple.ip_src[9],
+		flow->key.key.ipv6_5tuple.ip_src[10],
+		flow->key.key.ipv6_5tuple.ip_src[11],
+		flow->key.key.ipv6_5tuple.ip_src[12],
+		flow->key.key.ipv6_5tuple.ip_src[13],
+		flow->key.key.ipv6_5tuple.ip_src[14],
+		flow->key.key.ipv6_5tuple.ip_src[15],
 
-		result = rte_pipeline_table_entry_delete(p, table_id,
-			req->flow_classif_add.key_raw, &key_found, NULL);
+		flow->key.key.ipv6_5tuple.ip_dst[0],
+		flow->key.key.ipv6_5tuple.ip_dst[1],
+		flow->key.key.ipv6_5tuple.ip_dst[2],
+		flow->key.key.ipv6_5tuple.ip_dst[3],
+		flow->key.key.ipv6_5tuple.ip_dst[4],
+		flow->key.key.ipv6_5tuple.ip_dst[5],
+		flow->key.key.ipv6_5tuple.ip_dst[6],
+		flow->key.key.ipv6_5tuple.ip_dst[7],
+		flow->key.key.ipv6_5tuple.ip_dst[8],
+		flow->key.key.ipv6_5tuple.ip_dst[9],
+		flow->key.key.ipv6_5tuple.ip_dst[10],
+		flow->key.key.ipv6_5tuple.ip_dst[11],
+		flow->key.key.ipv6_5tuple.ip_dst[12],
+		flow->key.key.ipv6_5tuple.ip_dst[13],
+		flow->key.key.ipv6_5tuple.ip_dst[14],
+		flow->key.key.ipv6_5tuple.ip_dst[15],
+
+		flow->key.key.ipv6_5tuple.port_src,
+		flow->key.key.ipv6_5tuple.port_dst,
+
+		flow->key.key.ipv6_5tuple.proto,
+
+		flow->port_id,
+		flow->signature,
+		flow->entry_ptr);
+}
+
+static void
+print_fc_flow(struct app_pipeline_fc_flow *flow)
+{
+	switch (flow->key.type) {
+	case FLOW_KEY_QINQ:
+		print_fc_qinq_flow(flow);
+		break;
+
+	case FLOW_KEY_IPV4_5TUPLE:
+		print_fc_ipv4_5tuple_flow(flow);
+		break;
+
+	case FLOW_KEY_IPV6_5TUPLE:
+		print_fc_ipv6_5tuple_flow(flow);
 		break;
 	}
+}
 
-	default:
-		rte_panic("FC Unrecognized message type (%u)\n", req->type);
+static int
+app_pipeline_fc_ls(struct app_params *app,
+		uint32_t pipeline_id)
+{
+	struct app_pipeline_fc *p;
+	struct app_pipeline_fc_flow *flow;
+	uint32_t i;
+
+	/* Check input arguments */
+	if (app == NULL)
+		return -1;
+
+	p = app_pipeline_data_fe(app, pipeline_id);
+	if (p == NULL)
+		return -1;
+
+	for (i = 0; i < N_BUCKETS; i++)
+		TAILQ_FOREACH(flow, &p->flows[i], node)
+			print_fc_flow(flow);
+
+	if (p->default_flow_present)
+		printf("Default flow: port %u (entry ptr = %p)\n",
+			p->default_flow_port_id,
+			p->default_flow_entry_ptr);
+	else
+		printf("Default: DROP\n");
+
+	return 0;
+}
+
+/*
+ * flow add qinq
+ */
+
+struct cmd_fc_add_qinq_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t qinq_string;
+	uint16_t svlan;
+	uint16_t cvlan;
+	uint32_t port;
+};
+
+static void
+cmd_fc_add_qinq_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_qinq_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_QINQ;
+	key.key.qinq.svlan = params->svlan;
+	key.key.qinq.cvlan = params->cvlan;
+
+	status = app_pipeline_fc_add(app, params->pipeline_id, &key, params->port);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_qinq_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_result, qinq_string, "qinq");
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_svlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_result, svlan, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_cvlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_result, cvlan, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_result, port, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_qinq = {
+	.f = cmd_fc_add_qinq_parsed,
+	.data = NULL,
+	.help_str = "Flow add (Q-in-Q)",
+	.tokens = {
+		(void *) &cmd_fc_add_qinq_p_string,
+		(void *) &cmd_fc_add_qinq_pipeline_id,
+		(void *) &cmd_fc_add_qinq_flow_string,
+		(void *) &cmd_fc_add_qinq_add_string,
+		(void *) &cmd_fc_add_qinq_qinq_string,
+		(void *) &cmd_fc_add_qinq_svlan,
+		(void *) &cmd_fc_add_qinq_cvlan,
+		(void *) &cmd_fc_add_qinq_port,
+		NULL,
+	},
+};
+
+/*
+ * flow add qinq all
+ */
+
+struct cmd_fc_add_qinq_all_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t qinq_string;
+	cmdline_fixed_string_t all_string;
+	uint32_t n_flows;
+	uint32_t n_ports;
+};
+
+#ifndef N_FLOWS_BULK
+#define N_FLOWS_BULK                             4096
+#endif
+
+static void
+cmd_fc_add_qinq_all_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_qinq_all_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key *key;
+	uint32_t *port_id;
+	uint32_t flow_id;
+
+	key = rte_zmalloc(NULL, N_FLOWS_BULK * sizeof(*key), RTE_CACHE_LINE_SIZE);
+	if (key == NULL) {
+		printf("Memory allocation failed\n");
+		return;
 	}
 
-	/* Fill in response message */
-	resp = (struct app_msg_resp *)rte_ctrlmbuf_data((struct rte_mbuf *)msg);
-	resp->result = result;
+	port_id = rte_malloc(NULL, N_FLOWS_BULK * sizeof(*port_id), RTE_CACHE_LINE_SIZE);
+	if (port_id == NULL) {
+		rte_free(key);
+		printf("Memory allocation failed\n");
+		return;
+	}
 
-	/* Send response */
-	do {
-		result = rte_ring_sp_enqueue(ring_resp, msg);
-	} while (result == -ENOBUFS);
+	for (flow_id = 0; flow_id < params->n_flows; flow_id++) {
+		uint32_t pos = flow_id & (N_FLOWS_BULK - 1);
+
+		key[pos].type = FLOW_KEY_QINQ;
+		key[pos].key.qinq.svlan = flow_id >> 12;
+		key[pos].key.qinq.cvlan = flow_id & 0xFFF;
+
+		port_id[pos] = flow_id % params->n_ports;
+
+		if ((pos == N_FLOWS_BULK - 1) ||
+			(flow_id == params->n_flows - 1)) {
+			int status;
+
+			status = app_pipeline_fc_add_bulk(app,
+				params->pipeline_id,
+				key,
+				port_id,
+				pos + 1);
+
+			if (status != 0) {
+				printf("Command failed\n");
+
+				break;
+			}
+		}
+	}
+
+	rte_free(port_id);
+	rte_free(key);
+}
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_all_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_all_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_all_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_all_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_all_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_all_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_all_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_all_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_all_qinq_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_all_result, qinq_string, "qinq");
+
+cmdline_parse_token_string_t cmd_fc_add_qinq_all_all_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_qinq_all_result, all_string, "all");
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_all_n_flows =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_all_result, n_flows, UINT32);
+
+cmdline_parse_token_num_t cmd_fc_add_qinq_all_n_ports =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_qinq_all_result, n_ports, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_qinq_all = {
+	.f = cmd_fc_add_qinq_all_parsed,
+	.data = NULL,
+	.help_str = "Flow add all (Q-in-Q)",
+	.tokens = {
+		(void *) &cmd_fc_add_qinq_all_p_string,
+		(void *) &cmd_fc_add_qinq_all_pipeline_id,
+		(void *) &cmd_fc_add_qinq_all_flow_string,
+		(void *) &cmd_fc_add_qinq_all_add_string,
+		(void *) &cmd_fc_add_qinq_all_qinq_string,
+		(void *) &cmd_fc_add_qinq_all_all_string,
+		(void *) &cmd_fc_add_qinq_all_n_flows,
+		(void *) &cmd_fc_add_qinq_all_n_ports,
+		NULL,
+	},
+};
+
+/*
+ * flow add ipv4_5tuple
+ */
+
+struct cmd_fc_add_ipv4_5tuple_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t ipv4_5tuple_string;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+	uint32_t port;
+};
+
+static void
+cmd_fc_add_ipv4_5tuple_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_ipv4_5tuple_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_IPV4_5TUPLE;
+	key.key.ipv4_5tuple.ip_src = rte_bswap32(params->ip_src.addr.ipv4.s_addr);
+	key.key.ipv4_5tuple.ip_dst = rte_bswap32(params->ip_dst.addr.ipv4.s_addr);
+	key.key.ipv4_5tuple.port_src = params->port_src;
+	key.key.ipv4_5tuple.port_dst = params->port_dst;
+	key.key.ipv4_5tuple.proto = params->proto;
+
+	status = app_pipeline_fc_add(app, params->pipeline_id, &key, params->port);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_ipv4_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, ipv4_5tuple_string, "ipv4_5tuple");
+
+cmdline_parse_token_ipaddr_t cmd_fc_add_ipv4_5tuple_ip_src =
+	TOKEN_IPV4_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, ip_src);
+
+cmdline_parse_token_ipaddr_t cmd_fc_add_ipv4_5tuple_ip_dst =
+	TOKEN_IPV4_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, ip_dst);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_port_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, port_src, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_port_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, port_dst, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, proto, UINT32);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_result, port, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_ipv4_5tuple = {
+	.f = cmd_fc_add_ipv4_5tuple_parsed,
+	.data = NULL,
+	.help_str = "Flow add (IPv4 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_add_ipv4_5tuple_p_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_pipeline_id,
+		(void *) &cmd_fc_add_ipv4_5tuple_flow_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_add_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_ipv4_5tuple_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_ip_src,
+		(void *) &cmd_fc_add_ipv4_5tuple_ip_dst,
+		(void *) &cmd_fc_add_ipv4_5tuple_port_src,
+		(void *) &cmd_fc_add_ipv4_5tuple_port_dst,
+		(void *) &cmd_fc_add_ipv4_5tuple_proto,
+		(void *) &cmd_fc_add_ipv4_5tuple_port,
+		NULL,
+	},
+};
+
+/*
+ * flow add ipv4_5tuple all
+ */
+
+struct cmd_fc_add_ipv4_5tuple_all_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t ipv4_5tuple_string;
+	cmdline_fixed_string_t all_string;
+	uint32_t n_flows;
+	uint32_t n_ports;
+};
+
+static void
+cmd_fc_add_ipv4_5tuple_all_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_ipv4_5tuple_all_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key *key;
+	uint32_t *port_id;
+	uint32_t flow_id;
+
+	key = rte_zmalloc(NULL, N_FLOWS_BULK * sizeof(*key), RTE_CACHE_LINE_SIZE);
+	if (key == NULL) {
+		printf("Memory allocation failed\n");
+		return;
+	}
+
+	port_id = rte_malloc(NULL, N_FLOWS_BULK * sizeof(*port_id), RTE_CACHE_LINE_SIZE);
+	if (port_id == NULL) {
+		rte_free(key);
+		printf("Memory allocation failed\n");
+		return;
+	}
+
+	for (flow_id = 0; flow_id < params->n_flows; flow_id++) {
+		uint32_t pos = flow_id & (N_FLOWS_BULK - 1);
+
+		key[pos].type = FLOW_KEY_IPV4_5TUPLE;
+		key[pos].key.ipv4_5tuple.ip_src = 0;
+		key[pos].key.ipv4_5tuple.ip_dst = flow_id;
+		key[pos].key.ipv4_5tuple.port_src = 0;
+		key[pos].key.ipv4_5tuple.port_dst = 0;
+		key[pos].key.ipv4_5tuple.proto = 6;
+
+		port_id[pos] = flow_id % params->n_ports;
+
+		if ((pos == N_FLOWS_BULK - 1) ||
+			(flow_id == params->n_flows - 1)) {
+			int status;
+
+			status = app_pipeline_fc_add_bulk(app,
+				params->pipeline_id,
+				key,
+				port_id,
+				pos + 1);
+
+			if (status != 0) {
+				printf("Command failed\n");
+
+				break;
+			}
+		}
+	}
+
+	rte_free(port_id);
+	rte_free(key);
+}
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_all_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_all_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_all_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_all_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_all_ipv4_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, ipv4_5tuple_string, "ipv4_5tuple");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv4_5tuple_all_all_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, all_string, "all");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_all_n_flows =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, n_flows, UINT32);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv4_5tuple_all_n_ports =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv4_5tuple_all_result, n_ports, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_ipv4_5tuple_all = {
+	.f = cmd_fc_add_ipv4_5tuple_all_parsed,
+	.data = NULL,
+	.help_str = "Flow add all (IPv4 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_add_ipv4_5tuple_all_p_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_pipeline_id,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_flow_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_add_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_ipv4_5tuple_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_all_string,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_n_flows,
+		(void *) &cmd_fc_add_ipv4_5tuple_all_n_ports,
+		NULL,
+	},
+};
+
+/*
+ * flow add ipv6_5tuple
+ */
+
+struct cmd_fc_add_ipv6_5tuple_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t ipv6_5tuple_string;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+	uint32_t port;
+};
+
+static void
+cmd_fc_add_ipv6_5tuple_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_ipv6_5tuple_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_IPV6_5TUPLE;
+	memcpy(key.key.ipv6_5tuple.ip_src, params->ip_src.addr.ipv6.s6_addr, 16);
+	memcpy(key.key.ipv6_5tuple.ip_dst, params->ip_dst.addr.ipv6.s6_addr, 16);
+	key.key.ipv6_5tuple.port_src = params->port_src;
+	key.key.ipv6_5tuple.port_dst = params->port_dst;
+	key.key.ipv6_5tuple.proto = params->proto;
+
+	status = app_pipeline_fc_add(app, params->pipeline_id, &key, params->port);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_ipv6_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, ipv6_5tuple_string, "ipv6_5tuple");
+
+cmdline_parse_token_ipaddr_t cmd_fc_add_ipv6_5tuple_ip_src =
+	TOKEN_IPV6_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, ip_src);
+
+cmdline_parse_token_ipaddr_t cmd_fc_add_ipv6_5tuple_ip_dst =
+	TOKEN_IPV6_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, ip_dst);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_port_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, port_src, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_port_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, port_dst, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, proto, UINT32);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_result, port, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_ipv6_5tuple = {
+	.f = cmd_fc_add_ipv6_5tuple_parsed,
+	.data = NULL,
+	.help_str = "Flow add (IPv6 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_add_ipv6_5tuple_p_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_pipeline_id,
+		(void *) &cmd_fc_add_ipv6_5tuple_flow_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_add_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_ipv6_5tuple_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_ip_src,
+		(void *) &cmd_fc_add_ipv6_5tuple_ip_dst,
+		(void *) &cmd_fc_add_ipv6_5tuple_port_src,
+		(void *) &cmd_fc_add_ipv6_5tuple_port_dst,
+		(void *) &cmd_fc_add_ipv6_5tuple_proto,
+		(void *) &cmd_fc_add_ipv6_5tuple_port,
+		NULL,
+	},
+};
+
+/*
+ * flow add ipv6_5tuple all
+ */
+
+struct cmd_fc_add_ipv6_5tuple_all_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t ipv6_5tuple_string;
+	cmdline_fixed_string_t all_string;
+	uint32_t n_flows;
+	uint32_t n_ports;
+};
+
+static void
+cmd_fc_add_ipv6_5tuple_all_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_ipv6_5tuple_all_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key *key;
+	uint32_t *port_id;
+	uint32_t flow_id;
+
+	key = rte_zmalloc(NULL, N_FLOWS_BULK * sizeof(*key), RTE_CACHE_LINE_SIZE);
+	if (key == NULL) {
+		printf("Memory allocation failed\n");
+		return;
+	}
+
+	port_id = rte_malloc(NULL, N_FLOWS_BULK * sizeof(*port_id), RTE_CACHE_LINE_SIZE);
+	if (port_id == NULL) {
+		rte_free(key);
+		printf("Memory allocation failed\n");
+		return;
+	}
+
+	for (flow_id = 0; flow_id < params->n_flows; flow_id++) {
+		uint32_t pos = flow_id & (N_FLOWS_BULK - 1);
+		uint32_t *x;
+
+		key[pos].type = FLOW_KEY_IPV6_5TUPLE;
+		x = (uint32_t *) key[pos].key.ipv6_5tuple.ip_dst;
+		*x = rte_bswap32(flow_id);
+		key[pos].key.ipv6_5tuple.proto = 6;
+
+		port_id[pos] = flow_id % params->n_ports;
+
+		if ((pos == N_FLOWS_BULK - 1) ||
+			(flow_id == params->n_flows - 1)) {
+			int status;
+
+			status = app_pipeline_fc_add_bulk(app,
+				params->pipeline_id,
+				key,
+				port_id,
+				pos + 1);
+
+			if (status != 0) {
+				printf("Command failed\n");
+
+				break;
+			}
+		}
+	}
+
+	rte_free(port_id);
+	rte_free(key);
 }
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_all_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_all_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_all_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_all_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_all_ipv6_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, ipv6_5tuple_string, "ipv6_5tuple");
+
+cmdline_parse_token_string_t cmd_fc_add_ipv6_5tuple_all_all_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, all_string, "all");
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_all_n_flows =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, n_flows, UINT32);
+
+cmdline_parse_token_num_t cmd_fc_add_ipv6_5tuple_all_n_ports =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_ipv6_5tuple_all_result, n_ports, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_ipv6_5tuple_all = {
+	.f = cmd_fc_add_ipv6_5tuple_all_parsed,
+	.data = NULL,
+	.help_str = "Flow add all (IPv6 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_add_ipv6_5tuple_all_p_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_pipeline_id,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_flow_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_add_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_ipv6_5tuple_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_all_string,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_n_flows,
+		(void *) &cmd_fc_add_ipv6_5tuple_all_n_ports,
+		NULL,
+	},
+};
+
+/*
+ * flow del qinq
+ */
+struct cmd_fc_del_qinq_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_fixed_string_t qinq_string;
+	uint16_t svlan;
+	uint16_t cvlan;
+};
+
+static void
+cmd_fc_del_qinq_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_del_qinq_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_QINQ;
+	key.key.qinq.svlan = params->svlan;
+	key.key.qinq.cvlan = params->cvlan;
+	status = app_pipeline_fc_del(app, params->pipeline_id, &key);
+
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_del_qinq_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_qinq_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_del_qinq_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_qinq_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_del_qinq_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_qinq_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_del_qinq_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_qinq_result, del_string, "del");
+
+cmdline_parse_token_string_t cmd_fc_del_qinq_qinq_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_qinq_result, qinq_string, "qinq");
+
+cmdline_parse_token_num_t cmd_fc_del_qinq_svlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_qinq_result, svlan, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_del_qinq_cvlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_qinq_result, cvlan, UINT16);
+
+cmdline_parse_inst_t cmd_fc_del_qinq = {
+	.f = cmd_fc_del_qinq_parsed,
+	.data = NULL,
+	.help_str = "Flow delete (Q-in-Q)",
+	.tokens = {
+		(void *) &cmd_fc_del_qinq_p_string,
+		(void *) &cmd_fc_del_qinq_pipeline_id,
+		(void *) &cmd_fc_del_qinq_flow_string,
+		(void *) &cmd_fc_del_qinq_del_string,
+		(void *) &cmd_fc_del_qinq_qinq_string,
+		(void *) &cmd_fc_del_qinq_svlan,
+		(void *) &cmd_fc_del_qinq_cvlan,
+		NULL,
+	},
+};
+
+/*
+ * flow del ipv4_5tuple
+ */
+
+struct cmd_fc_del_ipv4_5tuple_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_fixed_string_t ipv4_5tuple_string;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+};
+
+static void
+cmd_fc_del_ipv4_5tuple_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_del_ipv4_5tuple_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_IPV4_5TUPLE;
+	key.key.ipv4_5tuple.ip_src = rte_bswap32(params->ip_src.addr.ipv4.s_addr);
+	key.key.ipv4_5tuple.ip_dst = rte_bswap32(params->ip_dst.addr.ipv4.s_addr);
+	key.key.ipv4_5tuple.port_src = params->port_src;
+	key.key.ipv4_5tuple.port_dst = params->port_dst;
+	key.key.ipv4_5tuple.proto = params->proto;
+
+	status = app_pipeline_fc_del(app, params->pipeline_id, &key);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_del_ipv4_5tuple_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_del_ipv4_5tuple_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_del_ipv4_5tuple_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_del_ipv4_5tuple_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, del_string, "del");
+
+cmdline_parse_token_string_t cmd_fc_del_ipv4_5tuple_ipv4_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, ipv4_5tuple_string, "ipv4_5tuple");
+
+cmdline_parse_token_ipaddr_t cmd_fc_del_ipv4_5tuple_ip_src =
+	TOKEN_IPV4_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, ip_src);
+
+cmdline_parse_token_ipaddr_t cmd_fc_del_ipv4_5tuple_ip_dst =
+	TOKEN_IPV4_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, ip_dst);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv4_5tuple_port_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, port_src, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv4_5tuple_port_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, port_dst, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv4_5tuple_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv4_5tuple_result, proto, UINT32);
+
+cmdline_parse_inst_t cmd_fc_del_ipv4_5tuple = {
+	.f = cmd_fc_del_ipv4_5tuple_parsed,
+	.data = NULL,
+	.help_str = "Flow delete (IPv4 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_del_ipv4_5tuple_p_string,
+		(void *) &cmd_fc_del_ipv4_5tuple_pipeline_id,
+		(void *) &cmd_fc_del_ipv4_5tuple_flow_string,
+		(void *) &cmd_fc_del_ipv4_5tuple_del_string,
+		(void *) &cmd_fc_del_ipv4_5tuple_ipv4_5tuple_string,
+		(void *) &cmd_fc_del_ipv4_5tuple_ip_src,
+		(void *) &cmd_fc_del_ipv4_5tuple_ip_dst,
+		(void *) &cmd_fc_del_ipv4_5tuple_port_src,
+		(void *) &cmd_fc_del_ipv4_5tuple_port_dst,
+		(void *) &cmd_fc_del_ipv4_5tuple_proto,
+		NULL,
+	},
+};
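+
+/*
+ * Example CLI usage (values are illustrative; pipeline 2 is assumed to be a
+ * flow classification pipeline):
+ *
+ *   p 2 flow del ipv4_5tuple 192.168.0.10 10.0.0.1 100 200 6
+ *
+ * deletes the flow keyed by the given source/destination IPv4 addresses,
+ * source/destination ports and IP protocol (6 = TCP).
+ */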
+
+/*
+ * flow del ipv6_5tuple
+ */
+
+struct cmd_fc_del_ipv6_5tuple_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_fixed_string_t ipv6_5tuple_string;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+};
+
+static void
+cmd_fc_del_ipv6_5tuple_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_del_ipv6_5tuple_result *params = parsed_result;
+	struct app_params *app = data;
+	struct pipeline_fc_key key;
+	int status;
+
+	memset(&key, 0, sizeof(key));
+	key.type = FLOW_KEY_IPV6_5TUPLE;
+	memcpy(key.key.ipv6_5tuple.ip_src, params->ip_src.addr.ipv6.s6_addr, 16);
+	memcpy(key.key.ipv6_5tuple.ip_dst, params->ip_dst.addr.ipv6.s6_addr, 16);
+	key.key.ipv6_5tuple.port_src = params->port_src;
+	key.key.ipv6_5tuple.port_dst = params->port_dst;
+	key.key.ipv6_5tuple.proto = params->proto;
+
+	status = app_pipeline_fc_del(app, params->pipeline_id, &key);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_del_ipv6_5tuple_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_del_ipv6_5tuple_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_del_ipv6_5tuple_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_del_ipv6_5tuple_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, del_string, "del");
+
+cmdline_parse_token_string_t cmd_fc_del_ipv6_5tuple_ipv6_5tuple_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, ipv6_5tuple_string, "ipv6_5tuple");
+
+cmdline_parse_token_ipaddr_t cmd_fc_del_ipv6_5tuple_ip_src =
+	TOKEN_IPV6_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, ip_src);
+
+cmdline_parse_token_ipaddr_t cmd_fc_del_ipv6_5tuple_ip_dst =
+	TOKEN_IPV6_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, ip_dst);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv6_5tuple_port_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, port_src, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv6_5tuple_port_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, port_dst, UINT16);
+
+cmdline_parse_token_num_t cmd_fc_del_ipv6_5tuple_proto =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_ipv6_5tuple_result, proto, UINT32);
+
+cmdline_parse_inst_t cmd_fc_del_ipv6_5tuple = {
+	.f = cmd_fc_del_ipv6_5tuple_parsed,
+	.data = NULL,
+	.help_str = "Flow delete (IPv6 5-tuple)",
+	.tokens = {
+		(void *) &cmd_fc_del_ipv6_5tuple_p_string,
+		(void *) &cmd_fc_del_ipv6_5tuple_pipeline_id,
+		(void *) &cmd_fc_del_ipv6_5tuple_flow_string,
+		(void *) &cmd_fc_del_ipv6_5tuple_del_string,
+		(void *) &cmd_fc_del_ipv6_5tuple_ipv6_5tuple_string,
+		(void *) &cmd_fc_del_ipv6_5tuple_ip_src,
+		(void *) &cmd_fc_del_ipv6_5tuple_ip_dst,
+		(void *) &cmd_fc_del_ipv6_5tuple_port_src,
+		(void *) &cmd_fc_del_ipv6_5tuple_port_dst,
+		(void *) &cmd_fc_del_ipv6_5tuple_proto,
+		NULL,
+	},
+};
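+
+/*
+ * Example CLI usage (values are illustrative; pipeline 2 is assumed to be a
+ * flow classification pipeline):
+ *
+ *   p 2 flow del ipv6_5tuple fe80::1 fe80::2 100 200 17
+ *
+ * deletes the flow keyed by the given source/destination IPv6 addresses,
+ * source/destination ports and IP protocol (17 = UDP).
+ */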
+
+/*
+ * flow add default
+ */
+
+struct cmd_fc_add_default_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t add_string;
+	cmdline_fixed_string_t default_string;
+	uint32_t port;
+};
+
+static void
+cmd_fc_add_default_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_add_default_result *params = parsed_result;
+	struct app_params *app = data;
+	int status;
+
+	status = app_pipeline_fc_add_default(app, params->pipeline_id,
+		params->port);
+
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_add_default_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_default_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_add_default_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_default_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_add_default_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_default_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_add_default_add_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_default_result, add_string, "add");
+
+cmdline_parse_token_string_t cmd_fc_add_default_default_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_add_default_result, default_string, "default");
+
+cmdline_parse_token_num_t cmd_fc_add_default_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_add_default_result, port, UINT32);
+
+cmdline_parse_inst_t cmd_fc_add_default = {
+	.f = cmd_fc_add_default_parsed,
+	.data = NULL,
+	.help_str = "Flow add default",
+	.tokens = {
+		(void *) &cmd_fc_add_default_p_string,
+		(void *) &cmd_fc_add_default_pipeline_id,
+		(void *) &cmd_fc_add_default_flow_string,
+		(void *) &cmd_fc_add_default_add_string,
+		(void *) &cmd_fc_add_default_default_string,
+		(void *) &cmd_fc_add_default_port,
+		NULL,
+	},
+};
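+
+/*
+ * Example CLI usage (values are illustrative; pipeline 2 is assumed to be a
+ * flow classification pipeline and port 3 one of its output ports):
+ *
+ *   p 2 flow add default 3
+ *
+ * sends all packets that miss the flow table to output port 3.
+ */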
+
+/*
+ * flow del default
+ */
+
+struct cmd_fc_del_default_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t del_string;
+	cmdline_fixed_string_t default_string;
+};
+
+static void
+cmd_fc_del_default_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_del_default_result *params = parsed_result;
+	struct app_params *app = data;
+	int status;
+
+	status = app_pipeline_fc_del_default(app, params->pipeline_id);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_del_default_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_default_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_del_default_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_del_default_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_del_default_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_default_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_del_default_del_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_default_result, del_string, "del");
+
+cmdline_parse_token_string_t cmd_fc_del_default_default_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_del_default_result, default_string, "default");
+
+cmdline_parse_inst_t cmd_fc_del_default = {
+	.f = cmd_fc_del_default_parsed,
+	.data = NULL,
+	.help_str = "Flow delete default",
+	.tokens = {
+		(void *) &cmd_fc_del_default_p_string,
+		(void *) &cmd_fc_del_default_pipeline_id,
+		(void *) &cmd_fc_del_default_flow_string,
+		(void *) &cmd_fc_del_default_del_string,
+		(void *) &cmd_fc_del_default_default_string,
+		NULL,
+	},
+};
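+
+/*
+ * Example CLI usage (pipeline 2 is assumed to be a flow classification
+ * pipeline):
+ *
+ *   p 2 flow del default
+ */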
+
+/*
+ * flow ls
+ */
+
+struct cmd_fc_ls_result {
+	cmdline_fixed_string_t p_string;
+	uint32_t pipeline_id;
+	cmdline_fixed_string_t flow_string;
+	cmdline_fixed_string_t ls_string;
+};
+
+static void
+cmd_fc_ls_parsed(
+	void *parsed_result,
+	__rte_unused struct cmdline *cl,
+	void *data)
+{
+	struct cmd_fc_ls_result *params = parsed_result;
+	struct app_params *app = data;
+	int status;
+
+	status = app_pipeline_fc_ls(app, params->pipeline_id);
+	if (status != 0)
+		printf("Command failed\n");
+}
+
+cmdline_parse_token_string_t cmd_fc_ls_p_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_ls_result, p_string, "p");
+
+cmdline_parse_token_num_t cmd_fc_ls_pipeline_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_fc_ls_result, pipeline_id, UINT32);
+
+cmdline_parse_token_string_t cmd_fc_ls_flow_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_ls_result, flow_string, "flow");
+
+cmdline_parse_token_string_t cmd_fc_ls_ls_string =
+	TOKEN_STRING_INITIALIZER(struct cmd_fc_ls_result, ls_string, "ls");
+
+cmdline_parse_inst_t cmd_fc_ls = {
+	.f = cmd_fc_ls_parsed,
+	.data = NULL,
+	.help_str = "Flow list",
+	.tokens = {
+		(void *) &cmd_fc_ls_p_string,
+		(void *) &cmd_fc_ls_pipeline_id,
+		(void *) &cmd_fc_ls_flow_string,
+		(void *) &cmd_fc_ls_ls_string,
+		NULL,
+	},
+};
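+
+/*
+ * Example CLI usage (pipeline 2 is assumed to be a flow classification
+ * pipeline):
+ *
+ *   p 2 flow ls
+ *
+ * lists the flows currently configured on pipeline 2.
+ */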
+
+static cmdline_parse_ctx_t pipeline_cmds[] = {
+	(cmdline_parse_inst_t *) &cmd_fc_add_qinq,
+	(cmdline_parse_inst_t *) &cmd_fc_add_ipv4_5tuple,
+	(cmdline_parse_inst_t *) &cmd_fc_add_ipv6_5tuple,
+
+	(cmdline_parse_inst_t *) &cmd_fc_del_qinq,
+	(cmdline_parse_inst_t *) &cmd_fc_del_ipv4_5tuple,
+	(cmdline_parse_inst_t *) &cmd_fc_del_ipv6_5tuple,
+
+	(cmdline_parse_inst_t *) &cmd_fc_add_default,
+	(cmdline_parse_inst_t *) &cmd_fc_del_default,
+
+	(cmdline_parse_inst_t *) &cmd_fc_add_qinq_all,
+	(cmdline_parse_inst_t *) &cmd_fc_add_ipv4_5tuple_all,
+	(cmdline_parse_inst_t *) &cmd_fc_add_ipv6_5tuple_all,
+
+	(cmdline_parse_inst_t *) &cmd_fc_ls,
+	NULL,
+};
+
+static struct pipeline_fe_ops pipeline_flow_classification_fe_ops = {
+	.f_init = app_pipeline_fc_init,
+	.f_free = app_pipeline_fc_free,
+	.cmds = pipeline_cmds,
+};
+
+struct pipeline_type pipeline_flow_classification = {
+	.name = "FLOW_CLASSIFICATION",
+	.be_ops = &pipeline_flow_classification_be_ops,
+	.fe_ops = &pipeline_flow_classification_fe_ops,
+};
diff --git a/examples/ip_pipeline/pipeline/pipeline_flow_classification.h b/examples/ip_pipeline/pipeline/pipeline_flow_classification.h
new file mode 100644
index 0000000..dd67755
--- /dev/null
+++ b/examples/ip_pipeline/pipeline/pipeline_flow_classification.h
@@ -0,0 +1,106 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_PIPELINE_FLOW_CLASSIFICATION_H__
+#define __INCLUDE_PIPELINE_FLOW_CLASSIFICATION_H__
+
+#include "pipeline.h"
+#include "pipeline_flow_classification_be.h"
+
+enum flow_key_type {
+	FLOW_KEY_QINQ,
+	FLOW_KEY_IPV4_5TUPLE,
+	FLOW_KEY_IPV6_5TUPLE,
+};
+
+struct flow_key_qinq {
+	uint16_t svlan;
+	uint16_t cvlan;
+};
+
+struct flow_key_ipv4_5tuple {
+	uint32_t ip_src;
+	uint32_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+};
+
+struct flow_key_ipv6_5tuple {
+	uint8_t ip_src[16];
+	uint8_t ip_dst[16];
+	uint16_t port_src;
+	uint16_t port_dst;
+	uint32_t proto;
+};
+
+struct pipeline_fc_key {
+	enum flow_key_type type;
+	union {
+		struct flow_key_qinq qinq;
+		struct flow_key_ipv4_5tuple ipv4_5tuple;
+		struct flow_key_ipv6_5tuple ipv6_5tuple;
+	} key;
+};
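+
+/*
+ * Example (illustrative) of how a caller builds a key for the functions
+ * declared below; the key is zeroed first so that unused union members do
+ * not carry stale bytes into the lookup:
+ *
+ *   struct pipeline_fc_key key;
+ *
+ *   memset(&key, 0, sizeof(key));
+ *   key.type = FLOW_KEY_QINQ;
+ *   key.key.qinq.svlan = 64;
+ *   key.key.qinq.cvlan = 100;
+ *   status = app_pipeline_fc_add(app, pipeline_id, &key, port_id);
+ */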
+
+int
+app_pipeline_fc_add(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key,
+	uint32_t port_id);
+
+int
+app_pipeline_fc_add_bulk(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key,
+	uint32_t *port_id,
+	uint32_t n_keys);
+
+int
+app_pipeline_fc_del(struct app_params *app,
+	uint32_t pipeline_id,
+	struct pipeline_fc_key *key);
+
+int
+app_pipeline_fc_add_default(struct app_params *app,
+	uint32_t pipeline_id,
+	uint32_t port_id);
+
+int
+app_pipeline_fc_del_default(struct app_params *app,
+	uint32_t pipeline_id);
+
+
+extern struct pipeline_type pipeline_flow_classification;
+
+#endif
diff --git a/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.c b/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.c
new file mode 100644
index 0000000..25ae694
--- /dev/null
+++ b/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.c
@@ -0,0 +1,569 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_table_hash.h>
+#include <rte_byteorder.h>
+
+#include "pipeline_flow_classification_be.h"
+#include "hash_func.h"
+
+struct pipeline_flow_classification {
+	struct pipeline p;
+	pipeline_msg_req_handler custom_handlers[PIPELINE_FC_MSG_REQS];
+
+	uint32_t n_flows;
+	uint32_t key_offset;
+	uint32_t key_size;
+	uint32_t hash_offset;
+} __rte_cache_aligned;
+
+static void *
+pipeline_fc_msg_req_custom_handler(struct pipeline *p, void *msg);
+
+static pipeline_msg_req_handler handlers[] = {
+	[PIPELINE_MSG_REQ_PING] = pipeline_msg_req_ping_handler,
+	[PIPELINE_MSG_REQ_STATS_PORT_IN] = pipeline_msg_req_stats_port_in_handler,
+	[PIPELINE_MSG_REQ_STATS_PORT_OUT] = pipeline_msg_req_stats_port_out_handler,
+	[PIPELINE_MSG_REQ_STATS_TABLE] = pipeline_msg_req_stats_table_handler,
+	[PIPELINE_MSG_REQ_PORT_IN_ENABLE] = pipeline_msg_req_port_in_enable_handler,
+	[PIPELINE_MSG_REQ_PORT_IN_DISABLE] = pipeline_msg_req_port_in_disable_handler,
+	[PIPELINE_MSG_REQ_CUSTOM] = pipeline_fc_msg_req_custom_handler,
+};
+
+static void *
+pipeline_fc_msg_req_add_handler(struct pipeline *p, void *msg);
+
+static void *
+pipeline_fc_msg_req_add_bulk_handler(struct pipeline *p, void *msg);
+
+static void *
+pipeline_fc_msg_req_del_handler(struct pipeline *p, void *msg);
+
+static void *
+pipeline_fc_msg_req_add_default_handler(struct pipeline *p, void *msg);
+
+static void *
+pipeline_fc_msg_req_del_default_handler(struct pipeline *p, void *msg);
+
+static pipeline_msg_req_handler custom_handlers[] = {
+	[PIPELINE_FC_MSG_REQ_FLOW_ADD] = pipeline_fc_msg_req_add_handler,
+	[PIPELINE_FC_MSG_REQ_FLOW_ADD_BULK] = pipeline_fc_msg_req_add_bulk_handler,
+	[PIPELINE_FC_MSG_REQ_FLOW_DEL] = pipeline_fc_msg_req_del_handler,
+	[PIPELINE_FC_MSG_REQ_FLOW_ADD_DEFAULT] = pipeline_fc_msg_req_add_default_handler,
+	[PIPELINE_FC_MSG_REQ_FLOW_DEL_DEFAULT] = pipeline_fc_msg_req_del_default_handler,
+};
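+
+/*
+ * Messages of type PIPELINE_MSG_REQ_CUSTOM are dispatched by subtype through
+ * the table above (see pipeline_fc_msg_req_custom_handler() below).
+ */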
+
+/*
+ * Flow table
+ */
+struct flow_table_entry {
+	struct rte_pipeline_table_entry head;
+};
+
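+/*
+ * Hash function selected by key size: entry (key_size / 8) - 1 is used, so
+ * key sizes of 8, 16, ..., 64 bytes are supported (enforced by
+ * pipeline_fc_parse_args()).
+ */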
+rte_table_hash_op_hash hash_func[] = {
+	hash_default_key8,
+	hash_default_key16,
+	hash_default_key24,
+	hash_default_key32,
+	hash_default_key40,
+	hash_default_key48,
+	hash_default_key56,
+	hash_default_key64
+};
+
+static int
+pipeline_fc_parse_args(struct pipeline_flow_classification *p,
+	struct pipeline_params *params)
+{
+	uint32_t n_flows_present = 0;
+	uint32_t key_offset_present = 0;
+	uint32_t key_size_present = 0;
+	uint32_t hash_offset_present = 0;
+
+	uint32_t i;
+
+	for (i = 0; i < params->n_args; i++) {
+		char *arg_name = params->args_name[i];
+		char *arg_value = params->args_value[i];
+
+		/* n_flows */
+		if (strcmp(arg_name, "n_flows") == 0) {
+			if (n_flows_present)
+				return -1;
+			n_flows_present = 1;
+
+			p->n_flows = atoi(arg_value);
+			if (p->n_flows == 0)
+				return -1;
+
+			continue;
+		}
+
+		/* key_offset */
+		if (strcmp(arg_name, "key_offset") == 0) {
+			if (key_offset_present)
+				return -1;
+			key_offset_present = 1;
+
+			p->key_offset = atoi(arg_value);
+
+			continue;
+		}
+
+		/* key_size */
+		if (strcmp(arg_name, "key_size") == 0) {
+			if (key_size_present)
+				return -1;
+			key_size_present = 1;
+
+			p->key_size = atoi(arg_value);
+			if ((p->key_size == 0) ||
+				(p->key_size > PIPELINE_FC_FLOW_KEY_MAX_SIZE) ||
+				(p->key_size % 8))
+				return -1;
+
+			continue;
+		}
+
+		/* hash_offset */
+		if (strcmp(arg_name, "hash_offset") == 0) {
+			if (hash_offset_present)
+				return -1;
+			hash_offset_present = 1;
+
+			p->hash_offset = atoi(arg_value);
+
+			continue;
+		}
+
+		/* Unknown argument */
+		return -1;
+	}
+
+	/* Check that mandatory arguments are present */
+	if ((n_flows_present == 0) ||
+		(key_offset_present == 0) ||
+		(key_size_present == 0) ||
+		(hash_offset_present == 0))
+		return -1;
+
+	return 0;
+}
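+
+/*
+ * Example (illustrative) of the pipeline-specific configuration entries
+ * consumed by the parser above; all four are mandatory and the values shown
+ * are only an example:
+ *
+ *   [PIPELINE2]
+ *   type = FLOW_CLASSIFICATION
+ *   ...
+ *   n_flows = 4096
+ *   key_size = 8
+ *   key_offset = 64
+ *   hash_offset = 80
+ */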
+
+static void *
+pipeline_fc_init(struct pipeline_params *params,
+	__rte_unused void *arg)
+{
+	struct pipeline *p;
+	struct pipeline_flow_classification *p_fc;
+	uint32_t size, i;
+
+	/* Check input arguments */
+	if (params == NULL)
+		return NULL;
+
+	/* Memory allocation */
+	size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct pipeline_flow_classification));
+	p = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+	if (p == NULL)
+		return NULL;
+	p_fc = (struct pipeline_flow_classification *) p;
+
+	strcpy(p->name, params->name);
+	p->log_level = params->log_level;
+
+	PLOG(p, HIGH, "Flow classification");
+
+	/* Parse arguments */
+	if (pipeline_fc_parse_args(p_fc, params))
+		return NULL;
+
+	/* Pipeline */
+	{
+		struct rte_pipeline_params pipeline_params = {
+			.name = params->name,
+			.socket_id = params->socket_id,
+			.offset_port_id = 0,
+		};
+
+		p->p = rte_pipeline_create(&pipeline_params);
+		if (p->p == NULL) {
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Input ports */
+	p->n_ports_in = params->n_ports_in;
+	for (i = 0; i < p->n_ports_in; i++) {
+		struct rte_pipeline_port_in_params port_params = {
+			.ops = pipeline_port_in_params_get_ops(&params->port_in[i]),
+			.arg_create = pipeline_port_in_params_convert(&params->port_in[i]),
+			.f_action = NULL,
+			.arg_ah = NULL,
+			.burst_size = params->port_in[i].burst_size,
+		};
+
+		int status = rte_pipeline_port_in_create(p->p,
+			&port_params,
+			&p->port_in_id[i]);
+
+		if (status) {
+			rte_pipeline_free(p->p);
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Output ports */
+	p->n_ports_out = params->n_ports_out;
+	for (i = 0; i < p->n_ports_out; i++) {
+		struct rte_pipeline_port_out_params port_params = {
+			.ops = pipeline_port_out_params_get_ops(&params->port_out[i]),
+			.arg_create = pipeline_port_out_params_convert(&params->port_out[i]),
+			.f_action = NULL,
+			.f_action_bulk = NULL,
+			.arg_ah = NULL,
+		};
+
+		int status = rte_pipeline_port_out_create(p->p,
+			&port_params,
+			&p->port_out_id[i]);
+
+		if (status) {
+			rte_pipeline_free(p->p);
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Tables */
+	p->n_tables = 1;
+	{
+		struct rte_table_hash_key8_ext_params table_hash_key8_params = {
+			.n_entries = p_fc->n_flows,
+			.n_entries_ext = p_fc->n_flows,
+			.signature_offset = p_fc->hash_offset,
+			.key_offset = p_fc->key_offset,
+			.f_hash = hash_func[(p_fc->key_size / 8) - 1],
+			.seed = 0,
+		};
+
+		struct rte_table_hash_key16_ext_params table_hash_key16_params = {
+			.n_entries = p_fc->n_flows,
+			.n_entries_ext = p_fc->n_flows,
+			.signature_offset = p_fc->hash_offset,
+			.key_offset = p_fc->key_offset,
+			.f_hash = hash_func[(p_fc->key_size / 8) - 1],
+			.seed = 0,
+		};
+
+		struct rte_table_hash_ext_params table_hash_params = {
+			.key_size = p_fc->key_size,
+			.n_keys = p_fc->n_flows,
+			.n_buckets = p_fc->n_flows / 4,
+			.n_buckets_ext = p_fc->n_flows / 4,
+			.f_hash = hash_func[(p_fc->key_size / 8) - 1],
+			.seed = 0,
+			.signature_offset = p_fc->hash_offset,
+			.key_offset = p_fc->key_offset,
+		};
+
+		struct rte_pipeline_table_params table_params = {
+			.ops = NULL, /* set below */
+			.arg_create = NULL, /* set below */
+			.f_action_hit = NULL,
+			.f_action_miss = NULL,
+			.arg_ah = NULL,
+			.action_data_size = sizeof(struct flow_table_entry) -
+				sizeof(struct rte_pipeline_table_entry),
+		};
+
+		int status;
+
+		/* The key8/key16 params above are _ext, so use the matching _ext ops */
+		switch (p_fc->key_size) {
+		case 8:
+			table_params.ops = &rte_table_hash_key8_ext_ops;
+			table_params.arg_create = &table_hash_key8_params;
+			break;
+
+		case 16:
+			table_params.ops = &rte_table_hash_key16_ext_ops;
+			table_params.arg_create = &table_hash_key16_params;
+			break;
+
+		default:
+			table_params.ops = &rte_table_hash_ext_ops;
+			table_params.arg_create = &table_hash_params;
+		}
+
+		status = rte_pipeline_table_create(p->p,
+			&table_params,
+			&p->table_id[0]);
+
+		if (status) {
+			rte_pipeline_free(p->p);
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Connecting input ports to tables */
+	for (i = 0; i < p->n_ports_in; i++) {
+		int status = rte_pipeline_port_in_connect_to_table(p->p,
+			p->port_in_id[i],
+			p->table_id[0]);
+
+		if (status) {
+			rte_pipeline_free(p->p);
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Enable input ports */
+	for (i = 0; i < p->n_ports_in; i++) {
+		int status = rte_pipeline_port_in_enable(p->p,
+			p->port_in_id[i]);
+
+		if (status) {
+			rte_pipeline_free(p->p);
+			rte_free(p);
+			return NULL;
+		}
+	}
+
+	/* Check pipeline consistency */
+	if (rte_pipeline_check(p->p) < 0) {
+		rte_pipeline_free(p->p);
+		rte_free(p);
+		return NULL;
+	}
+
+	/* Message queues */
+	p->n_msgq = params->n_msgq;
+	for (i = 0; i < p->n_msgq; i++)
+		p->msgq_in[i] = params->msgq_in[i];
+	for (i = 0; i < p->n_msgq; i++)
+		p->msgq_out[i] = params->msgq_out[i];
+
+	/* Message handlers */
+	memcpy(p->handlers, handlers, sizeof(p->handlers));
+	memcpy(p_fc->custom_handlers,
+		custom_handlers,
+		sizeof(p_fc->custom_handlers));
+
+	return p;
+}
+
+static int
+pipeline_fc_free(void *pipeline)
+{
+	struct pipeline *p = (struct pipeline *) pipeline;
+
+	/* Check input arguments */
+	if (p == NULL)
+		return -1;
+
+	/* Free resources */
+	rte_pipeline_free(p->p);
+	rte_free(p);
+	return 0;
+}
+
+static int
+pipeline_fc_track(void *pipeline,
+	uint32_t port_in,
+	uint32_t *port_out)
+{
+	struct pipeline *p = (struct pipeline *) pipeline;
+
+	/* Check input arguments */
+	if ((p == NULL) ||
+		(port_in >= p->n_ports_in) ||
+		(port_out == NULL))
+		return -1;
+
+	if (p->n_ports_in == 1) {
+		*port_out = 0;
+		return 0;
+	}
+
+	return -1;
+}
+
+static int
+pipeline_fc_timer(void *pipeline)
+{
+	struct pipeline *p = (struct pipeline *) pipeline;
+
+	pipeline_msg_req_handle(p);
+	rte_pipeline_flush(p->p);
+
+	return 0;
+}
+
+static void *
+pipeline_fc_msg_req_custom_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_flow_classification *p_fc =
+			(struct pipeline_flow_classification *) p;
+	struct pipeline_custom_msg_req *req = msg;
+	pipeline_msg_req_handler f_handle;
+
+	f_handle = (req->subtype < PIPELINE_FC_MSG_REQS) ?
+		p_fc->custom_handlers[req->subtype] :
+		pipeline_msg_req_invalid_handler;
+
+	if (f_handle == NULL)
+		f_handle = pipeline_msg_req_invalid_handler;
+
+	return f_handle(p, req);
+}
+
+static void *
+pipeline_fc_msg_req_add_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_fc_add_msg_req *req = msg;
+	struct pipeline_fc_add_msg_rsp *rsp = msg;
+
+	struct flow_table_entry entry = {
+		.head = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = p->port_out_id[req->port_id]},
+		},
+	};
+
+	rsp->status = rte_pipeline_table_entry_add(p->p,
+		p->table_id[0],
+		&req->key,
+		(struct rte_pipeline_table_entry *) &entry,
+		&rsp->key_found,
+		(struct rte_pipeline_table_entry **) &rsp->entry_ptr);
+
+	return rsp;
+}
+
+static void *
+pipeline_fc_msg_req_add_bulk_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_fc_add_bulk_msg_req *req = msg;
+	struct pipeline_fc_add_bulk_msg_rsp *rsp = msg;
+	uint32_t i;
+
+	for (i = 0; i < req->n_keys; i++) {
+		struct pipeline_fc_add_bulk_flow_req *flow_req = &req->req[i];
+		struct pipeline_fc_add_bulk_flow_rsp *flow_rsp = &req->rsp[i];
+
+		struct flow_table_entry entry = {
+			.head = {
+				.action = RTE_PIPELINE_ACTION_PORT,
+				{.port_id = p->port_out_id[flow_req->port_id]},
+			},
+		};
+
+		int status = rte_pipeline_table_entry_add(p->p,
+			p->table_id[0],
+			&flow_req->key,
+			(struct rte_pipeline_table_entry *) &entry,
+			&flow_rsp->key_found,
+			(struct rte_pipeline_table_entry **) &flow_rsp->entry_ptr);
+
+		if (status)
+			break;
+	}
+
+	rsp->n_keys = i;
+
+	return rsp;
+}
+
+static void *
+pipeline_fc_msg_req_del_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_fc_del_msg_req *req = msg;
+	struct pipeline_fc_del_msg_rsp *rsp = msg;
+
+	rsp->status = rte_pipeline_table_entry_delete(p->p,
+		p->table_id[0],
+		&req->key,
+		&rsp->key_found,
+		NULL);
+
+	return rsp;
+}
+
+static void *
+pipeline_fc_msg_req_add_default_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_fc_add_default_msg_req *req = msg;
+	struct pipeline_fc_add_default_msg_rsp *rsp = msg;
+
+	struct flow_table_entry default_entry = {
+		.head = {
+			.action = RTE_PIPELINE_ACTION_PORT,
+			{.port_id = p->port_out_id[req->port_id]},
+		},
+	};
+
+	rsp->status = rte_pipeline_table_default_entry_add(p->p,
+		p->table_id[0],
+		(struct rte_pipeline_table_entry *) &default_entry,
+		(struct rte_pipeline_table_entry **) &rsp->entry_ptr);
+
+	return rsp;
+}
+
+static void *
+pipeline_fc_msg_req_del_default_handler(struct pipeline *p, void *msg)
+{
+	struct pipeline_fc_del_default_msg_rsp *rsp = msg;
+
+	rsp->status = rte_pipeline_table_default_entry_delete(p->p,
+		p->table_id[0],
+		NULL);
+
+	return rsp;
+}
+
+struct pipeline_be_ops pipeline_flow_classification_be_ops = {
+	.f_init = pipeline_fc_init,
+	.f_free = pipeline_fc_free,
+	.f_run = NULL,
+	.f_timer = pipeline_fc_timer,
+	.f_track = pipeline_fc_track,
+};
diff --git a/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.h b/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.h
new file mode 100644
index 0000000..46403d5
--- /dev/null
+++ b/examples/ip_pipeline/pipeline/pipeline_flow_classification_be.h
@@ -0,0 +1,140 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_PIPELINE_FLOW_CLASSIFICATION_BE_H__
+#define __INCLUDE_PIPELINE_FLOW_CLASSIFICATION_BE_H__
+
+#include "pipeline_common_be.h"
+
+enum pipeline_fc_msg_req_type {
+	PIPELINE_FC_MSG_REQ_FLOW_ADD = 0,
+	PIPELINE_FC_MSG_REQ_FLOW_ADD_BULK,
+	PIPELINE_FC_MSG_REQ_FLOW_DEL,
+	PIPELINE_FC_MSG_REQ_FLOW_ADD_DEFAULT,
+	PIPELINE_FC_MSG_REQ_FLOW_DEL_DEFAULT,
+	PIPELINE_FC_MSG_REQS,
+};
+
+#ifndef PIPELINE_FC_FLOW_KEY_MAX_SIZE
+#define PIPELINE_FC_FLOW_KEY_MAX_SIZE            64
+#endif
+
+/*
+ * MSG ADD
+ */
+struct pipeline_fc_add_msg_req {
+	enum pipeline_msg_req_type type;
+	enum pipeline_fc_msg_req_type subtype;
+
+	uint8_t key[PIPELINE_FC_FLOW_KEY_MAX_SIZE];
+
+	uint32_t port_id;
+};
+
+struct pipeline_fc_add_msg_rsp {
+	int status;
+	int key_found;
+	void *entry_ptr;
+};
+
+/*
+ * MSG ADD BULK
+ */
+struct pipeline_fc_add_bulk_flow_req {
+	uint8_t key[PIPELINE_FC_FLOW_KEY_MAX_SIZE];
+	uint32_t port_id;
+};
+
+struct pipeline_fc_add_bulk_flow_rsp {
+	int key_found;
+	void *entry_ptr;
+};
+
+struct pipeline_fc_add_bulk_msg_req {
+	enum pipeline_msg_req_type type;
+	enum pipeline_fc_msg_req_type subtype;
+
+	struct pipeline_fc_add_bulk_flow_req *req;
+	struct pipeline_fc_add_bulk_flow_rsp *rsp;
+	uint32_t n_keys;
+};
+
+struct pipeline_fc_add_bulk_msg_rsp {
+	uint32_t n_keys;
+};
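+
+/*
+ * Note: the req[] and rsp[] arrays are expected to be allocated by the
+ * caller; the back-end stops at the first failed insertion and reports in
+ * the response n_keys how many flows were actually added.
+ */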
+
+/*
+ * MSG DEL
+ */
+struct pipeline_fc_del_msg_req {
+	enum pipeline_msg_req_type type;
+	enum pipeline_fc_msg_req_type subtype;
+
+	uint8_t key[PIPELINE_FC_FLOW_KEY_MAX_SIZE];
+};
+
+struct pipeline_fc_del_msg_rsp {
+	int status;
+	int key_found;
+};
+
+/*
+ * MSG ADD DEFAULT
+ */
+struct pipeline_fc_add_default_msg_req {
+	enum pipeline_msg_req_type type;
+	enum pipeline_fc_msg_req_type subtype;
+
+	uint32_t port_id;
+};
+
+struct pipeline_fc_add_default_msg_rsp {
+	int status;
+	void *entry_ptr;
+};
+
+/*
+ * MSG DEL DEFAULT
+ */
+struct pipeline_fc_del_default_msg_req {
+	enum pipeline_msg_req_type type;
+	enum pipeline_fc_msg_req_type subtype;
+};
+
+struct pipeline_fc_del_default_msg_rsp {
+	int status;
+};
+
+extern struct pipeline_be_ops pipeline_flow_classification_be_ops;
+
+#endif
-- 
1.7.9.5

Thread overview: 13+ messages
2015-06-25 11:15 [dpdk-dev] [PATCH v2 00/11] ip_pipeline: ip_pipeline application enhancements Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 01/11] ip_pipeline: add parsing for config files with new syntax Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 02/11] ip_pipeline: added config checks Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 03/11] ip_pipeline: modified init to match new params struct Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 04/11] ip_pipeline: moved pipelines to separate folder Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 05/11] ip_pipeline: added master pipeline Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 06/11] ip_pipeline: added application thread Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 07/11] ip_pipeline: moved config files to separate folder Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 08/11] ip_pipeline: added new implementation of passthrough pipeline Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 09/11] ip_pipeline: added new implementation of firewall pipeline Maciej Gajdzica
2015-06-25 11:15 ` [dpdk-dev] [PATCH v2 10/11] ip_pipeline: added new implementation of routing pipeline Maciej Gajdzica
2015-06-25 11:15 ` Maciej Gajdzica [this message]
2015-06-25 12:19 ` [dpdk-dev] [PATCH v2 00/11] ip_pipeline: ip_pipeline application enhancements Dumitrescu, Cristian
