DPDK patches and discussions
* [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd
@ 2018-06-14 15:08 Nelio Laranjeiro
  2018-06-14 15:08 ` [dpdk-dev] [PATCH 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                   ` (4 more replies)
  0 siblings, 5 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-14 15:08 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

This series adds an easy and maintainable way to configure those two actions
for 18.08 by using global variables in testpmd to store the necessary
information for the tunnel encapsulation.  Those variables are used in
conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to easily
create the corresponding actions for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
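
Under the hood, each vxlan_encap/nvgre_encap token above is converted into a
regular rte_flow action.  A minimal sketch of the equivalent hand-written
VXLAN variant (requires <rte_flow.h>; spec/mask initialisation omitted):

 struct rte_flow_item def[] = {
 	{ .type = RTE_FLOW_ITEM_TYPE_ETH },   /* eth-src/eth-dst */
 	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* or IPV6: ip-src/ip-dst */
 	{ .type = RTE_FLOW_ITEM_TYPE_UDP },   /* udp-src/udp-dst */
 	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN }, /* vni */
 	{ .type = RTE_FLOW_ITEM_TYPE_END },
 };
 struct rte_flow_action_vxlan_encap conf = { .definition = def };
 struct rte_flow_action action = {
 	.type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
 	.conf = &conf,
 };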

This also replaces the proposal made by Mohammad Abdul Awal [1], which
handled the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited lifespan of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 169 +++++++++++++
 app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  24 ++
 app/test-pmd/testpmd.h                      |  28 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
 5 files changed, 494 insertions(+)

-- 
2.17.1


* [dpdk-dev] [PATCH 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
@ 2018-06-14 15:08 ` Nelio Laranjeiro
  2018-06-14 15:09 ` [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-14 15:08 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

Because the VXLAN_ENCAP flow action is complex and testpmd does not
allocate memory, this patch adds a new command in testpmd to initialise a
global structure containing the necessary information to build the outer
layers of the packet.  This same global structure is then used by the flow
command line in testpmd when the vxlan_encap action is parsed; at that
point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
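
As an illustration, on a little-endian host the 24-bit VNI conversion
performed by the new command boils down to (worked example for
"set vxlan ipv4 4 ..."):

 uint32_t vni = rte_cpu_to_be_32(4) >> 8; /* bytes in memory: 00 00 04 00 */
 memcpy(vxlan_encap_conf.vni, &vni, 3);   /* vni[] = {0x00, 0x00, 0x04} */

i.e. the VNI lands in network order in the three bytes expected by
struct rte_flow_item_vxlan.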

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      |  90 ++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 ++
 5 files changed, 261 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..a3b98b2f2 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14842,91 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;
+
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(vxlan_encap_conf.vni, &vni, 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17551,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 9918d7fda..9f609b7db 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -237,6 +237,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -256,6 +258,22 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -773,6 +791,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -896,6 +916,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2362,6 +2385,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2926,6 +2967,94 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &action_vxlan_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &action_vxlan_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &action_vxlan_encap_data->item_udp,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &action_vxlan_encap_data->item_vxlan,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth = { .type = 0, },
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[1] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &action_vxlan_encap_data->item_ipv6,
+		};
+	}
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 35cf26674..1c68c9d30 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,21 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.vni = "\x00\x00\x00",
+	.udp_src = RTE_BE16(1),
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..72c4e8d54 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,21 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..162d1c535 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,12 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
 
 Port Functions
 --------------
@@ -3650,6 +3656,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1


* [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
  2018-06-14 15:08 ` [dpdk-dev] [PATCH 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-14 15:09 ` Nelio Laranjeiro
  2018-06-15  9:32   ` Iremonger, Bernard
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-14 15:09 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

Because the NVGRE_ENCAP flow action is complex and testpmd does not
allocate memory, this patch adds a new command in testpmd to initialise a
global structure containing the necessary information to build the outer
layers of the packet.  This same global structure is then used by the flow
command line in testpmd when the nvgre_encap action is parsed; at that
point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
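
Compared to vxlan_encap, the definition list built for nvgre_encap is one
item shorter since NVGRE carries no UDP header.  A sketch of the default
stack set up by the parser (spec/mask initialisation omitted):

 struct rte_flow_item def[] = {
 	{ .type = RTE_FLOW_ITEM_TYPE_ETH },   /* eth-src/eth-dst */
 	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* or IPV6: ip-src/ip-dst */
 	{ .type = RTE_FLOW_ITEM_TYPE_NVGRE }, /* 24-bit tni */
 	{ .type = RTE_FLOW_ITEM_TYPE_END },
 };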

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      |  79 +++++++++++++
 app/test-pmd/cmdline_flow.c                 | 119 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |   9 ++
 app/test-pmd/testpmd.h                      |  13 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
 5 files changed, 233 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a3b98b2f2..588696d5c 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -785,6 +785,9 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14927,6 +14930,81 @@ cmdline_parse_inst_t cmd_set_vxlan = {
 	},
 };
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;
+
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(nvgre_encap_conf.tni, &tni, 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17552,6 +17630,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 9f609b7db..dd55056fd 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -274,6 +276,21 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 4
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -793,6 +810,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -919,6 +938,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2403,6 +2425,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3055,6 +3095,85 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &action_nvgre_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &action_nvgre_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &action_nvgre_encap_data->item_nvgre,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth = { .type = 0, },
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[1] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &action_nvgre_encap_data->item_ipv6,
+		};
+	}
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1c68c9d30..f54205949 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -408,6 +408,15 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 72c4e8d54..7871b93e1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -494,6 +494,19 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 162d1c535..0ee497f11 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1541,6 +1541,13 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
 
 testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+
 Port Functions
 --------------
 
@@ -3662,6 +3669,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation, outer layer configuration
+  is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1


* Re: [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-14 15:09 ` [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-15  9:32   ` Iremonger, Bernard
  2018-06-15 11:25     ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Iremonger, Bernard @ 2018-06-15  9:32 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Lu, Wenzhuo, Wu, Jingjing
  Cc: Awal, Mohammad Abdul

Hi Nelio,

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Thursday, June 14, 2018 4:09 PM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>
> Subject: [PATCH 2/2] app/testpmd: add NVGRE encap/decap support
> 
> Because the NVGRE_ENCAP flow action is complex and testpmd does not
> allocate memory, this patch adds a new command in testpmd to initialise a
> global structure containing the necessary information to build the outer
> layers of the packet.  This same global structure is then used by the flow
> command line in testpmd when the nvgre_encap action is parsed; at that
> point, the conversion into such an action becomes trivial.
> 
> This global structure is only used for the encap action.
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      |  79 +++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 119 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |   9 ++
>  app/test-pmd/testpmd.h                      |  13 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
>  5 files changed, 233 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> a3b98b2f2..588696d5c 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -785,6 +785,9 @@ static void cmd_help_long_parsed(void
> *parsed_result,
>  			" eth-src eth-dst\n"
>  			"       Configure the VXLAN encapsulation for
> flows.\n\n"
> 
> +			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
> +			"       Configure the NVGRE encapsulation for
> flows.\n\n"
> +
>  			, list_pkt_forwarding_modes()
>  		);
>  	}
> @@ -14927,6 +14930,81 @@ cmdline_parse_inst_t cmd_set_vxlan = {
>  	},
>  };
> 
> +/** Set VXLAN encapsulation details */

VXLAN should be NVGRE.

> +struct cmd_set_nvgre_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t nvgre;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t tni;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_nvgre_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set,
> "set");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
> "nvgre");
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result,
> ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_num_t cmd_set_nvgre_tni =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni,
> UINT32);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result,
> eth_src);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result,
> eth_dst);
> +
> +static void cmd_set_nvgre_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_nvgre_result *res = parsed_result;
> +	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;
> +
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		nvgre_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		nvgre_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	memcpy(nvgre_encap_conf.tni, &tni, 3);
> +	if (nvgre_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src,
> nvgre_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst,
> nvgre_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src,
> nvgre_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst,
> nvgre_encap_conf.ipv6_dst);
> +	}
> +	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_nvgre = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
> +		" <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		NULL,
> +	},
> +};
> +
>  /* Strict link priority scheduling mode setting */  static void
> cmd_strict_link_prio_parsed( @@ -17552,6 +17630,7 @@
> cmdline_parse_ctx_t main_ctx[] = {
>  	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
>  #endif
>  	(cmdline_parse_inst_t *)&cmd_set_vxlan,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre,
>  	(cmdline_parse_inst_t *)&cmd_ddp_add,
>  	(cmdline_parse_inst_t *)&cmd_ddp_del,
>  	(cmdline_parse_inst_t *)&cmd_ddp_get_list, diff --git a/app/test-
> pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index
> 9f609b7db..dd55056fd 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -239,6 +239,8 @@ enum index {
>  	ACTION_OF_PUSH_MPLS_ETHERTYPE,
>  	ACTION_VXLAN_ENCAP,
>  	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>  };
> 
>  /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -274,6
> +276,21 @@ struct action_vxlan_encap_data {
>  	struct rte_flow_item_vxlan item_vxlan;  };
> 
> +/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
> +#define ACTION_NVGRE_ENCAP_ITEMS_NUM 4
> +
> +/** Storage for struct rte_flow_action_nvgre_encap including external
> +data. */ struct action_nvgre_encap_data {
> +	struct rte_flow_action_nvgre_encap conf;
> +	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_nvgre item_nvgre; };
> +
>  /** Maximum number of subsequent tokens and arguments on the stack.
> */  #define CTX_STACK_SIZE 16
> 
> @@ -793,6 +810,8 @@ static const enum index next_action[] = {
>  	ACTION_OF_PUSH_MPLS,
>  	ACTION_VXLAN_ENCAP,
>  	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>  	ZERO,
>  };
> 
> @@ -919,6 +938,9 @@ static int parse_vc_action_rss_queue(struct context
> *, const struct token *,  static int parse_vc_action_vxlan_encap(struct
> context *, const struct token *,
>  				       const char *, unsigned int, void *,
>  				       unsigned int);
> +static int parse_vc_action_nvgre_encap(struct context *, const struct token
> *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>  static int parse_destroy(struct context *, const struct token *,
>  			 const char *, unsigned int,
>  			 void *, unsigned int);
> @@ -2403,6 +2425,24 @@ static const struct token token_list[] = {
>  		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
>  		.call = parse_vc,
>  	},
> +	[ACTION_NVGRE_ENCAP] = {
> +		.name = "nvgre_encap",
> +		.help = "NVGRE encapsulation, uses configuration set by
> \"set"
> +			" nvgre\"",
> +		.priv = PRIV_ACTION(NVGRE_ENCAP,
> +				    sizeof(struct action_nvgre_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_nvgre_encap,
> +	},
> +	[ACTION_NVGRE_DECAP] = {
> +		.name = "nvgre_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the NVGRE tunnel network overlay from
> the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>  };
> 
>  /** Remove and return last entry from argument stack. */ @@ -3055,6
> +3095,85 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct
> token *token,
>  	return ret;
>  }
> 
> +/** Parse NVGRE encap action. */
> +static int
> +parse_vc_action_nvgre_encap(struct context *ctx, const struct token
> *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_nvgre_encap_data *action_nvgre_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_nvgre_encap_data = ctx->object;
> +	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
> +		.conf = (struct rte_flow_action_nvgre_encap){
> +			.definition = action_nvgre_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_nvgre_encap_data-
> >item_eth,
> +				.mask = &action_nvgre_encap_data-
> >item_eth,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_nvgre_encap_data-
> >item_ipv4,
> +				.mask = &action_nvgre_encap_data-
> >item_ipv4,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
> +				.spec = &action_nvgre_encap_data-
> >item_nvgre,
> +				.mask = &action_nvgre_encap_data-
> >item_nvgre,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth = { .type = 0, },
> +		.item_ipv4.hdr = {
> +		       .src_addr = nvgre_encap_conf.ipv4_src,
> +		       .dst_addr = nvgre_encap_conf.ipv4_dst,
> +		},
> +		.item_nvgre.flow_id = 0,
> +	};
> +	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
> +	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
> +	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!nvgre_encap_conf.select_ipv4) {
> +		memcpy(&action_nvgre_encap_data-
> >item_ipv6.hdr.src_addr,
> +		       &nvgre_encap_conf.ipv6_src,
> +		       sizeof(nvgre_encap_conf.ipv6_src));
> +		memcpy(&action_nvgre_encap_data-
> >item_ipv6.hdr.dst_addr,
> +		       &nvgre_encap_conf.ipv6_dst,
> +		       sizeof(nvgre_encap_conf.ipv6_dst));
> +		action_nvgre_encap_data->items[1] = (struct
> rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_nvgre_encap_data->item_ipv6,
> +			.mask = &action_nvgre_encap_data->item_ipv6,
> +		};
> +	}
> +	memcpy(action_nvgre_encap_data->item_nvgre.tni,
> nvgre_encap_conf.tni,
> +	       RTE_DIM(nvgre_encap_conf.tni));
> +	action->conf = &action_nvgre_encap_data->conf;
> +	return ret;
> +}
> +
>  /** Parse tokens for destroy command. */  static int  parse_destroy(struct
> context *ctx, const struct token *token, diff --git a/app/test-pmd/testpmd.c
> b/app/test-pmd/testpmd.c index 1c68c9d30..f54205949 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -408,6 +408,15 @@ struct vxlan_encap_conf vxlan_encap_conf = {
>  	.eth_dst = "\xff\xff\xff\xff\xff\xff",  };
> 
> +struct nvgre_encap_conf nvgre_encap_conf = {
> +	.select_ipv4 = 1,
> +	.tni = "\x00\x00\x00",
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),

Should there be .ipv6_src and .ipv6_dst here?

> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff", };
> +
>  /* Forward function declarations */
>  static void map_port_queue_stats_mapping_registers(portid_t pi,
>  						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> 72c4e8d54..7871b93e1 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -494,6 +494,19 @@ struct vxlan_encap_conf {  };  struct
> vxlan_encap_conf vxlan_encap_conf;
> 
> +/* NVGRE encap/decap parameters. */
> +struct nvgre_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint8_t tni[3];
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct nvgre_encap_conf nvgre_encap_conf;
> +
>  static inline unsigned int
>  lcore_num(void)
>  {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 162d1c535..0ee497f11 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1541,6 +1541,13 @@ Configure the outer layer to encapsulate a packet
> inside a VXLAN tunnel::
> 
> testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst)
> (mac-src) (mac-dst)
> 
> +Config NVGRE Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
> +
> + testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
> +
>  Port Functions
>  --------------
> 
> @@ -3662,6 +3669,12 @@ This section lists supported actions and their
> attributes, if any.
>  - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
>    the VXLAN tunnel network overlay from the matched flow.
> 
> +- ``nvgre_encap``: Performs an NVGRE encapsulation, outer layer
> +configuration
> +  is done through `Config NVGRE Encap outer layers`_.
> +
> +- ``nvgre_decap``: Performs a decapsulation action by stripping all
> +headers of
> +  the VXLAN tunnel network overlay from the matched flow.

VXLAN should be NVGRE.

> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> --
> 2.17.1

Regards,

Bernard.


* Re: [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-15  9:32   ` Iremonger, Bernard
@ 2018-06-15 11:25     ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-15 11:25 UTC (permalink / raw)
  To: Iremonger, Bernard
  Cc: dev, Adrien Mazarguil, Lu, Wenzhuo, Wu, Jingjing, Awal, Mohammad Abdul

Hi Bernard,

On Fri, Jun 15, 2018 at 09:32:02AM +0000, Iremonger, Bernard wrote:
> Hi Nelio,
> 
>[...]
> > @@ -14927,6 +14930,81 @@ cmdline_parse_inst_t cmd_set_vxlan = {
> >  	},
> >  };
> > 
> > +/** Set VXLAN encapsulation details */
> 
> VXLAN should be NVGRE.
>[...]

Right,

> > b/app/test-pmd/testpmd.c index 1c68c9d30..f54205949 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -408,6 +408,15 @@ struct vxlan_encap_conf vxlan_encap_conf = {
> >  	.eth_dst = "\xff\xff\xff\xff\xff\xff",  };
> > 
> > +struct nvgre_encap_conf nvgre_encap_conf = {
> > +	.select_ipv4 = 1,
> > +	.tni = "\x00\x00\x00",
> > +	.ipv4_src = IPv4(127, 0, 0, 1),
> > +	.ipv4_dst = IPv4(255, 255, 255, 255),
> 
> Should there be  .ipv6_src and .ipv6_dst here ?
>[...]

Yes indeed, the initialisation of IPv6 is missing.
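
Something along these lines, mirroring the VXLAN defaults (a sketch of
what v2 will add to the nvgre_encap_conf initialiser):

 .ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
 	"\x00\x00\x00\x00\x00\x00\x00\x01",
 .ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
 	"\x00\x00\x00\x00\x00\x00\x11\x11",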

> > +- ``nvgre_decap``: Performs a decapsulation action by stripping all
> > +headers of
> > +  the VXLAN tunnel network overlay from the matched flow.
> 
> VXLAN should be NVGRE.
> 
>[...]

Here also,

I will update it in a v2.

Thanks for your review,

-- 
Nélio Laranjeiro
6WIND


* [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
  2018-06-14 15:08 ` [dpdk-dev] [PATCH 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-14 15:09 ` [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-18  8:52 ` Nelio Laranjeiro
  2018-06-18  9:05   ` Ferruh Yigit
                     ` (3 more replies)
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN " Nelio Laranjeiro
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  4 siblings, 4 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18  8:52 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

This series adds an easy and maintainable way to configure those two actions
for 18.08 by using global variables in testpmd to store the necessary
information for the tunnel encapsulation.  Those variables are used in
conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to easily
create the corresponding actions for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

This also replaces the proposal made by Mohammad Abdul Awal [1], which
handled the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited lifespan of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html


Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN with NVGRE in comments concerning the NVGRE layer.

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 169 +++++++++++++
 app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  28 +++
 app/test-pmd/testpmd.h                      |  28 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
 5 files changed, 498 insertions(+)

-- 
2.17.1


* [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
                   ` (2 preceding siblings ...)
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
@ 2018-06-18  8:52 ` Nelio Laranjeiro
  2018-06-18 12:47   ` Mohammad Abdul Awal
  2018-06-18 21:02   ` Stephen Hemminger
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  4 siblings, 2 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18  8:52 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

Because the VXLAN_ENCAP flow action is complex and testpmd does not
allocate memory, this patch adds a new command in testpmd to initialise a
global structure containing the necessary information to build the outer
layers of the packet.  This same global structure is then used by the flow
command line in testpmd when the vxlan_encap action is parsed; at that
point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      |  90 ++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 ++
 5 files changed, 261 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..a3b98b2f2 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14842,91 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;
+
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(vxlan_encap_conf.vni, &vni, 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17551,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..4f4aba407 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,22 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +793,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +925,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2410,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2992,94 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &action_vxlan_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &action_vxlan_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &action_vxlan_encap_data->item_udp,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &action_vxlan_encap_data->item_vxlan,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth = { .type = 0, },
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[1] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &action_vxlan_encap_data->item_ipv6,
+		};
+	}
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 35cf26674..1c68c9d30 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,21 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.vni = "\x00\x00\x00",
+	.udp_src = RTE_BE16(1),
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..72c4e8d54 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,21 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..162d1c535 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,12 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
 
 Port Functions
 --------------
@@ -3650,6 +3656,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
                   ` (3 preceding siblings ...)
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN " Nelio Laranjeiro
@ 2018-06-18  8:52 ` Nelio Laranjeiro
  2018-06-18 12:48   ` Mohammad Abdul Awal
  4 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18  8:52 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger
  Cc: Mohammad Abdul Awal

Due to the complexity of the NVGRE_ENCAP flow action, and based on the fact
that testpmd does not allocate memory, this patch adds a new command in
testpmd to initialise a global structure containing the necessary
information to build the outer layer of the packet.  This same global
structure will then be used by the flow command line in testpmd when the
nvgre_encap action is parsed; at that point, the conversion into such an
action becomes trivial.

This global structure is only used for the encap action.
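
As an illustration of the TNI byte-order handling below (a minimal sketch,
assuming a little-endian host), the 24-bit identifier is byte-swapped as a
32-bit value and then shifted so that its three significant bytes come
first in memory:

	uint32_t tni = rte_cpu_to_be_32(0x00ABCDEF) >> 8;
	/* The bytes of tni are now AB CD EF 00, so copying the first
	 * three of them stores the TNI in network byte order. */
	memcpy(nvgre_encap_conf.tni, &tni, 3); /* tni[] = {0xAB, 0xCD, 0xEF} */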

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      |  79 +++++++++++++
 app/test-pmd/cmdline_flow.c                 | 119 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  13 +++
 app/test-pmd/testpmd.h                      |  13 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
 5 files changed, 237 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a3b98b2f2..7ea1e5792 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -785,6 +785,9 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14927,6 +14930,81 @@ cmdline_parse_inst_t cmd_set_vxlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;
+
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(nvgre_encap_conf.tni, &tni, 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <tni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17552,6 +17630,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4f4aba407..7fd5468a8 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -276,6 +278,21 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 4
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -795,6 +812,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -928,6 +947,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2428,6 +2450,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3080,6 +3120,85 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &action_nvgre_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &action_nvgre_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &action_nvgre_encap_data->item_nvgre,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth = { .type = 0, },
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[1] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &action_nvgre_encap_data->item_ipv6,
+		};
+	}
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1c68c9d30..97b4c4f9c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -408,6 +408,19 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 72c4e8d54..7871b93e1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -494,6 +494,19 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 162d1c535..59f5f6dad 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1541,6 +1541,13 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
 
  testpmd> set vxlan ipv4|ipv6 (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (ip-src) (ip-dst) (mac-src) (mac-dst)
+
 Port Functions
 --------------
 
@@ -3662,6 +3669,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration
+  is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
@ 2018-06-18  9:05   ` Ferruh Yigit
  2018-06-18  9:38     ` Nélio Laranjeiro
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 63+ messages in thread
From: Ferruh Yigit @ 2018-06-18  9:05 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger
  Cc: Mohammad Abdul Awal

On 6/18/2018 9:52 AM, Nelio Laranjeiro wrote:
> This series adds an easy and maintainable configuration version support for
> those two actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are used
> in conjunction with RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to easily
> create the action for flows.
> 
> A common way to use it:
> 
>  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
>  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
>  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> 
>  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> 
> This also replaces the proposal done by Mohammad Abdul Awal [1], which
> handles the same work in a more complex way.

Hi Nelio,

Is this set on top of the mentioned set? If so, shouldn't the set have
Awal's sign-off too?
Are you replacing someone else's patch and dropping his sign-off?

> 
> > Note this API already has a modification planned for 18.11 [2], thus this
> > series should have a limited life of a single release.
> 
> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> 
> 
> Changes in v2:
> 
> - add default IPv6 values for NVGRE encapsulation.
> - replace VXLAN to NVGRE in comments concerning NVGRE layer.
> 
> Nelio Laranjeiro (2):
>   app/testpmd: add VXLAN encap/decap support
>   app/testpmd: add NVGRE encap/decap support
> 
>  app/test-pmd/cmdline.c                      | 169 +++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  28 +++
>  app/test-pmd/testpmd.h                      |  28 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
>  5 files changed, 498 insertions(+)
> 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18  9:05   ` Ferruh Yigit
@ 2018-06-18  9:38     ` Nélio Laranjeiro
  2018-06-18 14:40       ` Ferruh Yigit
  0 siblings, 1 reply; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-18  9:38 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On Mon, Jun 18, 2018 at 10:05:03AM +0100, Ferruh Yigit wrote:
> On 6/18/2018 9:52 AM, Nelio Laranjeiro wrote:
> > This series adds an easy and maintainable configuration version support for
> > those two actions for 18.08 by using global variables in testpmd to store the
> > necessary information for the tunnel encapsulation.  Those variables are used
> > in conjunction with RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to easily
> > create the action for flows.
> > 
> > A common way to use it:
> > 
> >  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> >  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> >  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> >  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> > 
> >  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> >  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> > 
> > This also replaces the proposal done by Mohammad Abdul Awal [1], which
> > handles the same work in a more complex way.
> 
> Hi Nelio,
> 
> Is this set on top of the mentioned set?

Hi Ferruh,

No, it is another implementation of Declan's API.  It can be directly
applied on top of the current DPDK code without any other patch.

> If so, shouldn't the set have Awal's sign-off too?
> Are you replacing someone else's patch and dropping his sign-off?
>
> > Note this API already has a modification planned for 18.11 [2], thus this
> > series should have a limited life of a single release.
> > 
> > [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> > [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> > 
> > 
> > Changes in v2:
> > 
> > - add default IPv6 values for NVGRE encapsulation.
> > - replace VXLAN to NVGRE in comments concerning NVGRE layer.
> > 
> > Nelio Laranjeiro (2):
> >   app/testpmd: add VXLAN encap/decap support
> >   app/testpmd: add NVGRE encap/decap support
> > 
> >  app/test-pmd/cmdline.c                      | 169 +++++++++++++
> >  app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
> >  app/test-pmd/testpmd.c                      |  28 +++
> >  app/test-pmd/testpmd.h                      |  28 +++
> >  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
> >  5 files changed, 498 insertions(+)

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN " Nelio Laranjeiro
@ 2018-06-18 12:47   ` Mohammad Abdul Awal
  2018-06-18 21:02   ` Stephen Hemminger
  1 sibling, 0 replies; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-06-18 12:47 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger

Hi Nelio,


On 18/06/2018 09:52, Nelio Laranjeiro wrote:
> Due to the complexity of the VXLAN_ENCAP flow action, and based on the fact
> that testpmd does not allocate memory, this patch adds a new command in
> testpmd to initialise a global structure containing the necessary
> information to build the outer layer of the packet.  This same global
> structure will then be used by the flow command line in testpmd when the
> vxlan_encap action is parsed; at that point, the conversion into such an
> action becomes trivial.
>
> This global structure is only used for the encap action.
>
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>   app/test-pmd/cmdline.c                      |  90 ++++++++++++++
>   app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
>   app/test-pmd/testpmd.c                      |  15 +++
>   app/test-pmd/testpmd.h                      |  15 +++
>   doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 ++
>   5 files changed, 261 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 27e2aa8c8..a3b98b2f2 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -781,6 +781,10 @@ static void cmd_help_long_parsed(void *parsed_result,
>   			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
>   			"	Commit tm hierarchy.\n\n"
>   
> +			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
> +			" eth-src eth-dst\n"
> +			"       Configure the VXLAN encapsulation for flows.\n\n"
> +
Should there be support for an outer VLAN header, according to the definitions?
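
For instance (a hypothetical sketch, not part of this patch), the encap
definition could carry one extra entry between the ETH and IP items:

	{
		.type = RTE_FLOW_ITEM_TYPE_VLAN,
		.spec = &action_vxlan_encap_data->item_vlan,
		.mask = &action_vxlan_encap_data->item_vlan,
	},

assuming an item_vlan field (struct rte_flow_item_vlan) were added to
struct action_vxlan_encap_data.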

>   			, list_pkt_forwarding_modes()
>   		);
>   	}
> @@ -14838,6 +14842,91 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
>   };
>   #endif
>   
> +/** Set VXLAN encapsulation details */
> +struct cmd_set_vxlan_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t vxlan;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t vni;
> +	uint16_t udp_src;
> +	uint16_t udp_dst;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_vxlan_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
> +cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_num_t cmd_set_vxlan_vni =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
> +
> +static void cmd_set_vxlan_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_vxlan_result *res = parsed_result;
> +	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;
> +
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		vxlan_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		vxlan_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	memcpy(vxlan_encap_conf.vni, &vni, 3);
> +	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
> +	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
> +	if (vxlan_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
> +	}
> +	memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +	memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_vxlan = {
> +	.f = cmd_set_vxlan_parsed,
> +	.data = NULL,
> +	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
> +		" <ip-dst> <eth-src> <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_vxlan_set,
> +		(void *)&cmd_set_vxlan_vxlan,
> +		(void *)&cmd_set_vxlan_ip_version,
> +		(void *)&cmd_set_vxlan_vni,
> +		(void *)&cmd_set_vxlan_udp_src,
> +		(void *)&cmd_set_vxlan_udp_dst,
> +		(void *)&cmd_set_vxlan_ip_src,
> +		(void *)&cmd_set_vxlan_ip_dst,
> +		(void *)&cmd_set_vxlan_eth_src,
> +		(void *)&cmd_set_vxlan_eth_dst,
> +		NULL,
> +	},
> +};
> +
>   /* Strict link priority scheduling mode setting */
>   static void
>   cmd_strict_link_prio_parsed(
> @@ -17462,6 +17551,7 @@ cmdline_parse_ctx_t main_ctx[] = {
>   #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
>   	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
>   #endif
> +	(cmdline_parse_inst_t *)&cmd_set_vxlan,
>   	(cmdline_parse_inst_t *)&cmd_ddp_add,
>   	(cmdline_parse_inst_t *)&cmd_ddp_del,
>   	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 934cf7e90..4f4aba407 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -239,6 +239,8 @@ enum index {
>   	ACTION_OF_POP_MPLS_ETHERTYPE,
>   	ACTION_OF_PUSH_MPLS,
>   	ACTION_OF_PUSH_MPLS_ETHERTYPE,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>   };
>   
>   /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -258,6 +260,22 @@ struct action_rss_data {
>   	uint16_t queue[ACTION_RSS_QUEUE_NUM];
>   };
>   
> +/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
> +#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
> +
> +/** Storage for struct rte_flow_action_vxlan_encap including external data. */
> +struct action_vxlan_encap_data {
> +	struct rte_flow_action_vxlan_encap conf;
> +	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_udp item_udp;
> +	struct rte_flow_item_vxlan item_vxlan;
> +};
> +
>   /** Maximum number of subsequent tokens and arguments on the stack. */
>   #define CTX_STACK_SIZE 16
>   
> @@ -775,6 +793,8 @@ static const enum index next_action[] = {
>   	ACTION_OF_SET_VLAN_PCP,
>   	ACTION_OF_POP_MPLS,
>   	ACTION_OF_PUSH_MPLS,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>   	ZERO,
>   };
>   
> @@ -905,6 +925,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
>   static int parse_vc_action_rss_queue(struct context *, const struct token *,
>   				     const char *, unsigned int, void *,
>   				     unsigned int);
> +static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>   static int parse_destroy(struct context *, const struct token *,
>   			 const char *, unsigned int,
>   			 void *, unsigned int);
> @@ -2387,6 +2410,24 @@ static const struct token token_list[] = {
>   			      ethertype)),
>   		.call = parse_vc_conf,
>   	},
> +	[ACTION_VXLAN_ENCAP] = {
> +		.name = "vxlan_encap",
> +		.help = "VXLAN encapsulation, uses configuration set by \"set"
> +			" vxlan\"",
> +		.priv = PRIV_ACTION(VXLAN_ENCAP,
> +				    sizeof(struct action_vxlan_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_vxlan_encap,
> +	},
> +	[ACTION_VXLAN_DECAP] = {
> +		.name = "vxlan_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the VXLAN tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>   };
>   
>   /** Remove and return last entry from argument stack. */
> @@ -2951,6 +2992,94 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
>   	return len;
>   }
>   
> +/** Parse VXLAN encap action. */
> +static int
> +parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_vxlan_encap_data *action_vxlan_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_vxlan_encap_data = ctx->object;
> +	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
> +		.conf = (struct rte_flow_action_vxlan_encap){
> +			.definition = action_vxlan_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_vxlan_encap_data->item_eth,
> +				.mask = &action_vxlan_encap_data->item_eth,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_vxlan_encap_data->item_ipv4,
> +				.mask = &action_vxlan_encap_data->item_ipv4,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_UDP,
> +				.spec = &action_vxlan_encap_data->item_udp,
> +				.mask = &action_vxlan_encap_data->item_udp,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
> +				.spec = &action_vxlan_encap_data->item_vxlan,
> +				.mask = &action_vxlan_encap_data->item_vxlan,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth = { .type = 0, },
> +		.item_ipv4.hdr = {
> +			.src_addr = vxlan_encap_conf.ipv4_src,
> +			.dst_addr = vxlan_encap_conf.ipv4_dst,
> +		},
> +		.item_udp.hdr = {
> +			.src_port = vxlan_encap_conf.udp_src,
> +			.dst_port = vxlan_encap_conf.udp_dst,
> +		},
> +		.item_vxlan.flags = 0,
> +	};
> +	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
> +	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
> +	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!vxlan_encap_conf.select_ipv4) {
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
> +		       &vxlan_encap_conf.ipv6_src,
> +		       sizeof(vxlan_encap_conf.ipv6_src));
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
> +		       &vxlan_encap_conf.ipv6_dst,
> +		       sizeof(vxlan_encap_conf.ipv6_dst));
> +		action_vxlan_encap_data->items[1] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_vxlan_encap_data->item_ipv6,
> +			.mask = &action_vxlan_encap_data->item_ipv6,
> +		};
> +	}
> +	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
> +	       RTE_DIM(vxlan_encap_conf.vni));
> +	action->conf = &action_vxlan_encap_data->conf;
> +	return ret;
> +}
> +
>   /** Parse tokens for destroy command. */
>   static int
>   parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 35cf26674..1c68c9d30 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -393,6 +393,21 @@ uint8_t bitrate_enabled;
>   struct gro_status gro_ports[RTE_MAX_ETHPORTS];
>   uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
>   
> +struct vxlan_encap_conf vxlan_encap_conf = {
> +	.select_ipv4 = 1,
> +	.vni = "\x00\x00\x00",
> +	.udp_src = RTE_BE16(1),
> +	.udp_dst = RTE_BE16(4789),
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>   /* Forward function declarations */
>   static void map_port_queue_stats_mapping_registers(portid_t pi,
>   						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index f51cd9dd9..72c4e8d54 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -479,6 +479,21 @@ struct gso_status {
>   extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
>   extern uint16_t gso_max_segment_size;
>   
> +/* VXLAN encap/decap parameters. */
> +struct vxlan_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint8_t vni[3];
> +	rte_be16_t udp_src;
> +	rte_be16_t udp_dst;
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct vxlan_encap_conf vxlan_encap_conf;
> +
>   static inline unsigned int
>   lcore_num(void)
>   {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0d6fd50ca..162d1c535 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1534,6 +1534,12 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
>   
>   This command should be run when the port is stopped, or else it will fail.
>   
> +Config VXLAN Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
> +
> + testpmd> set vxlan ipv4|ipv6 (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
^^^^^^
vxlan vni is missing? Also the VLAN tag id?
>   
>   Port Functions
>   --------------
> @@ -3650,6 +3656,12 @@ This section lists supported actions and their attributes, if any.
>   
>     - ``ethertype``: Ethertype.
>   
> +- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
> +  is done through `Config VXLAN Encap outer layers`_.
> +
> +- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
> +  the VXLAN tunnel network overlay from the matched flow.
> +
>   Destroying flow rules
>   ~~~~~~~~~~~~~~~~~~~~~
>   

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-18 12:48   ` Mohammad Abdul Awal
  0 siblings, 0 replies; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-06-18 12:48 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger



On 18/06/2018 09:52, Nelio Laranjeiro wrote:
> Due to the complexity of the NVGRE_ENCAP flow action, and based on the fact
> that testpmd does not allocate memory, this patch adds a new command in
> testpmd to initialise a global structure containing the necessary
> information to build the outer layer of the packet.  This same global
> structure will then be used by the flow command line in testpmd when the
> nvgre_encap action is parsed; at that point, the conversion into such an
> action becomes trivial.
>
> This global structure is only used for the encap action.
>
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>   app/test-pmd/cmdline.c                      |  79 +++++++++++++
>   app/test-pmd/cmdline_flow.c                 | 119 ++++++++++++++++++++
>   app/test-pmd/testpmd.c                      |  13 +++
>   app/test-pmd/testpmd.h                      |  13 +++
>   doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
>   5 files changed, 237 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index a3b98b2f2..7ea1e5792 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -785,6 +785,9 @@ static void cmd_help_long_parsed(void *parsed_result,
>   			" eth-src eth-dst\n"
>   			"       Configure the VXLAN encapsulation for flows.\n\n"
>   
> +			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
> +			"       Configure the NVGRE encapsulation for flows.\n\n"
Should there be support for an outer VLAN header, according to the definitions?
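(The same hypothetical VLAN item sketched in the VXLAN patch review would
apply here, between the ETH and IP entries of the NVGRE definition.)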
> +
>   			, list_pkt_forwarding_modes()
>   		);
>   	}
> @@ -14927,6 +14930,81 @@ cmdline_parse_inst_t cmd_set_vxlan = {
>   	},
>   };
>   
> +/** Set NVGRE encapsulation details */
> +struct cmd_set_nvgre_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t nvgre;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t tni;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_nvgre_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_num_t cmd_set_nvgre_tni =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
> +
> +static void cmd_set_nvgre_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_nvgre_result *res = parsed_result;
> +	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;
> +
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		nvgre_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		nvgre_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	memcpy(nvgre_encap_conf.tni, &tni, 3);
> +	if (nvgre_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
> +	}
> +	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_nvgre = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre ipv4|ipv6 <tni> <ip-src> <ip-dst> <eth-src>"
> +		" <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		NULL,
> +	},
> +};
> +
>   /* Strict link priority scheduling mode setting */
>   static void
>   cmd_strict_link_prio_parsed(
> @@ -17552,6 +17630,7 @@ cmdline_parse_ctx_t main_ctx[] = {
>   	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
>   #endif
>   	(cmdline_parse_inst_t *)&cmd_set_vxlan,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre,
>   	(cmdline_parse_inst_t *)&cmd_ddp_add,
>   	(cmdline_parse_inst_t *)&cmd_ddp_del,
>   	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 4f4aba407..7fd5468a8 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -241,6 +241,8 @@ enum index {
>   	ACTION_OF_PUSH_MPLS_ETHERTYPE,
>   	ACTION_VXLAN_ENCAP,
>   	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>   };
>   
>   /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -276,6 +278,21 @@ struct action_vxlan_encap_data {
>   	struct rte_flow_item_vxlan item_vxlan;
>   };
>   
> +/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
> +#define ACTION_NVGRE_ENCAP_ITEMS_NUM 4
> +
> +/** Storage for struct rte_flow_action_nvgre_encap including external data. */
> +struct action_nvgre_encap_data {
> +	struct rte_flow_action_nvgre_encap conf;
> +	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_nvgre item_nvgre;
> +};
> +
>   /** Maximum number of subsequent tokens and arguments on the stack. */
>   #define CTX_STACK_SIZE 16
>   
> @@ -795,6 +812,8 @@ static const enum index next_action[] = {
>   	ACTION_OF_PUSH_MPLS,
>   	ACTION_VXLAN_ENCAP,
>   	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>   	ZERO,
>   };
>   
> @@ -928,6 +947,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
>   static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
>   				       const char *, unsigned int, void *,
>   				       unsigned int);
> +static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>   static int parse_destroy(struct context *, const struct token *,
>   			 const char *, unsigned int,
>   			 void *, unsigned int);
> @@ -2428,6 +2450,24 @@ static const struct token token_list[] = {
>   		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
>   		.call = parse_vc,
>   	},
> +	[ACTION_NVGRE_ENCAP] = {
> +		.name = "nvgre_encap",
> +		.help = "NVGRE encapsulation, uses configuration set by \"set"
> +			" nvgre\"",
> +		.priv = PRIV_ACTION(NVGRE_ENCAP,
> +				    sizeof(struct action_nvgre_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_nvgre_encap,
> +	},
> +	[ACTION_NVGRE_DECAP] = {
> +		.name = "nvgre_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the NVGRE tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>   };
>   
>   /** Remove and return last entry from argument stack. */
> @@ -3080,6 +3120,85 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
>   	return ret;
>   }
>   
> +/** Parse NVGRE encap action. */
> +static int
> +parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_nvgre_encap_data *action_nvgre_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_nvgre_encap_data = ctx->object;
> +	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
> +		.conf = (struct rte_flow_action_nvgre_encap){
> +			.definition = action_nvgre_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_nvgre_encap_data->item_eth,
> +				.mask = &action_nvgre_encap_data->item_eth,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_nvgre_encap_data->item_ipv4,
> +				.mask = &action_nvgre_encap_data->item_ipv4,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
> +				.spec = &action_nvgre_encap_data->item_nvgre,
> +				.mask = &action_nvgre_encap_data->item_nvgre,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth = { .type = 0, },
> +		.item_ipv4.hdr = {
> +		       .src_addr = nvgre_encap_conf.ipv4_src,
> +		       .dst_addr = nvgre_encap_conf.ipv4_dst,
> +		},
> +		.item_nvgre.flow_id = 0,
> +	};
> +	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
> +	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
> +	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!nvgre_encap_conf.select_ipv4) {
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
> +		       &nvgre_encap_conf.ipv6_src,
> +		       sizeof(nvgre_encap_conf.ipv6_src));
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
> +		       &nvgre_encap_conf.ipv6_dst,
> +		       sizeof(nvgre_encap_conf.ipv6_dst));
> +		action_nvgre_encap_data->items[1] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_nvgre_encap_data->item_ipv6,
> +			.mask = &action_nvgre_encap_data->item_ipv6,
> +		};
> +	}
> +	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
> +	       RTE_DIM(nvgre_encap_conf.tni));
> +	action->conf = &action_nvgre_encap_data->conf;
> +	return ret;
> +}
> +
>   /** Parse tokens for destroy command. */
>   static int
>   parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1c68c9d30..97b4c4f9c 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -408,6 +408,19 @@ struct vxlan_encap_conf vxlan_encap_conf = {
>   	.eth_dst = "\xff\xff\xff\xff\xff\xff",
>   };
>   
> +struct nvgre_encap_conf nvgre_encap_conf = {
> +	.select_ipv4 = 1,
> +	.tni = "\x00\x00\x00",
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>   /* Forward function declarations */
>   static void map_port_queue_stats_mapping_registers(portid_t pi,
>   						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 72c4e8d54..7871b93e1 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -494,6 +494,19 @@ struct vxlan_encap_conf {
>   };
>   struct vxlan_encap_conf vxlan_encap_conf;
>   
> +/* NVGRE encap/decap parameters. */
> +struct nvgre_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint8_t tni[3];
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct nvgre_encap_conf nvgre_encap_conf;
> +
>   static inline unsigned int
>   lcore_num(void)
>   {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 162d1c535..59f5f6dad 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1541,6 +1541,13 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
>   
>    testpmd> set vxlan ipv4|ipv6 (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
>   
> +Config NVGRE Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a NVGRE tunnel::
> +
> + testpmd> set nvgre ipv4|ipv6 (ip-src) (ip-dst) (mac-src) (mac-dst)
                                                              ^^^^^^
nvgre tni is missing? Also the VLAN tag id?
> +
>   Port Functions
>   --------------
>   
> @@ -3662,6 +3669,12 @@ This section lists supported actions and their attributes, if any.
>   - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
>     the VXLAN tunnel network overlay from the matched flow.
>   
> +- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration
> +  is done through `Config NVGRE Encap outer layers`_.
> +
> +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
> +  the NVGRE tunnel network overlay from the matched flow.
> +
>   Destroying flow rules
>   ~~~~~~~~~~~~~~~~~~~~~
>   

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v3 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2018-06-18  9:05   ` Ferruh Yigit
@ 2018-06-18 14:36   ` Nelio Laranjeiro
  2018-06-18 16:28     ` Iremonger, Bernard
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  3 siblings, 2 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18 14:36 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

This series adds easy and maintainable configuration support for
those two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to easily
create the actions for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

This also replaces the proposal from Mohammad Abdul Awal [1], which handles
the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the missing arguments in the documentation.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN with NVGRE in comments concerning the NVGRE layer.

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 242 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 268 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 +++
 app/test-pmd/testpmd.h                      |  32 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  27 ++
 5 files changed, 601 insertions(+)

-- 
2.17.1

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2018-06-18  9:05   ` Ferruh Yigit
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
@ 2018-06-18 14:36   ` Nelio Laranjeiro
  2018-06-19  7:09     ` Ori Kam
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  3 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18 14:36 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

Due to the complexity of the VXLAN_ENCAP flow action and the fact that
testpmd does not allocate memory, this patch adds a new command in testpmd
to initialise a global structure containing the necessary information to
build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the vxlan_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 129 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  17 +++
 app/test-pmd/testpmd.h                      |  17 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 ++
 5 files changed, 315 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..93573606f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" vlan-tci eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14846,125 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_vxlan_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(vxlan_encap_conf.vni, &vni, 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
+		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17589,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..a8b5221a6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,103 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &action_vxlan_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &action_vxlan_encap_data->item_vlan,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &action_vxlan_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &action_vxlan_encap_data->item_udp,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &action_vxlan_encap_data->item_vxlan,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = vxlan_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &action_vxlan_encap_data->item_ipv6,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 24c199844..af9b96f9b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = RTE_BE16(1),
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..30dc53046 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,13 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
 Port Functions
 --------------
@@ -3650,6 +3657,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
                     ` (2 preceding siblings ...)
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-18 14:36   ` Nelio Laranjeiro
  2018-06-19  7:08     ` Ori Kam
  3 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-18 14:36 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

Due to the complexity of the NVGRE_ENCAP flow action and the fact that
testpmd does not allocate memory, this patch adds a new command in testpmd
to initialise a global structure containing the necessary information to
build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the nvgre_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 113 +++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  14 +++
 5 files changed, 286 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 93573606f..711914e53 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" vlan-tci eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14965,6 +14971,111 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_nvgre_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	memcpy(nvgre_encap_conf.tni, &tni, 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+	       ETHER_ADDR_LEN);
+	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+	       ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <tni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ipv4|ipv6 <tni> <ip-src> <ip-dst>"
+		" <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17591,6 +17702,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a8b5221a6..8f3bf58bf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3090,6 +3131,94 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &action_nvgre_encap_data->item_eth,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &action_nvgre_encap_data->item_vlan,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &action_nvgre_encap_data->item_ipv4,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &action_nvgre_encap_data->item_nvgre,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = nvgre_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &action_nvgre_encap_data->item_ipv6,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index af9b96f9b..8f580ac5b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 30dc53046..78508b441 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
  testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
  testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
 Port Functions
 --------------
 
@@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation, outer layer configuration
+  is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18  9:38     ` Nélio Laranjeiro
@ 2018-06-18 14:40       ` Ferruh Yigit
  2018-06-19  7:32         ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Ferruh Yigit @ 2018-06-18 14:40 UTC (permalink / raw)
  To: Nélio Laranjeiro
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On 6/18/2018 10:38 AM, Nélio Laranjeiro wrote:
> On Mon, Jun 18, 2018 at 10:05:03AM +0100, Ferruh Yigit wrote:
>> On 6/18/2018 9:52 AM, Nelio Laranjeiro wrote:
>>> This series adds an easy and maintainable configuration version support for
>>> those two actions for 18.08 by using global variables in testpmd to store the
>>> necessary information for the tunnel encapsulation.  Those variables are used
>>> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
>>> the action for flows.
>>>
>>> A common way to use it:
>>>
>>>  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>>>
>>>  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
>>>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>>>
>>>  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
>>>
>>>  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
>>>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
>>>
>>> This also replace the proposal done by Mohammad Abdul Awal [1] which handles
>>> in a more complex way for the same work.
>>
>> Hi Nelio,
>>
>> Is this set on top of mentioned set?
> 
> Hi Ferruh,
> 
> No it is another implementation of Declan's API.  It can be directly
> applied on top of the current DPDK code without any other patch.

I mean "based on" more than "on top of". So if this code is based on the
referenced patchset, I believe it should keep the original sign-off.

If this code is a completely new implementation that replaces the referenced
patchset, I believe it would be nice to comment on the original patch or
communicate about it instead of just sending another set to replace the
original one.

> 
>> If so shouldn't the set has the Awal's sign-off too?
>> Are you replacing someone else patch with dropping his sign-off?
>>
>>> Note this API has already a modification planned for 18.11 [2] thus those
>>> series should have a limited life for a single release.
>>>
>>> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
>>> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
>>>
>>>
>>> Changes in v2:
>>>
>>> - add default IPv6 values for NVGRE encapsulation.
>>> - replace VXLAN to NVGRE in comments concerning NVGRE layer.
>>>
>>> Nelio Laranjeiro (2):
>>>   app/testpmd: add VXLAN encap/decap support
>>>   app/testpmd: add NVGRE encap/decap support
>>>
>>>  app/test-pmd/cmdline.c                      | 169 +++++++++++++
>>>  app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
>>>  app/test-pmd/testpmd.c                      |  28 +++
>>>  app/test-pmd/testpmd.h                      |  28 +++
>>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
>>>  5 files changed, 498 insertions(+)
> 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
@ 2018-06-18 16:28     ` Iremonger, Bernard
  2018-06-19  9:41       ` Nélio Laranjeiro
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
  1 sibling, 1 reply; 63+ messages in thread
From: Iremonger, Bernard @ 2018-06-18 16:28 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Lu, Wenzhuo, Wu,
	Jingjing, Awal, Mohammad Abdul

Hi Nelio,

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Monday, June 18, 2018 3:37 PM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>; Awal, Mohammad Abdul
> <mohammad.abdul.awal@intel.com>
> Subject: [PATCH v3 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
> 
> This series adds an easy and maintainable configuration version support for
> those two actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are used in
> conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create
> easily the action for flows.
> 
> A common way to use it:
> 
>  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
>  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22  flow create
> 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
>  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22  flow
> create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> 
>  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22  flow create 0
> ingress pattern end actions nvgre_encap / queue index 0 / end
> 

It might be useful to add the above sample testpmd command lines to section 4.12 of the doc/guides/testpmd_app_ug/testpmd_funcs.rst file.

> This also replace the proposal done by Mohammad Abdul Awal [1] which
> handles in a more complex way for the same work.
> 
> Note this API has already a modification planned for 18.11 [2] thus those series
> should have a limited life for a single release.
> 
> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> 
> Changes in v3:
> 
> - support VLAN in the outer encapsulation.
> - fix the documentation with missing arguments.
> 
> Changes in v2:
> 
> - add default IPv6 values for NVGRE encapsulation.
> - replace VXLAN to NVGRE in comments concerning NVGRE layer.
> 
> Nelio Laranjeiro (2):
>   app/testpmd: add VXLAN encap/decap support
>   app/testpmd: add NVGRE encap/decap support
> 
>  app/test-pmd/cmdline.c                      | 242 ++++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 268 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  32 +++
>  app/test-pmd/testpmd.h                      |  32 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  27 ++
>  5 files changed, 601 insertions(+)
> 
> --
> 2.17.1

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN " Nelio Laranjeiro
  2018-06-18 12:47   ` Mohammad Abdul Awal
@ 2018-06-18 21:02   ` Stephen Hemminger
  2018-06-19  9:44     ` Nélio Laranjeiro
  1 sibling, 1 reply; 63+ messages in thread
From: Stephen Hemminger @ 2018-06-18 21:02 UTC (permalink / raw)
  To: Nelio Laranjeiro
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On Mon, 18 Jun 2018 10:52:54 +0200
Nelio Laranjeiro <nelio.laranjeiro@6wind.com> wrote:

>  
> +struct vxlan_encap_conf vxlan_encap_conf = {
> +	.select_ipv4 = 1,
> +	.vni = "\x00\x00\x00",
> +	.udp_src = RTE_BE16(1),

Overall looks good. One enhancement I would suggest is to implement generating
the UDP source port based on a hash of fields from the inner packet (as
suggested in RFC 7348).  This would be enabled by default (a UDP source port
of 0 would act as the flag to enable it).
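
Something along these lines, purely as a sketch (rte_jhash comes from
rte_jhash.h; how the inner headers are located is left out here, and
folding into the dynamic port range is just one option):

	#include <stdint.h>
	#include <rte_jhash.h>

	/* Fold a hash of the inner L3/L4 fields into the IANA dynamic
	 * port range 49152-65535, as RFC 7348 suggests for entropy. */
	static uint16_t
	vxlan_udp_src_port(const void *inner_hdrs, uint32_t len)
	{
		uint32_t h = rte_jhash(inner_hdrs, len, 0);

		return (uint16_t)(49152 + (h % 16384));
	}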

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-19  7:08     ` Ori Kam
  0 siblings, 0 replies; 63+ messages in thread
From: Ori Kam @ 2018-06-19  7:08 UTC (permalink / raw)
  To: Nélio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Mohammad Abdul Awal

Small comment,

> -----Original Message-----
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      | 113 +++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  15 +++
>  app/test-pmd/testpmd.h                      |  15 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  14 +++
>  5 files changed, 286 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 93573606f..711914e53 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> +
> +static void cmd_set_nvgre_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_nvgre_result *res = parsed_result;
> +	uint32_t tni = rte_cpu_to_be_32(res->tni) >> 8;

Is this also correct in the case of a big endian system?
I think it will remove part of the tni.

> +
> +	if (strcmp(res->nvgre, "nvgre") == 0)
> +		nvgre_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
> +		nvgre_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		nvgre_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		nvgre_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	memcpy(nvgre_encap_conf.tni, &tni, 3);

I don't think this will work as expected on a big endian system.
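
For illustration: with res->tni = 0xabcdef, rte_cpu_to_be_32() followed by
the 8-bit shift leaves the three significant bytes at the front of the
value only on a little endian host; on a big endian host the conversion is
a no-op and the shift moves the low byte out of the three bytes the memcpy
copies. One endianness-safe shape could be (a sketch only, reusing the
variables from the patch, not a requirement for v4):

	/* Keep the 24-bit TNI in network byte order, then copy its
	 * three meaningful bytes regardless of host endianness. */
	union {
		uint32_t value;
		uint8_t bytes[4];
	} id = { .value = rte_cpu_to_be_32(res->tni & 0x00ffffff) };

	memcpy(nvgre_encap_conf.tni, &id.bytes[1], 3);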

> +	if (nvgre_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src,
> nvgre_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst,
> nvgre_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src,
> nvgre_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst,
> nvgre_encap_conf.ipv6_dst);
> +	}
> +	if (nvgre_encap_conf.select_vlan)
> +		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +	memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +}
> +
> --
> 2.17.1


Best,
Ori

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-19  7:09     ` Ori Kam
  2018-06-19  9:40       ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Ori Kam @ 2018-06-19  7:09 UTC (permalink / raw)
  To: Nélio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Mohammad Abdul Awal



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nelio Laranjeiro
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      | 129 ++++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  17 +++
>  app/test-pmd/testpmd.h                      |  17 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 ++
>  5 files changed, 315 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 27e2aa8c8..93573606f 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> +static void cmd_set_vxlan_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_vxlan_result *res = parsed_result;
> +	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;

Is this also correct in the case of a big endian system?
I think it will remove part of the vni.

> +
> +	if (strcmp(res->vxlan, "vxlan") == 0)
> +		vxlan_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
> +		vxlan_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		vxlan_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		vxlan_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	memcpy(vxlan_encap_conf.vni, &vni, 3);

I don't think this line is correct when running on a big endian system.

> +	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
> +	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
> +	if (vxlan_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src,
> vxlan_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst,
> vxlan_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src,
> vxlan_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst,
> vxlan_encap_conf.ipv6_dst);
> +	}
> +	if (vxlan_encap_conf.select_vlan)
> +		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +	memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +	       ETHER_ADDR_LEN);
> +}
> 
> --
> 2.17.1

Best,
Ori

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18 14:40       ` Ferruh Yigit
@ 2018-06-19  7:32         ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-19  7:32 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On Mon, Jun 18, 2018 at 03:40:53PM +0100, Ferruh Yigit wrote:
> On 6/18/2018 10:38 AM, Nélio Laranjeiro wrote:
> > On Mon, Jun 18, 2018 at 10:05:03AM +0100, Ferruh Yigit wrote:
> >> On 6/18/2018 9:52 AM, Nelio Laranjeiro wrote:
> >>> This series adds an easy and maintainable configuration version support for
> >>> those two actions for 18.08 by using global variables in testpmd to store the
> >>> necessary information for the tunnel encapsulation.  Those variables are used
> >>> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> >>> the action for flows.
> >>>
> >>> A common way to use it:
> >>>
> >>>  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >>>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> >>>
> >>>  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> >>>  flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> >>>
> >>>  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >>>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> >>>
> >>>  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> >>>  flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> >>>
> >>> This also replace the proposal done by Mohammad Abdul Awal [1] which handles
> >>> in a more complex way for the same work.
> >>
> >> Hi Nelio,
> >>
> >> Is this set on top of mentioned set?
> > 
> > Hi Ferruh,
> > 
> > No it is another implementation of Declan's API.  It can be directly
> > applied on top of the current DPDK code without any other patch.
> 
> I mean "based on" more than "on top of". So if this code is based on referenced
> patchset, I believe it should keep original sign-off.
> 
> If this code is completely new implementation that replaces referenced patchset,
> I believe it would be nice to comment on the original patch or communicate about
> it instead of just sending another set to replace original one.

Hi Ferruh,

I agree with your point of view, but my intention was to show how hard it
will be for an application to implement such actions (as mentioned by
Adrien [1][2]), whereas Mohammad has made the implementation for the
testpmd command line interactive mode.  That is also the reason why I
copied Mohammad on my series in the first place.
Note that such an implementation request had already been made by Thomas
[3]; even so, the API entered DPDK without one.

I did not comment on his series because I don't have any reason to block
it; if his series enters, mine just won't.  I also agree that there is no
need to have both implementations in DPDK, but it is worth showing how an
application may have to deal with such actions.

> >> If so shouldn't the set has the Awal's sign-off too?
> >> Are you replacing someone else patch with dropping his sign-off?
> >>
> >>> Note this API has already a modification planned for 18.11 [2] thus those
> >>> series should have a limited life for a single release.
> >>>
> >>> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> >>> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> >>>
> >>>
> >>> Changes in v2:
> >>>
> >>> - add default IPv6 values for NVGRE encapsulation.
> >>> - replace VXLAN to NVGRE in comments concerning NVGRE layer.
> >>>
> >>> Nelio Laranjeiro (2):
> >>>   app/testpmd: add VXLAN encap/decap support
> >>>   app/testpmd: add NVGRE encap/decap support
> >>>
> >>>  app/test-pmd/cmdline.c                      | 169 +++++++++++++
> >>>  app/test-pmd/cmdline_flow.c                 | 248 ++++++++++++++++++++
> >>>  app/test-pmd/testpmd.c                      |  28 +++
> >>>  app/test-pmd/testpmd.h                      |  28 +++
> >>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  25 ++
> >>>  5 files changed, 498 insertions(+)
> > 
> 

Regards,

[1] https://mails.dpdk.org/archives/dev/2018-April/095945.html
[2] https://mails.dpdk.org/archives/dev/2018-April/098124.html
[3] https://mails.dpdk.org/archives/dev/2018-April/099799.html

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-19  7:09     ` Ori Kam
@ 2018-06-19  9:40       ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-19  9:40 UTC (permalink / raw)
  To: Ori Kam
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On Tue, Jun 19, 2018 at 07:09:28AM +0000, Ori Kam wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nelio Laranjeiro
> > Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> > ---
> >  app/test-pmd/cmdline.c                      | 129 ++++++++++++++++++
> >  app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
> >  app/test-pmd/testpmd.c                      |  17 +++
> >  app/test-pmd/testpmd.h                      |  17 +++
> >  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 ++
> >  5 files changed, 315 insertions(+)
> > 
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> > index 27e2aa8c8..93573606f 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > +static void cmd_set_vxlan_parsed(void *parsed_result,
> > +	__attribute__((unused)) struct cmdline *cl,
> > +	__attribute__((unused)) void *data)
> > +{
> > +	struct cmd_set_vxlan_result *res = parsed_result;
> > +	uint32_t vni = rte_cpu_to_be_32(res->vni) >> 8;
> 
> Is this also correct in case of big endian system? 
> I think it will  remove part of the vni. 
> 
> > +
> > +	if (strcmp(res->vxlan, "vxlan") == 0)
> > +		vxlan_encap_conf.select_vlan = 0;
> > +	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
> > +		vxlan_encap_conf.select_vlan = 1;
> > +	if (strcmp(res->ip_version, "ipv4") == 0)
> > +		vxlan_encap_conf.select_ipv4 = 1;
> > +	else if (strcmp(res->ip_version, "ipv6") == 0)
> > +		vxlan_encap_conf.select_ipv4 = 0;
> > +	else
> > +		return;
> > +	memcpy(vxlan_encap_conf.vni, &vni, 3);
> 
> I don't think this line is correct when running on big endian system.

Yes, this is wrong, it will be fixed in v4.

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18 16:28     ` Iremonger, Bernard
@ 2018-06-19  9:41       ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-19  9:41 UTC (permalink / raw)
  To: Iremonger, Bernard
  Cc: dev, Adrien Mazarguil, Lu, Wenzhuo, Wu, Jingjing, Awal, Mohammad Abdul

On Mon, Jun 18, 2018 at 04:28:05PM +0000, Iremonger, Bernard wrote:
> Hi Nelio,
> 
> > -----Original Message-----
> > From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> > Sent: Monday, June 18, 2018 3:37 PM
> > To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> > Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> > Iremonger, Bernard <bernard.iremonger@intel.com>; Awal, Mohammad Abdul
> > <mohammad.abdul.awal@intel.com>
> > Subject: [PATCH v3 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
> > 
> > This series adds an easy and maintainable configuration version support for
> > those two actions for 18.08 by using global variables in testpmd to store the
> > necessary information for the tunnel encapsulation.  Those variables are used in
> > conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create
> > easily the action for flows.
> > 
> > A common way to use it:
> > 
> >  set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> >  set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22  flow create
> > 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> >  set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22  flow
> > create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> > 
> >  set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22  flow create 0
> > ingress pattern end actions nvgre_encap / queue index 0 / end
> > 
> 
> It might be useful to add the above sample testpmd command lines to
> section 4.12 of the doc/guides/testpmd_app_ug/testpmd_funcs.rst file
>[...]

Agreed, I'll add it in the v4.

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-18 21:02   ` Stephen Hemminger
@ 2018-06-19  9:44     ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-19  9:44 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal

On Mon, Jun 18, 2018 at 02:02:10PM -0700, Stephen Hemminger wrote:
> On Mon, 18 Jun 2018 10:52:54 +0200
> Nelio Laranjeiro <nelio.laranjeiro@6wind.com> wrote:
> 
> >  
> > +struct vxlan_encap_conf vxlan_encap_conf = {
> > +	.select_ipv4 = 1,
> > +	.vni = "\x00\x00\x00",
> > +	.udp_src = RTE_BE16(1),
> 
> Overall looks good. One enhancement I would suggest is to implement generating
> the UDP source port based on a hash of fields from inner packet (as suggested
> in RFC 7348).  This would be enabled by default (use udp source port of 0
> as a flag to enable it).

I'll make the modification for the v4,

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
  2018-06-18 16:28     ` Iremonger, Bernard
@ 2018-06-21  7:13     ` Nelio Laranjeiro
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                         ` (3 more replies)
  1 sibling, 4 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-21  7:13 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Ori Kam,
	Stephen Hemminger

This series adds an easy and maintainable configuration version support for
those two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
the action for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

This also replace the proposal done by Mohammad Abdul Awal [1] which handles
in a more complex way for the same work.

Note this API has already a modification planned for 18.11 [2] thus those
series should have a limited life for a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in RFC 7348.
- use default rte flow mask for each item.

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN to NVGRE in comments concerning NVGRE layer.

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 252 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 268 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 +++
 app/test-pmd/testpmd.h                      |  32 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  72 ++++++
 5 files changed, 656 insertions(+)

-- 
2.18.0.rc2

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
@ 2018-06-21  7:13       ` Nelio Laranjeiro
  2018-06-26 10:51         ` Ori Kam
  2018-06-26 12:43         ` Iremonger, Bernard
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
                         ` (2 subsequent siblings)
  3 siblings, 2 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-21  7:13 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Ori Kam,
	Stephen Hemminger

Due to the complexity of the VXLAN_ENCAP flow action, and based on the
fact that testpmd does not allocate memory, this patch adds a new command
in testpmd to initialise a global structure containing the necessary
information to build the outer layer of the packet.  This same global
structure is then used by the flow command line in testpmd when the
vxlan_encap action is parsed; at this point, the conversion into such an
action becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 134 +++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  17 +++
 app/test-pmd/testpmd.h                      |  17 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  35 +++++
 5 files changed, 342 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..048fff2bd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" vlan-tci eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14846,130 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_vxlan_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
+		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17594,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..7823addb7 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,103 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = vxlan_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 24c199844..5f581c360 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..2743043d3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,13 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
 Port Functions
 --------------
@@ -3650,6 +3657,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation; the outer layer
+  configuration is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3928,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+VXLAN encapsulation outer layers have default values pre-configured in testpmd
+source code; they can be changed by issuing the following commands.
+
+IPv4 VXLAN outer header::
+
+  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+  testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+  testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0.rc2

^ permalink raw reply	[flat|nested] 63+ messages in thread
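
As a side note on the VNI handling in cmd_set_vxlan_parsed() above, a
standalone sketch of the same 24-bit network-order conversion; main() and
the printf are only for illustration:

  #include <stdio.h>
  #include <stdint.h>
  #include <rte_byteorder.h>

  int
  main(void)
  {
          uint32_t vni = 4; /* as typed on "set vxlan ipv4 4 ..." */
          union {
                  uint32_t vxlan_id;
                  uint8_t vni[4];
          } id = {
                  /* Keep the low 24 bits, stored in network byte order. */
                  .vxlan_id = rte_cpu_to_be_32(vni) & RTE_BE32(0x00ffffff),
          };

          /* Bytes 1..3 now hold the VNI as it goes on the wire and are
           * what gets copied into the 3-byte vxlan_encap_conf.vni field. */
          printf("%02x %02x %02x\n", id.vni[1], id.vni[2], id.vni[3]);
          return 0; /* prints: 00 00 04 */
  }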

* [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-21  7:13       ` Nelio Laranjeiro
  2018-06-26 10:48         ` Ori Kam
  2018-06-26 12:48         ` Iremonger, Bernard
  2018-06-22  7:42       ` [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
  2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
  3 siblings, 2 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-21  7:13 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Ori Kam,
	Stephen Hemminger

Due to the complexity of the NVGRE_ENCAP flow action, and based on the
fact that testpmd does not allocate memory, this patch adds a new command
in testpmd to initialise a global structure containing the necessary
information to build the outer layer of the packet.  This same global
structure is then used by the flow command line in testpmd when the
nvgre_encap action is parsed; at this point, the conversion into such an
action becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 118 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  37 ++++++
 5 files changed, 314 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 048fff2bd..ad7f9eda5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" vlan-tci eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14970,6 +14976,116 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
+				 "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_nvgre_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ipv4|ipv6 <vni> <ip-src> <ip-dst>"
+		" <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17596,6 +17712,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7823addb7..fea9380c4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3090,6 +3131,94 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = nvgre_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+			.src_addr = nvgre_encap_conf.ipv4_src,
+			.dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5f581c360..c60b507a0 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2743043d3..17e0fef63 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
  testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
  testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
 Port Functions
 --------------
 
@@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation; the outer layer
+  configuration is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3950,6 +3964,29 @@ IPv6 VXLAN outer header::
   testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+NVGRE encapsulation outer layers have default values pre-configured in testpmd
+source code; they can be changed by issuing the following commands.
+
+IPv4 NVGRE outer header::
+
+  testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+  testpmd> set nvgre-with-vlan ipv4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+  testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+  testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+
 BPF Functions
 --------------
 
-- 
2.18.0.rc2

^ permalink raw reply	[flat|nested] 63+ messages in thread
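
For readers wiring the same action up outside testpmd, a hedged sketch of
how an nvgre_encap action built this way is handed to rte_flow_create();
the items array is assumed to be filled exactly as
parse_vc_action_nvgre_encap() does it, and error handling is elided:

  #include <rte_flow.h>

  /* Attach an NVGRE encapsulation action, described by a caller-provided
   * item list (ETH / IPv4 / NVGRE / END), to a catch-all ingress rule. */
  static struct rte_flow *
  create_nvgre_encap_flow(uint16_t port_id, struct rte_flow_item *items)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_action_nvgre_encap encap = { .definition = items };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action actions[] = {
                  {
                          .type = RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
                          .conf = &encap,
                  },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_error error;

          return rte_flow_create(port_id, &attr, pattern, actions, &error);
  }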

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-22  7:42       ` Mohammad Abdul Awal
  2018-06-22  8:31         ` Nélio Laranjeiro
  2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
  3 siblings, 1 reply; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-06-22  7:42 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger

Hi Nelio,


On 21/06/2018 08:13, Nelio Laranjeiro wrote:
> This series adds an easy and maintainable configuration version support for
> those two actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are used
> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> the action for flows.
>
> A common way to use it:
>
>   set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>   flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

This way we can define only one tunnel for all the flows. This is not 
convenient for testing a scenario (e.g. multiport or switch) with 
multiple tunnels. Isn't it?

Regards,
Awal.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-22  7:42       ` [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
@ 2018-06-22  8:31         ` Nélio Laranjeiro
  2018-06-22  8:51           ` Mohammad Abdul Awal
  0 siblings, 1 reply; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-22  8:31 UTC (permalink / raw)
  To: Mohammad Abdul Awal
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger

On Fri, Jun 22, 2018 at 08:42:10AM +0100, Mohammad Abdul Awal wrote:
> Hi Nelio,
> 
> 
> On 21/06/2018 08:13, Nelio Laranjeiro wrote:
> > This series adds an easy and maintainable configuration version support for
> > those two actions for 18.08 by using global variables in testpmd to store the
> > necessary information for the tunnel encapsulation.  Those variables are used
> > in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> > the action for flows.
> > 
> > A common way to use it:
> > 
> >   set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >   flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
> This way we can define only one tunnel for all the flows. This is not
> convenient for testing a scenario (e.g. multiport or switch) with multiple
> tunnels. Isn't it?

Hi Awal.

The "set vxlan" command will just configure the outer VXLAN tunnel to be
used, when the "flow" command is invoked, it will use the VXLAN tunnel
information and create a valid VXLAN_ENCAP action.  For instance:

 testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
 testpmd> set vxlan ipv6 4 34 42 ::1 ::2222 80:12:13:14:15:16 22:22:22:22:22:22
 testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

will create two VXLAN_ENCAP flows, one with an IPv4 tunnel, the second
one with an IPv6 tunnel.  Whereas:

 testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.2.3.4 / end
 	actions vxlan_encap / queue index 0 / end
 testpmd> flow create 0 ingress pattern eth / ipv4 src is 20.2.3.4 / end
 	actions vxlan_encap / queue index 0 / end

will encapsulate the packets having as IPv4 source IP 10.2.3.4 and
20.2.3.4 with the same VXLAN tunnel headers.

Regards,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-22  8:31         ` Nélio Laranjeiro
@ 2018-06-22  8:51           ` Mohammad Abdul Awal
  2018-06-22  9:08             ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-06-22  8:51 UTC (permalink / raw)
  To: Nélio Laranjeiro
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger



On 22/06/2018 09:31, Nélio Laranjeiro wrote:
> On Fri, Jun 22, 2018 at 08:42:10AM +0100, Mohammad Abdul Awal wrote:
>> Hi Nelio,
>>
>>
>> On 21/06/2018 08:13, Nelio Laranjeiro wrote:
>>> This series adds an easy and maintainable configuration version support for
>>> those two actions for 18.08 by using global variables in testpmd to store the
>>> necessary information for the tunnel encapsulation.  Those variables are used
>>> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
>>> the action for flows.
>>>
>>> A common way to use it:
>>>
>>>    set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>    flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>> This way we can define only one tunnel for all the flows. This is not
>> convenient for testing a scenario (e.g. multiport or switch) with multiple
>> tunnels. Isn't it?
> Hi Awal.
>
> The "set vxlan" command will just configure the outer VXLAN tunnel to be
> used, when the "flow" command is invoked, it will use the VXLAN tunnel
> information and create a valid VXLAN_ENCAP action.  For instance:
>
>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>   testpmd> set vxlan ipv6 4 34 42 ::1 ::2222 80:12:13:14:15:16 22:22:22:22:22:22
>   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>
> will create two VXLAN_ENCAP flows, one with an IPv4 tunnel, the second
> one with an IPv6 tunnel.  Whereas:
>
>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>   testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.2.3.4 / end
>   	actions vxlan_encap / queue index 0 / end
>   testpmd> flow create 0 ingress pattern eth / ipv4 src is 20.2.3.4 / end
>   	actions vxlan_encap / queue index 0 / end
>
> will encapsulate the packets having as IPv4 source IP 10.2.3.4 and
> 20.2.3.4 with the same VXLAN tunnel headers.

I understand that the same IPv4 tunnel will be used for both flows in 
your example above.  I have the following questions.

1) How can we create two or more IPv4 (or IPv6) tunnels?
2) How can we make the flows use different IPv4 tunnels?
As an example,

  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
  testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
  testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
  

Is it possible?

Regards,
Awal.

>
> Regards,
>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-22  8:51           ` Mohammad Abdul Awal
@ 2018-06-22  9:08             ` Nélio Laranjeiro
  2018-06-22 10:19               ` Mohammad Abdul Awal
  0 siblings, 1 reply; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-22  9:08 UTC (permalink / raw)
  To: Mohammad Abdul Awal
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger

On Fri, Jun 22, 2018 at 09:51:15AM +0100, Mohammad Abdul Awal wrote:
> 
> 
> On 22/06/2018 09:31, Nélio Laranjeiro wrote:
> > On Fri, Jun 22, 2018 at 08:42:10AM +0100, Mohammad Abdul Awal wrote:
> > > Hi Nelio,
> > > 
> > > 
> > > On 21/06/2018 08:13, Nelio Laranjeiro wrote:
> > > > This series adds an easy and maintainable configuration version support for
> > > > those two actions for 18.08 by using global variables in testpmd to store the
> > > > necessary information for the tunnel encapsulation.  Those variables are used
> > > > in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> > > > the action for flows.
> > > > 
> > > > A common way to use it:
> > > > 
> > > >    set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > > >    flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > > This way we can define only one tunnel for all the flows. This is not
> > > convenient for testing a scenario (e.g. multiport or switch) with multiple
> > > tunnels. Isn't it?
> > Hi Awal.
> > 
> > The "set vxlan" command will just configure the outer VXLAN tunnel to be
> > used, when the "flow" command is invoked, it will use the VXLAN tunnel
> > information and create a valid VXLAN_ENCAP action.  For instance:
> > 
> >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> >   testpmd> set vxlan ipv6 4 34 42 ::1 ::2222 80:12:13:14:15:16 22:22:22:22:22:22
> >   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> > will create two VXLAN_ENCAP flows, one with an IPv4 tunnel, the second
> > one with an IPv6 tunnel.  Whereas:
> > 
> >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >   testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.2.3.4 / end
> >   	actions vxlan_encap / queue index 0 / end
> >   testpmd> flow create 0 ingress pattern eth / ipv4 src is 20.2.3.4 / end
> >   	actions vxlan_encap / queue index 0 / end
> > 
> > will encapsulate the packets having as IPv4 source IP 10.2.3.4 and
> > 20.2.3.4 with the same VXLAN tunnel headers.
> 
> I understand that the same IPv4 tunnel will be used for both flows in your
> example above.  I have the following questions.
> 
> 1) How can we create two or more IPv4 (or IPv6) tunnels?
> 2) How can we make the flows use different IPv4 tunnels?
> As an example,
> 
>  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
>  testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
>  testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
> 

Doing this, the flows will use the same tunnel, you must do:

 testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
 testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
 testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end

to have what you want.

> Is it possible?

Regards,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread
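
The ordering requirement comes from the fact that each "flow create"
snapshots the current globals at parse time; a toy model of that
behaviour (names are illustrative, not actual testpmd code):

  #include <stdio.h>

  /* A global config written by "set vxlan ...", snapshotted into
   * per-flow storage when "flow create ... vxlan_encap ..." is parsed. */
  struct conf { unsigned int vni; };

  static struct conf global_conf;

  struct flow { struct conf snapshot; };

  static void flow_create(struct flow *f) { f->snapshot = global_conf; }

  int
  main(void)
  {
          struct flow f1, f2;

          global_conf.vni = 4;
          flow_create(&f1);  /* f1 keeps VNI 4 */
          global_conf.vni = 42;
          flow_create(&f2);  /* f2 keeps VNI 42; f1 is unaffected */
          printf("f1 vni=%u f2 vni=%u\n", f1.snapshot.vni, f2.snapshot.vni);
          return 0;
  }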

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-22  9:08             ` Nélio Laranjeiro
@ 2018-06-22 10:19               ` Mohammad Abdul Awal
  2018-06-26 15:15                 ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-06-22 10:19 UTC (permalink / raw)
  To: Nélio Laranjeiro
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger



On 22/06/2018 10:08, Nélio Laranjeiro wrote:
> On Fri, Jun 22, 2018 at 09:51:15AM +0100, Mohammad Abdul Awal wrote:
>>
>> On 22/06/2018 09:31, Nélio Laranjeiro wrote:
>>> On Fri, Jun 22, 2018 at 08:42:10AM +0100, Mohammad Abdul Awal wrote:
>>>> Hi Nelio,
>>>>
>>>>
>>>> On 21/06/2018 08:13, Nelio Laranjeiro wrote:
>>>>> This series adds an easy and maintainable configuration version support for
>>>>> those two actions for 18.08 by using global variables in testpmd to store the
>>>>> necessary information for the tunnel encapsulation.  Those variables are used
>>>>> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
>>>>> the action for flows.
>>>>>
>>>>> A common way to use it:
>>>>>
>>>>>     set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>>>     flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>>>> This way we can define only one tunnel for all the flows. This is not
>>>> convenient for testing a scenario (e.g. multiport or switch) with multiple
>>>> tunnels. Isn't it?
>>> Hi Awal.
>>>
>>> The "set vxlan" command will just configure the outer VXLAN tunnel to be
>>> used, when the "flow" command is invoked, it will use the VXLAN tunnel
>>> information and create a valid VXLAN_ENCAP action.  For instance:
>>>
>>>    testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>>>    testpmd> set vxlan ipv6 4 34 42 ::1 ::2222 80:12:13:14:15:16 22:22:22:22:22:22
>>>    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
>>>
>>> will create two VXLAN_ENCAP flows, one with an IPv4 tunnel, the second
>>> one with an IPv6 tunnel.  Whereas:
>>>
>>>    testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>>    testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.2.3.4 / end
>>>    	actions vxlan_encap / queue index 0 / end
>>>    testpmd> flow create 0 ingress pattern eth / ipv4 src is 20.2.3.4 / end
>>>    	actions vxlan_encap / queue index 0 / end
>>>
>>> will encapsulate the packets having as IPv4 source IP 10.2.3.4 and
>>> 20.2.3.4 with the same VXLAN tunnel headers.
>> I understand that the same IPv4 tunnel will be used for both flows in your
>> example above.  I have the following questions.
>>
>> 1) How can we create two or more IPv4 (or IPv6) tunnels?
>> 2) How can we make the flows use different IPv4 tunnels?
>> As an example,
>>
>>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
>>   testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
>>   testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
>>
> Doing this, the flows will use the same tunnel, you must do:
>
>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
>   testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
>   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
>   testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
>
> to have what you want.
OK, thanks for the clarification. So, since there is only one global 
instance of the tunnel configuration, for any subsequent "set vxlan" 
operations the tunnel set by the last operation will be used. Maybe it 
should be clarified in the description/documentation?

>> Is it possible?
> Regards,
>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-26 10:48         ` Ori Kam
  2018-06-26 12:48         ` Iremonger, Bernard
  1 sibling, 0 replies; 63+ messages in thread
From: Ori Kam @ 2018-06-26 10:48 UTC (permalink / raw)
  To: Nélio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Mohammad Abdul Awal,
	Stephen Hemminger

Acked-by: Ori Kam <orika@mellanox.com>

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Thursday, June 21, 2018 10:14 AM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>;
> Wenzhuo Lu <wenzhuo.lu@intel.com>; Jingjing Wu
> <jingjing.wu@intel.com>; Bernard Iremonger
> <bernard.iremonger@intel.com>; Mohammad Abdul Awal
> <mohammad.abdul.awal@intel.com>; Ori Kam <orika@mellanox.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
> 
> Due to the complexity of the NVGRE_ENCAP flow action, and based on the
> fact that testpmd does not allocate memory, this patch adds a new command
> in testpmd to initialise a global structure containing the necessary
> information to build the outer layer of the packet.  This same global
> structure is then used by the flow command line in testpmd when the
> nvgre_encap action is parsed; at this point, the conversion into such an
> action becomes trivial.
> 
> This global structure is only used for the encap action.
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      | 118 ++++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  15 +++
>  app/test-pmd/testpmd.h                      |  15 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  37 ++++++
>  5 files changed, 314 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 048fff2bd..ad7f9eda5 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void
> *parsed_result,
>  			" vlan-tci eth-src eth-dst\n"
>  			"       Configure the VXLAN encapsulation for
> flows.\n\n"
> 
> +			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
> +			"       Configure the NVGRE encapsulation for
> flows.\n\n"
> +
> +			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci
> eth-src eth-dst\n"
> +			"       Configure the NVGRE encapsulation for
> flows.\n\n"
> +
>  			, list_pkt_forwarding_modes()
>  		);
>  	}
> @@ -14970,6 +14976,116 @@ cmdline_parse_inst_t
> cmd_set_vxlan_with_vlan = {
>  	},
>  };
> 
> +/** Set NVGRE encapsulation details */
> +struct cmd_set_nvgre_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t nvgre;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t tni;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	uint16_t tci;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_nvgre_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set,
> "set");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
> "nvgre");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
> "nvgre-with-vlan");
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result,
> ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_num_t cmd_set_nvgre_tni =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni,
> UINT32);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
> +cmdline_parse_token_num_t cmd_set_nvgre_vlan =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci,
> UINT16);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result,
> eth_src);
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result,
> eth_dst);
> +
> +static void cmd_set_nvgre_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_nvgre_result *res = parsed_result;
> +	union {
> +		uint32_t nvgre_tni;
> +		uint8_t tni[4];
> +	} id = {
> +		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
> +	};
> +
> +	if (strcmp(res->nvgre, "nvgre") == 0)
> +		nvgre_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
> +		nvgre_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		nvgre_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		nvgre_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
> +	if (nvgre_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
> +	}
> +	if (nvgre_encap_conf.select_vlan)
> +		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_nvgre = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre ipv4|ipv6 <tni> <ip-src> <ip-dst> <eth-src>"
> +		" <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		NULL,
> +	},
> +};
> +
> +cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre-with-vlan ipv4|ipv6 <tni> <ip-src> <ip-dst>"
> +		" <vlan-tci> <eth-src> <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre_with_vlan,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_vlan,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		NULL,
> +	},
> +};
> +
>  /* Strict link priority scheduling mode setting */
>  static void
>  cmd_strict_link_prio_parsed(
> @@ -17596,6 +17712,8 @@ cmdline_parse_ctx_t main_ctx[] = {
>  #endif
>  	(cmdline_parse_inst_t *)&cmd_set_vxlan,
>  	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
>  	(cmdline_parse_inst_t *)&cmd_ddp_add,
>  	(cmdline_parse_inst_t *)&cmd_ddp_del,
>  	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 7823addb7..fea9380c4 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -241,6 +241,8 @@ enum index {
>  	ACTION_OF_PUSH_MPLS_ETHERTYPE,
>  	ACTION_VXLAN_ENCAP,
>  	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>  };
> 
>  /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
>  	struct rte_flow_item_vxlan item_vxlan;
>  };
> 
> +/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
> +#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
> +
> +/** Storage for struct rte_flow_action_nvgre_encap including external data. */
> +struct action_nvgre_encap_data {
> +	struct rte_flow_action_nvgre_encap conf;
> +	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	struct rte_flow_item_vlan item_vlan;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_nvgre item_nvgre;
> +};
> +
>  /** Maximum number of subsequent tokens and arguments on the stack. */
>  #define CTX_STACK_SIZE 16
> 
> @@ -796,6 +814,8 @@ static const enum index next_action[] = {
>  	ACTION_OF_PUSH_MPLS,
>  	ACTION_VXLAN_ENCAP,
>  	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>  	ZERO,
>  };
> 
> @@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
>  static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
>  				       const char *, unsigned int, void *,
>  				       unsigned int);
> +static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>  static int parse_destroy(struct context *, const struct token *,
>  			 const char *, unsigned int,
>  			 void *, unsigned int);
> @@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
>  		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
>  		.call = parse_vc,
>  	},
> +	[ACTION_NVGRE_ENCAP] = {
> +		.name = "nvgre_encap",
> +		.help = "NVGRE encapsulation, uses configuration set by \"set"
> +			" nvgre\"",
> +		.priv = PRIV_ACTION(NVGRE_ENCAP,
> +				    sizeof(struct action_nvgre_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_nvgre_encap,
> +	},
> +	[ACTION_NVGRE_DECAP] = {
> +		.name = "nvgre_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the NVGRE tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>  };
> 
>  /** Remove and return last entry from argument stack. */
> @@ -3090,6 +3131,94 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
>  	return ret;
>  }
> 
> +/** Parse NVGRE encap action. */
> +static int
> +parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_nvgre_encap_data *action_nvgre_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_nvgre_encap_data = ctx->object;
> +	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
> +		.conf = (struct rte_flow_action_nvgre_encap){
> +			.definition = action_nvgre_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_nvgre_encap_data->item_eth,
> +				.mask = &rte_flow_item_eth_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VLAN,
> +				.spec = &action_nvgre_encap_data->item_vlan,
> +				.mask = &rte_flow_item_vlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_nvgre_encap_data->item_ipv4,
> +				.mask = &rte_flow_item_ipv4_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
> +				.spec = &action_nvgre_encap_data->item_nvgre,
> +				.mask = &rte_flow_item_nvgre_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth.type = 0,
> +		.item_vlan.tci = nvgre_encap_conf.vlan_tci,
> +		.item_ipv4.hdr = {
> +		       .src_addr = nvgre_encap_conf.ipv4_src,
> +		       .dst_addr = nvgre_encap_conf.ipv4_dst,
> +		},
> +		.item_nvgre.flow_id = 0,
> +	};
> +	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
> +	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
> +	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!nvgre_encap_conf.select_ipv4) {
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
> +		       &nvgre_encap_conf.ipv6_src,
> +		       sizeof(nvgre_encap_conf.ipv6_src));
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
> +		       &nvgre_encap_conf.ipv6_dst,
> +		       sizeof(nvgre_encap_conf.ipv6_dst));
> +		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_nvgre_encap_data->item_ipv6,
> +			.mask = &rte_flow_item_ipv6_mask,
> +		};
> +	}
> +	if (!nvgre_encap_conf.select_vlan)
> +		action_nvgre_encap_data->items[1].type =
> +			RTE_FLOW_ITEM_TYPE_VOID;
> +	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
> +	       RTE_DIM(nvgre_encap_conf.tni));
> +	action->conf = &action_nvgre_encap_data->conf;
> +	return ret;
> +}
> +
>  /** Parse tokens for destroy command. */
>  static int
>  parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 5f581c360..c60b507a0 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
>  	.eth_dst = "\xff\xff\xff\xff\xff\xff",
>  };
> 
> +struct nvgre_encap_conf nvgre_encap_conf = {
> +	.select_ipv4 = 1,
> +	.select_vlan = 0,
> +	.tni = "\x00\x00\x00",
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.vlan_tci = 0,
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>  /* Forward function declarations */
>  static void map_port_queue_stats_mapping_registers(portid_t pi,
>  						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 0d6618788..2b1e448b0 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -496,6 +496,21 @@ struct vxlan_encap_conf {
>  };
>  struct vxlan_encap_conf vxlan_encap_conf;
> 
> +/* NVGRE encap/decap parameters. */
> +struct nvgre_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint32_t select_vlan:1;
> +	uint8_t tni[3];
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	rte_be16_t vlan_tci;
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct nvgre_encap_conf nvgre_encap_conf;
> +
>  static inline unsigned int
>  lcore_num(void)
>  {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 2743043d3..17e0fef63 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
>   testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
>   testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> 
> +Config NVGRE Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a NVGRE tunnel::
> +
> + testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
> + testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> +
>  Port Functions
>  --------------
> 
> @@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
>  - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
>    the VXLAN tunnel network overlay from the matched flow.
> 
> +- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration
> +  is done through `Config NVGRE Encap outer layers`_.
> +
> +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
> +  the NVGRE tunnel network overlay from the matched flow.
> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> @@ -3950,6 +3964,29 @@ IPv6 VXLAN outer header::
>    testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
>    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
> +Sample NVGRE encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +NVGRE encapsulation outer layer has default value pre-configured in testpmd
> +source code, those can be changed by using the following commands::
> +
> +IPv4 NVGRE outer header::
> +
> +  testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> +
> +  testpmd> set nvgre-with-vlan 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +IPv6 NVGRE outer header::
> +
> +  testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +
>  BPF Functions
>  --------------
> 
> --
> 2.18.0.rc2

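A side note for readers adapting the NVGRE patch above: the TNI conversion in
cmd_set_nvgre_parsed() is easy to misread.  The 24-bit tenant ID is byte-swapped
to network order as a 32-bit word, masked to its low 24 bits, and only the three
big-endian payload bytes are kept.  A standalone sketch of the same trick, using
htonl() in place of rte_cpu_to_be_32() so it builds without DPDK:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <arpa/inet.h>  /* htonl(), standing in for rte_cpu_to_be_32() */

  int
  main(void)
  {
          uint32_t tni = 4;  /* as typed in "set nvgre ipv4 4 ..." */
          union {
                  uint32_t word;
                  uint8_t bytes[4];
          } id = {
                  /* Force network byte order, keep only the low 24 bits. */
                  .word = htonl(tni) & htonl(0x00ffffff),
          };
          uint8_t conf_tni[3];

          /* bytes[0] is the always-zero high byte; the wire TNI is the
           * remaining three bytes, already in big-endian order. */
          memcpy(conf_tni, &id.bytes[1], 3);
          /* Prints "00 00 04" for tni == 4. */
          printf("%02x %02x %02x\n", conf_tni[0], conf_tni[1], conf_tni[2]);
          return 0;
  }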

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-26 10:51         ` Ori Kam
  2018-06-26 12:43         ` Iremonger, Bernard
  1 sibling, 0 replies; 63+ messages in thread
From: Ori Kam @ 2018-06-26 10:51 UTC (permalink / raw)
  To: Nélio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Mohammad Abdul Awal,
	Stephen Hemminger

Acked-by: Ori Kam <orika@mellanox.com>

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Thursday, June 21, 2018 10:14 AM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>;
> Wenzhuo Lu <wenzhuo.lu@intel.com>; Jingjing Wu
> <jingjing.wu@intel.com>; Bernard Iremonger
> <bernard.iremonger@intel.com>; Mohammad Abdul Awal
> <mohammad.abdul.awal@intel.com>; Ori Kam <orika@mellanox.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support
> 
> Due to the complex VXLAN_ENCAP flow action and based on the fact testpmd
> does not allocate memory, this patch adds a new command in testpmd to
> initialise a global structure containing the necessary information to
> make the outer layer of the packet.  This same global structure will
> then be used by the flow command line in testpmd when the action
> vxlan_encap will be parsed, at this point, the conversion into such
> action becomes trivial.
> 
> This global structure is only used for the encap action.
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      | 134 +++++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  17 +++
>  app/test-pmd/testpmd.h                      |  17 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  35 +++++
>  5 files changed, 342 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 27e2aa8c8..048fff2bd 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
>  			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
>  			"	Commit tm hierarchy.\n\n"
> 
> +			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
> +			" eth-src eth-dst\n"
> +			"       Configure the VXLAN encapsulation for flows.\n\n"
> +
> +			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
> +			" vlan-tci eth-src eth-dst\n"
> +			"       Configure the VXLAN encapsulation for flows.\n\n"
> +
>  			, list_pkt_forwarding_modes()
>  		);
>  	}
> @@ -14838,6 +14846,130 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
>  };
>  #endif
> 
> +/** Set VXLAN encapsulation details */
> +struct cmd_set_vxlan_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t vxlan;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t vlan_present:1;
> +	uint32_t vni;
> +	uint16_t udp_src;
> +	uint16_t udp_dst;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	uint16_t tci;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_vxlan_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
> +cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
> +cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
> +				 "vxlan-with-vlan");
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_num_t cmd_set_vxlan_vni =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
> +cmdline_parse_token_num_t cmd_set_vxlan_vlan =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
> +
> +static void cmd_set_vxlan_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_vxlan_result *res = parsed_result;
> +	union {
> +		uint32_t vxlan_id;
> +		uint8_t vni[4];
> +	} id = {
> +		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
> +	};
> +
> +	if (strcmp(res->vxlan, "vxlan") == 0)
> +		vxlan_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
> +		vxlan_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		vxlan_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		vxlan_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
> +	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
> +	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
> +	if (vxlan_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
> +	}
> +	if (vxlan_encap_conf.select_vlan)
> +		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_vxlan = {
> +	.f = cmd_set_vxlan_parsed,
> +	.data = NULL,
> +	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
> +		" <ip-dst> <eth-src> <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_vxlan_set,
> +		(void *)&cmd_set_vxlan_vxlan,
> +		(void *)&cmd_set_vxlan_ip_version,
> +		(void *)&cmd_set_vxlan_vni,
> +		(void *)&cmd_set_vxlan_udp_src,
> +		(void *)&cmd_set_vxlan_udp_dst,
> +		(void *)&cmd_set_vxlan_ip_src,
> +		(void *)&cmd_set_vxlan_ip_dst,
> +		(void *)&cmd_set_vxlan_eth_src,
> +		(void *)&cmd_set_vxlan_eth_dst,
> +		NULL,
> +	},
> +};
> +
> +cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
> +	.f = cmd_set_vxlan_parsed,
> +	.data = NULL,
> +	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
> +		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_vxlan_set,
> +		(void *)&cmd_set_vxlan_vxlan_with_vlan,
> +		(void *)&cmd_set_vxlan_ip_version,
> +		(void *)&cmd_set_vxlan_vni,
> +		(void *)&cmd_set_vxlan_udp_src,
> +		(void *)&cmd_set_vxlan_udp_dst,
> +		(void *)&cmd_set_vxlan_ip_src,
> +		(void *)&cmd_set_vxlan_ip_dst,
> +		(void *)&cmd_set_vxlan_vlan,
> +		(void *)&cmd_set_vxlan_eth_src,
> +		(void *)&cmd_set_vxlan_eth_dst,
> +		NULL,
> +	},
> +};
> +
>  /* Strict link priority scheduling mode setting */
>  static void
>  cmd_strict_link_prio_parsed(
> @@ -17462,6 +17594,8 @@ cmdline_parse_ctx_t main_ctx[] = {
>  #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
>  	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
>  #endif
> +	(cmdline_parse_inst_t *)&cmd_set_vxlan,
> +	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
>  	(cmdline_parse_inst_t *)&cmd_ddp_add,
>  	(cmdline_parse_inst_t *)&cmd_ddp_del,
>  	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 934cf7e90..7823addb7 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -239,6 +239,8 @@ enum index {
>  	ACTION_OF_POP_MPLS_ETHERTYPE,
>  	ACTION_OF_PUSH_MPLS,
>  	ACTION_OF_PUSH_MPLS_ETHERTYPE,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>  };
> 
>  /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -258,6 +260,23 @@ struct action_rss_data {
>  	uint16_t queue[ACTION_RSS_QUEUE_NUM];
>  };
> 
> +/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
> +#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
> +
> +/** Storage for struct rte_flow_action_vxlan_encap including external data. */
> +struct action_vxlan_encap_data {
> +	struct rte_flow_action_vxlan_encap conf;
> +	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	struct rte_flow_item_vlan item_vlan;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_udp item_udp;
> +	struct rte_flow_item_vxlan item_vxlan;
> +};
> +
>  /** Maximum number of subsequent tokens and arguments on the stack. */
>  #define CTX_STACK_SIZE 16
> 
> @@ -775,6 +794,8 @@ static const enum index next_action[] = {
>  	ACTION_OF_SET_VLAN_PCP,
>  	ACTION_OF_POP_MPLS,
>  	ACTION_OF_PUSH_MPLS,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>  	ZERO,
>  };
> 
> @@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
>  static int parse_vc_action_rss_queue(struct context *, const struct token *,
>  				     const char *, unsigned int, void *,
>  				     unsigned int);
> +static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>  static int parse_destroy(struct context *, const struct token *,
>  			 const char *, unsigned int,
>  			 void *, unsigned int);
> @@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
>  			      ethertype)),
>  		.call = parse_vc_conf,
>  	},
> +	[ACTION_VXLAN_ENCAP] = {
> +		.name = "vxlan_encap",
> +		.help = "VXLAN encapsulation, uses configuration set by \"set"
> +			" vxlan\"",
> +		.priv = PRIV_ACTION(VXLAN_ENCAP,
> +				    sizeof(struct action_vxlan_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_vxlan_encap,
> +	},
> +	[ACTION_VXLAN_DECAP] = {
> +		.name = "vxlan_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the VXLAN tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>  };
> 
>  /** Remove and return last entry from argument stack. */
> @@ -2951,6 +2993,103 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
>  	return len;
>  }
> 
> +/** Parse VXLAN encap action. */
> +static int
> +parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_vxlan_encap_data *action_vxlan_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_vxlan_encap_data = ctx->object;
> +	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
> +		.conf = (struct rte_flow_action_vxlan_encap){
> +			.definition = action_vxlan_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_vxlan_encap_data->item_eth,
> +				.mask = &rte_flow_item_eth_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VLAN,
> +				.spec = &action_vxlan_encap_data->item_vlan,
> +				.mask = &rte_flow_item_vlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_vxlan_encap_data->item_ipv4,
> +				.mask = &rte_flow_item_ipv4_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_UDP,
> +				.spec = &action_vxlan_encap_data->item_udp,
> +				.mask = &rte_flow_item_udp_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
> +				.spec = &action_vxlan_encap_data->item_vxlan,
> +				.mask = &rte_flow_item_vxlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth.type = 0,
> +		.item_vlan.tci = vxlan_encap_conf.vlan_tci,
> +		.item_ipv4.hdr = {
> +			.src_addr = vxlan_encap_conf.ipv4_src,
> +			.dst_addr = vxlan_encap_conf.ipv4_dst,
> +		},
> +		.item_udp.hdr = {
> +			.src_port = vxlan_encap_conf.udp_src,
> +			.dst_port = vxlan_encap_conf.udp_dst,
> +		},
> +		.item_vxlan.flags = 0,
> +	};
> +	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
> +	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
> +	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!vxlan_encap_conf.select_ipv4) {
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
> +		       &vxlan_encap_conf.ipv6_src,
> +		       sizeof(vxlan_encap_conf.ipv6_src));
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
> +		       &vxlan_encap_conf.ipv6_dst,
> +		       sizeof(vxlan_encap_conf.ipv6_dst));
> +		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_vxlan_encap_data->item_ipv6,
> +			.mask = &rte_flow_item_ipv6_mask,
> +		};
> +	}
> +	if (!vxlan_encap_conf.select_vlan)
> +		action_vxlan_encap_data->items[1].type =
> +			RTE_FLOW_ITEM_TYPE_VOID;
> +	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
> +	       RTE_DIM(vxlan_encap_conf.vni));
> +	action->conf = &action_vxlan_encap_data->conf;
> +	return ret;
> +}
> +
>  /** Parse tokens for destroy command. */
>  static int
>  parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 24c199844..5f581c360 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
>  struct gro_status gro_ports[RTE_MAX_ETHPORTS];
>  uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
> 
> +struct vxlan_encap_conf vxlan_encap_conf = {
> +	.select_ipv4 = 1,
> +	.select_vlan = 0,
> +	.vni = "\x00\x00\x00",
> +	.udp_src = 0,
> +	.udp_dst = RTE_BE16(4789),
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.vlan_tci = 0,
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>  /* Forward function declarations */
>  static void map_port_queue_stats_mapping_registers(portid_t pi,
>  						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index f51cd9dd9..0d6618788 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -479,6 +479,23 @@ struct gso_status {
>  extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
>  extern uint16_t gso_max_segment_size;
> 
> +/* VXLAN encap/decap parameters. */
> +struct vxlan_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint32_t select_vlan:1;
> +	uint8_t vni[3];
> +	rte_be16_t udp_src;
> +	rte_be16_t udp_dst;
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	rte_be16_t vlan_tci;
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct vxlan_encap_conf vxlan_encap_conf;
> +
>  static inline unsigned int
>  lcore_num(void)
>  {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0d6fd50ca..2743043d3 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1534,6 +1534,13 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
> 
>  This command should be run when the port is stopped, or else it will fail.
> 
> +Config VXLAN Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
> +
> + testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
> + testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> 
>  Port Functions
>  --------------
> @@ -3650,6 +3657,12 @@ This section lists supported actions and their attributes, if any.
> 
>    - ``ethertype``: Ethertype.
> 
> +- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
> +  is done through `Config VXLAN Encap outer layers`_.
> +
> +- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
> +  the VXLAN tunnel network overlay from the matched flow.
> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> @@ -3915,6 +3928,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
>     0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
>     1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
> 
> +Sample VXLAN encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +VXLAN encapsulation outer layer has default value pre-configured in testpmd
> +source code, those can be changed by using the following commands::
> +
> +IPv4 VXLAN outer header::
> +
> +  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +IPv6 VXLAN outer header::
> +
> +  testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
>  BPF Functions
>  --------------
> 
> --
> 2.18.0.rc2

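A note on the item-array technique in parse_vc_action_vxlan_encap() above: the
pattern used as the encap definition is pre-built as ETH / VLAN / IPV4 / UDP /
VXLAN / END and then patched in place, the VLAN slot being turned into a VOID
item when no TCI is configured and the L3 slot being rewritten for IPv6.  A
reduced, DPDK-free sketch of that in-place patching (all names here are
illustrative):

  #include <stdio.h>

  enum item_type { ETH, VLAN, IPV4, IPV6, UDP, VXLAN, END, VOID };

  static const char *const name[] = {
          "eth", "vlan", "ipv4", "ipv6", "udp", "vxlan", "end", "void",
  };

  int
  main(void)
  {
          int select_vlan = 0;  /* from the global encap configuration */
          int select_ipv4 = 0;  /* 0: use the IPv6 outer header instead */
          enum item_type items[] = { ETH, VLAN, IPV4, UDP, VXLAN, END };
          unsigned int i;

          if (!select_ipv4)
                  items[2] = IPV6;  /* overwrite the L3 slot in place */
          if (!select_vlan)
                  items[1] = VOID;  /* neutral item, ignored by PMDs */
          /* Prints "eth / void / ipv6 / udp / vxlan / end". */
          for (i = 0; items[i] != END; ++i)
                  printf("%s / ", name[items[i]]);
          printf("end\n");
          return 0;
  }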

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-26 10:51         ` Ori Kam
@ 2018-06-26 12:43         ` Iremonger, Bernard
  1 sibling, 0 replies; 63+ messages in thread
From: Iremonger, Bernard @ 2018-06-26 12:43 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Lu, Wenzhuo, Wu,
	Jingjing, Awal, Mohammad Abdul, Ori Kam, Stephen Hemminger

Hi Nelio,

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Thursday, June 21, 2018 8:14 AM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>; Awal, Mohammad Abdul
> <mohammad.abdul.awal@intel.com>; Ori Kam <orika@mellanox.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support
> 
> Due to the complex VXLAN_ENCAP flow action and based on the fact testpmd
> does not allocate memory, this patch adds a new command in testpmd to
> initialise a global structure containing the necessary information to make the
> outer layer of the packet.  This same global structure will then be used by the
> flow command line in testpmd when the action vxlan_encap will be parsed, at
> this point, the conversion into such action becomes trivial.
> 
> This global structure is only used for the encap action.
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
<snip>


> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0d6fd50ca..2743043d3 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1534,6 +1534,13 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
> 
>  This command should be run when the port is stopped, or else it will fail.
> 
> +Config VXLAN Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
> +
> + testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
> + testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> 
>  Port Functions
>  --------------
> @@ -3650,6 +3657,12 @@ This section lists supported actions and their attributes, if any.
> 
>    - ``ethertype``: Ethertype.
> 
> +- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
> +  is done through `Config VXLAN Encap outer layers`_.
> +
> +- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
> +  the VXLAN tunnel network overlay from the matched flow.
> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> @@ -3915,6 +3928,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
>     0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
>     1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
> 
> +Sample VXLAN encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +VXLAN encapsulation outer layer has default value pre-configured in testpmd
> +source code, those can be changed by using the following commands::

make doc-guides-html
sphinx processing guides-html...
dpdk/doc/guides/testpmd_app_ug/testpmd_funcs.rst:3951: WARNING: Literal block expected; none found.

> +
> +IPv4 VXLAN outer header::
> +
> +  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +IPv6 VXLAN outer header::
> +
> +  testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
>  BPF Functions
>  --------------
> 
> --
> 2.18.0.rc2

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  2018-06-26 10:48         ` Ori Kam
@ 2018-06-26 12:48         ` Iremonger, Bernard
  2018-06-26 15:15           ` Nélio Laranjeiro
  1 sibling, 1 reply; 63+ messages in thread
From: Iremonger, Bernard @ 2018-06-26 12:48 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Lu, Wenzhuo, Wu,
	Jingjing, Awal, Mohammad Abdul, Ori Kam, Stephen Hemminger

Hi Nelio,

> -----Original Message-----
> From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Thursday, June 21, 2018 8:14 AM
> To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>; Awal, Mohammad Abdul
> <mohammad.abdul.awal@intel.com>; Ori Kam <orika@mellanox.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
> 
> Due to the complex NVGRE_ENCAP flow action and based on the fact testpmd
> does not allocate memory, this patch adds a new command in testpmd to
> initialise a global structure containing the necessary information to make the
> outer layer of the packet.  This same global structure will then be used by the
> flow command line in testpmd when the action nvgre_encap will be parsed, at
> this point, the conversion into such action becomes trivial.
> 
> This global structure is only used for the encap action.
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> ---
>  app/test-pmd/cmdline.c                      | 118 ++++++++++++++++++
>  app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
>  app/test-pmd/testpmd.c                      |  15 +++
>  app/test-pmd/testpmd.h                      |  15 +++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  37 ++++++
>  5 files changed, 314 insertions(+)

<snip>

> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 2743043d3..17e0fef63 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
>   testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
>   testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> 
> +Config NVGRE Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a NVGRE tunnel::
> +
> + testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
> + testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> +
>  Port Functions
>  --------------
> 
> @@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
>  - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
>    the VXLAN tunnel network overlay from the matched flow.
> 
> +- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration
> +  is done through `Config NVGRE Encap outer layers`_.
> +
> +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
> +  the NVGRE tunnel network overlay from the matched flow.
> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> @@ -3950,6 +3964,29 @@ IPv6 VXLAN outer header::
>    testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
>    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> 
> +Sample NVGRE encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +NVGRE encapsulation outer layer has default value pre-configured in testpmd
> +source code, those can be changed by using the following commands::

make doc-guides-html
sphinx processing guides-html...
dpdk/doc/guides/testpmd_app_ug/testpmd_funcs.rst:3973: WARNING: Literal block expected; none found.

> +
> +IPv4 NVGRE outer header::
> +
> +  testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
> +
> +  testpmd> set nvgre-with-vlan 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +IPv6 NVGRE outer header::
> +
> +  testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +  testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
> +  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> +
> +
>  BPF Functions
>  --------------
> 
> --
> 2.18.0.rc2

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-22 10:19               ` Mohammad Abdul Awal
@ 2018-06-26 15:15                 ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-26 15:15 UTC (permalink / raw)
  To: Mohammad Abdul Awal
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Ori Kam, Stephen Hemminger

On Fri, Jun 22, 2018 at 11:19:14AM +0100, Mohammad Abdul Awal wrote:
> On 22/06/2018 10:08, Nélio Laranjeiro wrote:
> > On Fri, Jun 22, 2018 at 09:51:15AM +0100, Mohammad Abdul Awal wrote:
> > > 
> > > On 22/06/2018 09:31, Nélio Laranjeiro wrote:
> > > > On Fri, Jun 22, 2018 at 08:42:10AM +0100, Mohammad Abdul Awal wrote:
> > > > > Hi Nelio,
> > > > > 
> > > > > 
> > > > > On 21/06/2018 08:13, Nelio Laranjeiro wrote:
> > > > > > This series adds an easy and maintainable configuration version support for
> > > > > > those two actions for 18.08 by using global variables in testpmd to store the
> > > > > > necessary information for the tunnel encapsulation.  Those variables are used
> > > > > > in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> > > > > > the action for flows.
> > > > > > 
> > > > > > A common way to use it:
> > > > > > 
> > > > > >     set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > > > > >     flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > > > > This way we can define only one tunnel for all the flows. This is not
> > > > > convenient for testing a scenario (e.g. multiport or switch) with multiple
> > > > > tunnels. Isn't it?
> > > > Hi Awal.
> > > > 
> > > > The "set vxlan" command will just configure the outer VXLAN tunnel to be
> > > > used, when the "flow" command is invoked, it will use the VXLAN tunnel
> > > > information and create a valid VXLAN_ENCAP action.  For instance:
> > > > 
> > > >    testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > > >    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > > >    testpmd> set vxlan ipv6 4 34 42 ::1 ::2222 80:12:13:14:15:16 22:22:22:22:22:22
> > > >    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > > > 
> > > > will create two VXLAN_ENCAP flows, one with an IPv4 tunnel, the second one
> > > > with an IPv6 one.  Whereas:
> > > > 
> > > >    testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > > >    testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.2.3.4 / end
> > > >    	actions vxlan_encap / queue index 0 / end
> > > >    testpmd> flow create 0 ingress pattern eth / ipv4 src is 20.2.3.4 / end
> > > >    	actions vxlan_encap / queue index 0 / end
> > > > 
> > > > will encapsulate the packets having IPv4 source IP 10.2.3.4 or
> > > > 20.2.3.4 with the same VXLAN tunnel headers.
> > > I understand that the same IPv4 tunnel will be used for both flows in your
> > > example above.  I have the following questions.
> > > 
> > > 1) How can we create two or more IPv4 (or IPv6) tunnels?
> > > 2) How can we make the flows use different IPv4 tunnels?
> > > As an example,
> > > 
> > >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> > >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
> > >   testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
> > >   testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
> > > 
> > Doing this, the flows will use the same tunnel; instead, you must do:
> > 
> >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
> >   testpmd> flow create 0 ingress pattern end actions vxlan_encap <first tunnel?> / queue index 0 / end
> >   testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 33:33:33:33:33:33 44:44:44:44:44:44
> >   testpmd> flow create 0 ingress pattern end actions vxlan_encap <second tunnel?> / queue index 0 / end
> > 
> > to have what you want.
> OK, thanks for the clarification. So, since there will be only one global
> instance of the tunnel, for any subsequent "set vxlan" operations, the
> tunnel created by the last operation will be used. Maybe it should be
> clarified in the description/documentation?

Will add it in the v5.
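
To make the semantics fully explicit: each vxlan_encap flow snapshots the
global configuration at the time the flow is created, because the parser
copies the "set vxlan" globals into per-flow storage.  A reduced model of
that mechanism, with hypothetical stand-in types instead of the real
rte_flow structures:

  #include <stdio.h>

  /* Hypothetical, reduced stand-ins for the rte_flow types. */
  struct item {
          const char *type;
          const void *spec;
  };

  struct encap_data {
          struct item *definition;  /* what action->conf points to */
          struct item items[2];
          unsigned int vni;  /* per-flow copy of the global value */
  };

  static unsigned int global_vni = 4;  /* written by "set vxlan ..." */

  static void
  parse_encap(struct encap_data *d)
  {
          /* Snapshot the globals into per-flow storage; the item list
           * points back into the same structure. */
          *d = (struct encap_data){
                  .definition = d->items,
                  .items = {
                          { .type = "vxlan", .spec = &d->vni },
                          { .type = "end", .spec = NULL },
                  },
                  .vni = global_vni,
          };
  }

  int
  main(void)
  {
          struct encap_data flow1, flow2;

          parse_encap(&flow1);
          global_vni = 42;  /* a later "set vxlan ..." */
          parse_encap(&flow2);
          /* Prints "flow1 vni=4 flow2 vni=42": flow1 is unaffected. */
          printf("flow1 vni=%u flow2 vni=%u\n", flow1.vni, flow2.vni);
          return 0;
  }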

> > > Is it possible?
> > Regards,
> > 
> 

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-26 12:48         ` Iremonger, Bernard
@ 2018-06-26 15:15           ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-26 15:15 UTC (permalink / raw)
  To: Iremonger, Bernard
  Cc: dev, Adrien Mazarguil, Lu, Wenzhuo, Wu, Jingjing, Awal,
	Mohammad Abdul, Ori Kam, Stephen Hemminger

Hi,

On Tue, Jun 26, 2018 at 12:48:42PM +0000, Iremonger, Bernard wrote:
> Hi Nelio,
> 
> > -----Original Message-----
> > From: Nelio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> > Sent: Thursday, June 21, 2018 8:14 AM
> > To: dev@dpdk.org; Adrien Mazarguil <adrien.mazarguil@6wind.com>; Lu,
> > Wenzhuo <wenzhuo.lu@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> > Iremonger, Bernard <bernard.iremonger@intel.com>; Awal, Mohammad Abdul
> > <mohammad.abdul.awal@intel.com>; Ori Kam <orika@mellanox.com>;
> > Stephen Hemminger <stephen@networkplumber.org>
> > Subject: [PATCH v4 2/2] app/testpmd: add NVGRE encap/decap support
> > 
> > Due to the complex NVGRE_ENCAP flow action and based on the fact testpmd
> > does not allocate memory, this patch adds a new command in testpmd to
> > initialise a global structure containing the necessary information to make the
> > outer layer of the packet.  This same global structure will then be used by the
> > flow command line in testpmd when the action nvgre_encap will be parsed, at
> > this point, the conversion into such action becomes trivial.
> > 
> > This global structure is only used for the encap action.
> > 
> > Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> > ---
> >  app/test-pmd/cmdline.c                      | 118 ++++++++++++++++++
> >  app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
> >  app/test-pmd/testpmd.c                      |  15 +++
> >  app/test-pmd/testpmd.h                      |  15 +++
> >  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  37 ++++++
> >  5 files changed, 314 insertions(+)
> 
> <snip>
> 
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index 2743043d3..17e0fef63 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
> >   testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
> >   testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> > 
> > +Config NVGRE Encap outer layers
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Configure the outer layer to encapsulate a packet inside a NVGRE tunnel::
> > +
> > + testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
> > + testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
> > +
> >  Port Functions
> >  --------------
> > 
> > @@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
> >  - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
> >    the VXLAN tunnel network overlay from the matched flow.
> > 
> > +- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration
> > +  is done through `Config NVGRE Encap outer layers`_.
> > +
> > +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
> > +  the NVGRE tunnel network overlay from the matched flow.
> > +
> >  Destroying flow rules
> >  ~~~~~~~~~~~~~~~~~~~~~
> > 
> > @@ -3950,6 +3964,29 @@ IPv6 VXLAN outer header::
> >    testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
> >    testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
> > 
> > +Sample NVGRE encapsulation rule
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +NVGRE encapsulation outer layer has default value pre-configured in testpmd
> > +source code, those can be changed by using the following commands::
> 
> make doc-guides-html
> sphinx processing guides-html...
> dpdk/doc/guides/testpmd_app_ug/testpmd_funcs.rst:3973: WARNING: Literal block expected; none found.

I will fix it in the v5, along with the VXLAN issue you pointed out in the
second patch.

> 
> > +
> > +IPv4 NVGRE outer header::
> > +
> > +  testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11
> > + 22:22:22:22:22:22  testpmd> flow create 0 ingress pattern end actions
> > + nvgre_encap / queue index 0 / end
> > +
> > +  testpmd> set nvgre-with-vlan 4 127.0.0.1 128.0.0.1 34
> > + 11:11:11:11:11:11 22:22:22:22:22:22  testpmd> flow create 0 ingress
> > + pattern end actions vxlan_encap / queue index 0 / end
> > +
> > +IPv6 NVGRE outer header::
> > +
> > +  testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11
> > + 22:22:22:22:22:22  testpmd> flow create 0 ingress pattern end actions
> > + vxlan_encap / queue index 0 / end
> > +
> > +  testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11
> > + 22:22:22:22:22:22  testpmd> flow create 0 ingress pattern end actions
> > + vxlan_encap / queue index 0 / end
> > +
> > +
> >  BPF Functions
> >  --------------
> > 
> > --
> > 2.18.0.rc2
> 
> Regards,
> 
> Bernard.
> 

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v5 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
                         ` (2 preceding siblings ...)
  2018-06-22  7:42       ` [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
@ 2018-06-27  8:53       ` Nelio Laranjeiro
  2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                           ` (2 more replies)
  3 siblings, 3 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  8:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

This series adds an easy and maintainable configuration version support for
those two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to easily
create the action for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

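For reference, the flows created above correspond roughly to the following
hand-built rte_flow objects, simplified from what patch 1 assembles in
parse_vc_action_vxlan_encap(); the *_spec variables stand for values taken
from the "set vxlan" globals and are left zeroed here:

  #include <rte_flow.h>

  static struct rte_flow_item_eth eth_spec;
  static struct rte_flow_item_ipv4 ipv4_spec;
  static struct rte_flow_item_udp udp_spec;
  static struct rte_flow_item_vxlan vxlan_spec;
  /* The encap "definition" is an item list describing the outer headers. */
  static struct rte_flow_item defs[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_spec },
          { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec },
          { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_spec },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  static struct rte_flow_action_vxlan_encap encap = { .definition = defs };
  static struct rte_flow_action_queue queue = { .index = 0 };
  static struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &encap },
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };
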
This also replaces the proposal made by Mohammad Abdul Awal [1], which
handled the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v5:

- fix documentation generation.
- add more explanation on how to generate several encapsulated flows.

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in RFC 7348 (see the sketch after this
  changelog).
- use default rte flow mask for each item.

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN to NVGRE in comments concerning NVGRE layer.

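As background for the UDP source port change in v4: RFC 7348 recommends
deriving the source port from a hash of the inner packet headers so that
ECMP and RSS can spread different inner flows carried by the same tunnel;
with udp-src left at 0 the driver or NIC is expected to fill it in.  A
sketch of one common derivation, purely illustrative and not what any
particular PMD does:

  #include <stdint.h>
  #include <stdio.h>

  /* Map an inner-flow hash into the dynamic port range (49152-65535)
   * suggested by RFC 7348 for the VXLAN UDP source port. */
  static uint16_t
  vxlan_src_port(uint32_t inner_flow_hash)
  {
          const uint32_t min = 49152, max = 65535;

          return (uint16_t)(min + inner_flow_hash % (max - min + 1));
  }

  int
  main(void)
  {
          printf("%u\n", vxlan_src_port(0xdeadbeefu));
          return 0;
  }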

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 252 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 268 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 +++
 app/test-pmd/testpmd.h                      |  32 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  82 ++++++
 5 files changed, 666 insertions(+)

-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v5 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
@ 2018-06-27  8:53         ` Nelio Laranjeiro
  2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  8:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the VXLAN_ENCAP flow action and based on the fact
that testpmd does not allocate memory, this patch adds a new command in
testpmd to initialise a global structure containing the necessary information
to build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the vxlan_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
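
The only subtle step in the conversion is the VNI: the command takes a
32-bit number while the VXLAN item carries a 3-byte field in network order.
A minimal sketch of the byte-order handling used by the command handler
below (set_vni is a hypothetical helper name):

 /* Store a host-order VNI into a 3-byte network-order field. */
 static void
 set_vni(uint8_t dst[3], uint32_t vni)
 {
 	union {
 		uint32_t vxlan_id;
 		uint8_t vni[4];
 	} id = {
 		/* Convert to network order, clear the unused top byte. */
 		.vxlan_id = rte_cpu_to_be_32(vni) & RTE_BE32(0x00ffffff),
 	};

 	/* Bytes 1..3 now hold the 24-bit VNI in network order. */
 	rte_memcpy(dst, &id.vni[1], 3);
 }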

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 134 +++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 139 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  17 +++
 app/test-pmd/testpmd.h                      |  17 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  40 ++++++
 5 files changed, 347 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..048fff2bd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" vlan-tci eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14846,130 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_vxlan_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
+		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17594,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..7823addb7 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,103 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = vxlan_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 63c2a5aca..4a18d043c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..698b83268 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,18 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
+These commands will set an internal configuration inside testpmd; any following
+flow rule using the vxlan_encap action will use the last configuration set.
+To use a different encapsulation header, one of these commands must be called
+before creating the flow rule.
 
 Port Functions
 --------------
@@ -3650,6 +3662,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3933,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VXLAN encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands:
+
+IPv4 VXLAN outer header::
+
+ testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+ testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v5 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
  2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-27  8:53         ` Nelio Laranjeiro
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  8:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the NVGRE_ENCAP flow action and based on the fact
that testpmd does not allocate memory, this patch adds a new command in
testpmd to initialise a global structure containing the necessary information
to build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the nvgre_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
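
Compared to the VXLAN definition, the item stack built here is one entry
shorter: NVGRE rides directly on the IP header (IP protocol 47), so there is
no UDP item.  A minimal sketch of the resulting layer order (specs and masks
are filled in by the parser below):

 static const enum rte_flow_item_type nvgre_stack[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,  /* turned into VOID unless select_vlan */
 	RTE_FLOW_ITEM_TYPE_IPV4,  /* or RTE_FLOW_ITEM_TYPE_IPV6 */
 	RTE_FLOW_ITEM_TYPE_NVGRE,
 	RTE_FLOW_ITEM_TYPE_END,
 };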

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 118 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 129 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  42 +++++++
 5 files changed, 319 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 048fff2bd..ad7f9eda5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" vlan-tci eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14970,6 +14976,116 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_nvgre_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ipv4|ipv6 <vni> <ip-src> <ip-dst>"
+		" <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17596,6 +17712,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7823addb7..fea9380c4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3090,6 +3131,94 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan.tci = nvgre_encap_conf.vlan_tci,
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4a18d043c..121685da3 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 698b83268..35d8b8d14 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1547,6 +1547,19 @@ flow rule using the action vxlan_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
+These commands will set an internal configuration inside testpmd; any following
+flow rule using the nvgre_encap action will use the last configuration set.
+To use a different encapsulation header, one of these commands must be called
+before creating the flow rule.
+
 Port Functions
 --------------
 
@@ -3668,6 +3681,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation, outer layer configuration
+  is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3955,6 +3974,29 @@ IPv6 VXLAN outer header::
  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The NVGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands:
+
+IPv4 NVGRE outer header::
+
+ testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ipv4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+ testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
  2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-27  9:53         ` Nelio Laranjeiro
  2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                             ` (3 more replies)
  2 siblings, 4 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  9:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

This series adds an easy and maintainable configuration support for those
two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to easily
create the action for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

This also replaces the proposal by Mohammad Abdul Awal [1], which handled
the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v6:

- fix compilation under Red Hat 7.5 with gcc 4.8.5 20150623.

Changes in v5:

- fix documentation generation.
- add more explanation on how to generate several encapsulated flows.

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in RFC 7348.
- use default rte flow mask for each item.

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN with NVGRE in comments concerning the NVGRE layer.

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 252 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 274 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 +++
 app/test-pmd/testpmd.h                      |  32 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  72 +++++
 5 files changed, 662 insertions(+)

-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v6 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
@ 2018-06-27  9:53           ` Nelio Laranjeiro
  2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  9:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the VXLAN_ENCAP flow action and based on the fact
that testpmd does not allocate memory, this patch adds a new command in
testpmd to initialise a global structure containing the necessary information
to build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the vxlan_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
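
The only functional difference from the v5 version is the item_vlan
initialiser: the chained designator form apparently fails to build with gcc
4.8.5 (see the v6 changelog), so the sub-object is now braced explicitly.  A
minimal sketch of both forms, with hypothetical structures standing in for
the rte_flow items:

 #include <stdint.h>

 struct vlan { uint16_t tci; uint16_t inner_type; };
 struct data { struct vlan item_vlan; };

 /* v5 form, valid C99 but rejected by some older compilers:
  * struct data d = { .item_vlan.tci = 1 };
  */

 /* v6 form: fully braced sub-object initialiser. */
 struct data d = { .item_vlan = { .tci = 1, .inner_type = 0 } };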

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 134 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 142 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  17 +++
 app/test-pmd/testpmd.h                      |  17 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  35 +++++
 5 files changed, 345 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..048fff2bd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" vlan-tci eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14846,130 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_vxlan_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
+		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17594,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..a99fd0048 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = vxlan_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 63c2a5aca..4a18d043c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..2743043d3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,13 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
 Port Functions
 --------------
@@ -3650,6 +3657,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3928,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VXLAN encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands:
+
+IPv4 VXLAN outer header::
+
+  testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+  testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+  testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v6 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-27  9:53           ` Nelio Laranjeiro
  2018-06-27 10:00           ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nélio Laranjeiro
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
  3 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27  9:53 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the NVGRE_ENCAP flow action and based on the fact
that testpmd does not allocate memory, this patch adds a new command in
testpmd to initialise a global structure containing the necessary information
to build the outer layer of the packet.  This same global structure is then
used by the flow command line in testpmd when the nvgre_encap action is
parsed; at that point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
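
While the global structure only feeds the encap action, the decap side needs
no configuration at all.  An illustrative (untested) counterpart rule that
strips the NVGRE overlay from matched traffic:

 testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions nvgre_decap / queue index 0 / end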

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 app/test-pmd/cmdline.c                      | 118 +++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 132 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  37 ++++++
 5 files changed, 317 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 048fff2bd..ad7f9eda5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" vlan-tci eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14970,6 +14976,116 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_nvgre_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ipv4|ipv6 <vni> <ip-src> <ip-dst>"
+		" <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17596,6 +17712,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a99fd0048..f9260600e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = nvgre_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4a18d043c..121685da3 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2743043d3..17e0fef63 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1542,6 +1542,14 @@ Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
  testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
  testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
 Port Functions
 --------------
 
@@ -3663,6 +3671,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation; the outer layer is
+  configured through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3950,6 +3964,29 @@ IPv6 VXLAN outer header::
   testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
   testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The NVGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands.
+
+IPv4 NVGRE outer header::
+
+  testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+  testpmd> set nvgre-with-vlan ipv4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+  testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+  testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+  testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
  2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-06-27 10:00           ` Nélio Laranjeiro
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
  3 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-06-27 10:00 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Sorry, I've messed up my local branches.  I will send a v7 which only
fixes the compilation issues on Red Hat.

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
                             ` (2 preceding siblings ...)
  2018-06-27 10:00           ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nélio Laranjeiro
@ 2018-06-27 11:45           ` Nelio Laranjeiro
  2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                               ` (3 more replies)
  3 siblings, 4 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27 11:45 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

This series adds an easy and maintainable configuration version support for
those two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to easily
create the action for flows.

A common way to use it:

 set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end

 set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

 set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
 flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end

This also replaces the proposal done by Mohammad Abdul Awal [1], which
handled the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v7:

- add missing documentation added in v5 and removed in v6 by mistake.

Changes in v6:

- fix compilation under Red Hat 7.5 with gcc 4.8.5 20150623.

Changes in v5:

- fix documentation generation.
- add more explanation on how to generate several encapsulated flows.

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in RFC 7348 (see the sketch after this
  list).
- use default rte flow mask for each item.
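
For context on the v4 source-port change above: RFC 7348 recommends
deriving the outer UDP source port from a hash of the inner packet
headers, within the dynamic port range, so ECMP can spread tunnelled
traffic.  A hedged sketch of one such derivation (range and hash input
are illustrative, not what any particular PMD does):

  #include <stdint.h>

  /* Map an inner-flow hash to the dynamic port range 49152-65535. */
  static uint16_t
  vxlan_udp_src_port(uint32_t inner_flow_hash)
  {
      return (uint16_t)(49152 + inner_flow_hash % (65535 - 49152 + 1));
  }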

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN with NVGRE in comments concerning the NVGRE layer.

Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 252 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 274 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 +++
 app/test-pmd/testpmd.h                      |  32 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  82 ++++++
 5 files changed, 672 insertions(+)

-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v7 1/2] app/testpmd: add VXLAN encap/decap support
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
@ 2018-06-27 11:45             ` Nelio Laranjeiro
  2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
                               ` (2 subsequent siblings)
  3 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27 11:45 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complex VXLAN_ENCAP flow action and based on the fact testpmd
does not allocate memory, this patch adds a new command in testpmd to
initialise a global structure containing the necessary information to
make the outer layer of the packet.  This same global structure will
then be used by the flow command line in testpmd when the action
vxlan_encap will be parsed, at this point, the conversion into such
action becomes trivial.

This global structure is only used for the encap action.
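
To make that conversion concrete: RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
carries, through its definition pointer, a list of flow items that
describe the outer headers, terminated by an END item.  A hedged sketch
of building such an action by hand, outside testpmd (the addresses,
port and VNI below are illustrative):

  #include <rte_flow.h>
  #include <rte_byteorder.h>

  static struct rte_flow_item_eth eth; /* zeroed: addresses unset */
  static struct rte_flow_item_ipv4 ip4 = {
      .hdr = {
          .src_addr = RTE_BE32(0x7f000001), /* 127.0.0.1 */
          .dst_addr = RTE_BE32(0x80000001), /* 128.0.0.1 */
      },
  };
  static struct rte_flow_item_udp udp = {
      .hdr = { .dst_port = RTE_BE16(4789) }, /* src 0: PMD derives it */
  };
  static struct rte_flow_item_vxlan vxlan = { .vni = "\x00\x00\x04" };
  static struct rte_flow_item items[] = {
      { .type = RTE_FLOW_ITEM_TYPE_ETH,   .spec = &eth },
      { .type = RTE_FLOW_ITEM_TYPE_IPV4,  .spec = &ip4 },
      { .type = RTE_FLOW_ITEM_TYPE_UDP,   .spec = &udp },
      { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  static struct rte_flow_action_vxlan_encap conf = { .definition = items };
  static struct rte_flow_action action = {
      .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
      .conf = &conf,
  };

The patch below builds the same item list dynamically from the global
vxlan_encap_conf instead of static data.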

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 134 ++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 142 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  17 +++
 app/test-pmd/testpmd.h                      |  17 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  40 ++++++
 5 files changed, 350 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..048fff2bd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ipv4|ipv6 vni udp-src udp-dst ip-src ip-dst"
+			" vlan-tci eth-src eth-dst\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14846,130 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_vxlan_vni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_vxlan_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ipv4|ipv6 <vni> <udp-src> <udp-dst> <ip-src>"
+		" <ip-dst> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ipv4|ipv6 <vni> <udp-src> <udp-dst>"
+		" <ip-src> <ip-dst> <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17594,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..a99fd0048 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = vxlan_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 63c2a5aca..4a18d043c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -393,6 +393,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..698b83268 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,18 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ testpmd> set vxlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set vxlan-with-vlan ipv4|ipv6 (vni) (udp-src) (udp-dst) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the vxlan_encap action will use the last configuration set.
+To use a different encapsulation header, one of these commands must be
+called before creating the flow rule.
 
 Port Functions
 --------------
@@ -3650,6 +3662,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation; the outer layer is
+  configured through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3933,28 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VXLAN encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands.
+
+IPv4 VXLAN outer header::
+
+ testpmd> set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ipv4 4 4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+ testpmd> set vxlan ipv6 4 4 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v7 2/2] app/testpmd: add NVGRE encap/decap support
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
  2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-06-27 11:45             ` Nelio Laranjeiro
  2018-07-02 10:40             ` [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
  3 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-06-27 11:45 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complex NVGRE_ENCAP flow action and based on the fact testpmd
does not allocate memory, this patch adds a new command in testpmd to
initialise a global structure containing the necessary information to
make the outer layer of the packet.  This same global structure will
then be used by the flow command line in testpmd when the action
nvgre_encap will be parsed, at this point, the conversion into such
action becomes trivial.

This global structure is only used for the encap action.
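
One pattern in the parser below worth noting: the definition array is
sized for the full ETH/VLAN/IP/NVGRE chain, and the optional VLAN slot
is disabled by rewriting it to a VOID item instead of compacting the
array.  A minimal hedged sketch of that pattern (names illustrative):

  #include <rte_flow.h>

  /* Disable the optional VLAN slot (index 1) in a pre-built item
   * array; VOID items are ignored when the rule is processed. */
  static void
  disable_vlan_item(struct rte_flow_item *items, int select_vlan)
  {
      if (!select_vlan)
          items[1].type = RTE_FLOW_ITEM_TYPE_VOID;
  }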

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 118 +++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 132 ++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 +++
 app/test-pmd/testpmd.h                      |  15 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  42 +++++++
 5 files changed, 322 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 048fff2bd..ad7f9eda5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -789,6 +789,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" vlan-tci eth-src eth-dst\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ipv4|ipv6 tni ip-src ip-dst eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ipv4|ipv6 tni ip-src ip-dst vlan-tci eth-src eth-dst\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14970,6 +14976,116 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_num_t cmd_set_nvgre_tni =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_num_t cmd_set_nvgre_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ipv4|ipv6 <vni> <ip-src> <ip-dst> <eth-src>"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ipv4|ipv6 <vni> <ip-src> <ip-dst>"
+		" <vlan-tci> <eth-src> <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_dst,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17596,6 +17712,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a99fd0048..f9260600e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = nvgre_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4a18d043c..121685da3 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -410,6 +410,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 698b83268..35d8b8d14 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1547,6 +1547,19 @@ flow rule using the action vxlan_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ testpmd> set nvgre ipv4|ipv6 (tni) (ip-src) (ip-dst) (mac-src) (mac-dst)
+ testpmd> set nvgre-with-vlan ipv4|ipv6 (tni) (ip-src) (ip-dst) (vlan-tci) (mac-src) (mac-dst)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the nvgre_encap action will use the last configuration set.
+To use a different encapsulation header, one of these commands must be
+called before creating the flow rule.
+
 Port Functions
 --------------
 
@@ -3668,6 +3681,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation; the outer layer is
+  configured through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3955,6 +3974,29 @@ IPv6 VXLAN outer header::
  testpmd> set vxlan-with-vlan ipv6 4 4 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
  testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The NVGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands.
+
+IPv4 NVGRE outer header::
+
+ testpmd> set nvgre ipv4 4 127.0.0.1 128.0.0.1 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ipv4 4 127.0.0.1 128.0.0.1 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+ testpmd> set nvgre ipv6 4 ::1 ::2222 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ipv6 4 ::1 ::2222 34 11:11:11:11:11:11 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap / queue index 0 / end
+
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
  2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-07-02 10:40             ` Mohammad Abdul Awal
  2018-07-04 14:54               ` Ferruh Yigit
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
  3 siblings, 1 reply; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-07-02 10:40 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger
  Cc: Ori Kam


On 27/06/2018 12:45, Nelio Laranjeiro wrote:
> [...]


Hi,

I have one concern in terms of usability though.
In testpmd, the rte_flow command line options have auto-completion with
the "<item_name> <item_name_value>" format, which makes the commands
very user-friendly.

For the command "set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1
11:11:11:11:11:11 22:22:22:22:22:22", it does not look very
user-friendly to me. A user may easily lose track of a sequence of 9
parameters. It would be much more user-friendly if the options looked
like below and had auto-completion.

set vxlan ip_ver <ip_ver-value> vni <vni-value> udp_src <udp_src-value>
udp_dst <udp_dst-value> ip_src <ip_src-value> ip_dst <ip_dst-value>
eth_src <eth_src-value> eth_dst <eth_dst-value>

This way a user would never feel confused. Can the maintainers comment
on this point, please?

Regards,
Awal.

^ permalink raw reply	[flat|nested] 63+ messages in thread
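
For illustration, the key-value syntax proposed above maps naturally onto the
librte_cmdline API that testpmd already uses: each fixed keyword is declared
as its own string token, which is what drives auto-completion. A minimal
sketch with hypothetical names (the v8 patch further down implements the real
commands):

 #include <cmdline_parse.h>
 #include <cmdline_parse_string.h>
 #include <cmdline_parse_num.h>

 struct cmd_example_result {
 	cmdline_fixed_string_t pos_token; /* fixed keyword, e.g. "vni" */
 	uint32_t vni;                     /* value following the keyword */
 };

 /* The keyword is a fixed string token: the parser can complete it. */
 static cmdline_parse_token_string_t cmd_example_vni =
 	TOKEN_STRING_INITIALIZER(struct cmd_example_result, pos_token, "vni");
 /* The value itself is a plain number token. */
 static cmdline_parse_token_num_t cmd_example_vni_value =
 	TOKEN_NUM_INITIALIZER(struct cmd_example_result, vni, UINT32);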

* Re: [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-02 10:40             ` [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
@ 2018-07-04 14:54               ` Ferruh Yigit
  2018-07-05  9:37                 ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Ferruh Yigit @ 2018-07-04 14:54 UTC (permalink / raw)
  To: Mohammad Abdul Awal, Nelio Laranjeiro, dev, Adrien Mazarguil,
	Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
  Cc: Ori Kam

On 7/2/2018 11:40 AM, Mohammad Abdul Awal wrote:
> 
> On 27/06/2018 12:45, Nelio Laranjeiro wrote:
>> [...]
> 
> Hi,
> 
> I have one concern in terms of usability though.
> In testpmd, the rte_flow command line options have auto-completion with
> the "<item_name> <item_name_value>" format, which makes the commands
> very user-friendly.
> 
> For the command "set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1
> 11:11:11:11:11:11 22:22:22:22:22:22", it does not look very
> user-friendly to me. A user may easily lose track of a sequence of 9
> parameters. It would be much more user-friendly if the options looked
> like below and had auto-completion.
> 
> set vxlan ip_ver <ip_ver-value> vni <vni-value> udp_src <udp_src-value>
> udp_dst <udp_dst-value> ip_src <ip_src-value> ip_dst <ip_dst-value>
> eth_src <eth_src-value> eth_dst <eth_dst-value>

Hi Nelio, Adrien,

I tend to agree with Awal here: the positional form is easy to forget or
confuse, and key-value pairs make the command easier to use.

Meanwhile, this is a usability improvement and I prefer not to block this
patch on it.

What is your comment on this, and how should we proceed?

Thanks,
ferruh

> 
> This way a user would never feel confused. Can the maintainers comment
> on this point, please?
> 
> Regards,
> Awal.
> 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-04 14:54               ` Ferruh Yigit
@ 2018-07-05  9:37                 ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-07-05  9:37 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Mohammad Abdul Awal, dev, Adrien Mazarguil, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Stephen Hemminger, Ori Kam

On Wed, Jul 04, 2018 at 03:54:32PM +0100, Ferruh Yigit wrote:
> On 7/2/2018 11:40 AM, Mohammad Abdul Awal wrote:
> > 
> > On 27/06/2018 12:45, Nelio Laranjeiro wrote:
> >> [...]
> > 
> > 
> > Hi,
> > 
> > I have one concern in terms of usability though.
> > In testpmd, the rte_flow command line options have auto-completion with
> > the "<item_name> <item_name_value>" format, which makes the commands
> > very user-friendly.
> > 
> > For the command "set vxlan ipv4 4 4 4 127.0.0.1 128.0.0.1
> > 11:11:11:11:11:11 22:22:22:22:22:22", it does not look very
> > user-friendly to me. A user may easily lose track of a sequence of 9
> > parameters. It would be much more user-friendly if the options looked
> > like below and had auto-completion.
> > 
> > set vxlan ip_ver <ip_ver-value> vni <vni-value> udp_src <udp_src-value>
> > udp_dst <udp_dst-value> ip_src <ip_src-value> ip_dst <ip_dst-value>
> > eth_src <eth_src-value> eth_dst <eth_dst-value>
> 
> Hi Nelio, Adrien,
> 
> I tend to agree with Awal here: the positional form is easy to forget or
> confuse, and key-value pairs make the command easier to use.
> 
> Meanwhile, this is a usability improvement and I prefer not to block this
> patch on it.
> 
> What is your comment on this, and how should we proceed?
> 
> Thanks,
> ferruh

Hi,

I also agree with this proposal; I'll prepare a v8 with those fixed
tokens.

> > This way a user would never feel confused. Can the maintainers comment
> > on this point, please?
> > 
> > Regards,
> > Awal.

Thanks

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
                               ` (2 preceding siblings ...)
  2018-07-02 10:40             ` [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
@ 2018-07-05 14:33             ` Nelio Laranjeiro
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                                 ` (4 more replies)
  3 siblings, 5 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-05 14:33 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

This series adds an easy and maintainable configuration version support for
those two actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are used
in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
the action for flows.

A common way to use it:

 set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
        ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
        queue index 0 / end

 set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
         127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
         eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end

 set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
        ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end

 set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
         ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
         eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end

This also replaces the proposal done by Mohammad Abdul Awal [1], which
handled the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v8:

- add static tokens in the command line to be user friendly.

Changes in v7:

- add missing documentation added in v5 and removed in v6 by mistake.

Changes in v6:

- fix compilation under redhat 7.5 with gcc 4.8.5 20150623

Changes in v5:

- fix documentation generation.
- add more explanation on how to generate several encapsulated flows.

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in the RFC 7348.
- use default rte flow mask for each item.

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN to NVGRE in comments concerning NVGRE layer.


Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 345 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 274 ++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 ++
 app/test-pmd/testpmd.h                      |  32 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 107 ++++++
 5 files changed, 790 insertions(+)

-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
@ 2018-07-05 14:33               ` Nelio Laranjeiro
  2018-07-05 15:03                 ` Mohammad Abdul Awal
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
                                 ` (3 subsequent siblings)
  4 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-05 14:33 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the VXLAN_ENCAP flow action, and because testpmd
does not allocate memory, this patch adds a new command in testpmd to
initialise a global structure containing the necessary information to
build the outer layer of the packet.  This same global structure will
then be used by the flow command line in testpmd when the vxlan_encap
action is parsed; at that point, the conversion into such an action
becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 185 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 142 +++++++++++++++
 app/test-pmd/testpmd.c                      |  17 ++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++++
 5 files changed, 416 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..56bdb023c 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,17 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ip-version (ipv4|ipv6) vni (vni) udp-src"
+			" (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst"
+			" (ip-dst) eth-src (eth-src) eth-dst (eth-dst)\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni)"
+			" udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src)"
+			" ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src)"
+			" eth-dst (eth-dst)\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14849,178 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_vxlan_vni =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "vni");
+cmdline_parse_token_num_t cmd_set_vxlan_vni_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_string_t cmd_set_vxlan_udp_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "udp-src");
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_udp_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "udp-dst");
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_string_t cmd_set_vxlan_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_vxlan_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_vxlan_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_string_t cmd_set_vxlan_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ip-version ipv4|ipv6 vni <vni> udp-src"
+		" <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst <ip-dst>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_ip_version_value,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_vni_value,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_src_value,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_udp_dst_value,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_src_value,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_ip_dst_value,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_src_value,
+		(void *)&cmd_set_vxlan_eth_dst,
+		(void *)&cmd_set_vxlan_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ip-version ipv4|ipv6 vni <vni>"
+		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst"
+		" <ip-dst> vlan-tci <vlan-tci> eth-src <eth-src> eth-dst"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_ip_version_value,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_vni_value,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_src_value,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_udp_dst_value,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_src_value,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_ip_dst_value,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_vlan_value,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_src_value,
+		(void *)&cmd_set_vxlan_eth_dst,
+		(void *)&cmd_set_vxlan_eth_dst_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17645,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..a99fd0048 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = vxlan_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index dde7d43e3..bf39ac3ff 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -392,6 +392,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..3281778d9 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,23 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ set vxlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
+ udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) \
+ eth-dst (eth-dst)
+
+ set vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
+ udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
+ eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action vxlan_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
 
 Port Functions
 --------------
@@ -3650,6 +3667,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration
+  is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3938,38 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VXLAN encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands.
+
+IPv4 VXLAN outer header::
+
+ testpmd> set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
+        ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+        queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
+         127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+ testpmd> set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
+        ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
+         ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread
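
The action built by the patch above can also be constructed directly through
the rte_flow API. A rough sketch under that assumption, with header field
values elided (a VLAN item would be inserted after ETH when configured, as
the patch does):

 #include <rte_flow.h>

 /* Outer headers; fill the hdr fields from the configured addresses. */
 static struct rte_flow_item_eth eth;
 static struct rte_flow_item_ipv4 ipv4;
 static struct rte_flow_item_udp udp;
 static struct rte_flow_item_vxlan vxlan;

 /* Same item sequence the patch builds: ETH / IPV4 / UDP / VXLAN / END. */
 static struct rte_flow_item def[] = {
 	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth,
 	  .mask = &rte_flow_item_eth_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4,
 	  .mask = &rte_flow_item_ipv4_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp,
 	  .mask = &rte_flow_item_udp_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan,
 	  .mask = &rte_flow_item_vxlan_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_END },
 };

 static const struct rte_flow_action_vxlan_encap vxlan_encap = {
 	.definition = def,
 };

 static const struct rte_flow_action actions[] = {
 	{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap },
 	{ .type = RTE_FLOW_ACTION_TYPE_END },
 };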

* [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE encap/decap support
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-07-05 14:33               ` Nelio Laranjeiro
  2018-07-05 15:07                 ` Mohammad Abdul Awal
  2018-07-05 14:48               ` [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Adrien Mazarguil
                                 ` (2 subsequent siblings)
  4 siblings, 1 reply; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-05 14:33 UTC (permalink / raw)
  To: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Mohammad Abdul Awal, Stephen Hemminger
  Cc: Ori Kam

Due to the complexity of the NVGRE_ENCAP flow action, and because testpmd
does not allocate memory, this patch adds a new command in testpmd to
initialise a global structure containing the necessary information to
build the outer layer of the packet.  This same global structure will
then be used by the flow command line in testpmd when the nvgre_encap
action is parsed; at that point, the conversion into such an action
becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 160 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 132 ++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 ++
 app/test-pmd/testpmd.h                      |  15 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  52 +++++++
 5 files changed, 374 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 56bdb023c..1b3fa1647 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -792,6 +792,16 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" eth-dst (eth-dst)\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ip-version (ipv4|ipv6) tni (tni) ip-src"
+			" (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst"
+			" (eth-dst)\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)"
+			" ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)"
+			" eth-src (eth-src) eth-dst (eth-dst)\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -15021,6 +15031,154 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
+				 "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_nvgre_tni =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "tni");
+cmdline_parse_token_num_t cmd_set_nvgre_tni_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_string_t cmd_set_nvgre_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_string_t cmd_set_nvgre_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_nvgre_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_nvgre_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_nvgre_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_string_t cmd_set_nvgre_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src"
+		" (ip-src) ip-dst (ip-dst) eth-src (eth-src)"
+		" eth-dst (eth-dst)",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_ip_version_value,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_tni_value,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_src_value,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_ip_dst_value,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_src_value,
+		(void *)&cmd_set_nvgre_eth_dst,
+		(void *)&cmd_set_nvgre_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)"
+		" ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)"
+		" eth-src (eth-src) eth-dst (eth-dst)",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_ip_version_value,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_tni_value,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_src_value,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_ip_dst_value,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_vlan_value,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_src_value,
+		(void *)&cmd_set_nvgre_eth_dst,
+		(void *)&cmd_set_nvgre_eth_dst_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17647,6 +17805,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a99fd0048..f9260600e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = nvgre_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+		       .src_addr = nvgre_encap_conf.ipv4_src,
+		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index bf39ac3ff..dbba7d253 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -409,6 +409,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 3281778d9..94d8d38c7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1552,6 +1552,21 @@ flow rule using the action vxlan_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) ip-dst (ip-dst) \
+        eth-src (eth-src) eth-dst (eth-dst)
+ set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) \
+        ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action nvgre_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
 Port Functions
 --------------
 
@@ -3673,6 +3688,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation, outer layer configuration
+  is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3970,6 +3991,37 @@ IPv6 VXLAN outer header::
  testpmd> flow create 0 ingress pattern end actions vxlan_encap /
          queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The NVGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed by using the following commands.
+
+IPv4 NVGRE outer header::
+
+ testpmd> set nvgre ip-version ipv4 tni 4 ip-src 127.0.0.1 ip-dst 128.0.0.1
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ip-version ipv4 tni 4 ip-src 127.0.0.1
+         ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+         queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+ testpmd> set nvgre ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
+        vlan-tci 34 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread
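
Both commands convert the 24-bit VNI/TNI the same way (this is the big endian
fix noted in the v4 changelog). A standalone sketch of that conversion,
assuming only rte_byteorder.h:

 #include <stdint.h>
 #include <string.h>
 #include <rte_byteorder.h>

 /* Convert a host-order 24-bit id into the 3-byte network-order array used
  * by the VXLAN vni[] and NVGRE tni[] items: byte 0 of the big-endian word
  * is the masked-off top byte, so bytes 1..3 carry the id. */
 static void
 id_to_net24(uint32_t id, uint8_t out[3])
 {
 	union {
 		uint32_t word;
 		uint8_t bytes[4];
 	} conv = {
 		.word = rte_cpu_to_be_32(id) & RTE_BE32(0x00ffffff),
 	};

 	memcpy(out, &conv.bytes[1], 3);
 }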

* Re: [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-07-05 14:48               ` Adrien Mazarguil
  2018-07-05 14:57               ` Mohammad Abdul Awal
  2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
  4 siblings, 0 replies; 63+ messages in thread
From: Adrien Mazarguil @ 2018-07-05 14:48 UTC (permalink / raw)
  To: Nelio Laranjeiro
  Cc: dev, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger,
	Mohammad Abdul Awal, Stephen Hemminger, Ori Kam

On Thu, Jul 05, 2018 at 04:33:08PM +0200, Nelio Laranjeiro wrote:
> This series adds an easy and maintainable configuration version support for
> those two actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are used
> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> the action for flows.
> 
> A common way to use it:
> 
>  set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 27.0.0.1
>         ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>         queue index 0 / end
> 
>  set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
>          127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
>          eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
>  set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
>         ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
>  set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
>          ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
>          eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
> This also replaces the proposal done by Mohammad Abdul Awal [1], which
> handles the same work in a more complex way.
> 
> Note this API already has a modification planned for 18.11 [2], thus this
> series should have a limited life of a single release.
> 
> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> 
> Changes in v8:
> 
> - add static tokens in the command line to be user friendly.

Looks good to me,

Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
                                 ` (2 preceding siblings ...)
  2018-07-05 14:48               ` [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Adrien Mazarguil
@ 2018-07-05 14:57               ` Mohammad Abdul Awal
  2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
  4 siblings, 0 replies; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-07-05 14:57 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger
  Cc: Ori Kam



On 05/07/2018 15:33, Nelio Laranjeiro wrote:
> This series adds easy and maintainable configuration support for those two
> actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are
> used in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to
> easily create the action for flows.
>
> A common way to use it:
>
>   set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
>          ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>   flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
>
>   set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
>           127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
>           eth-dst 22:22:22:22:22:22
>   flow create 0 ingress pattern end actions vxlan_encap /
>           queue index 0 / end
>
>   set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
>          ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>   flow create 0 ingress pattern end actions vxlan_encap /
>           queue index 0 / end
>
>   set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
>           ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
>           eth-dst 22:22:22:22:22:22
>   flow create 0 ingress pattern end actions vxlan_encap /
>           queue index 0 / end
>
> This also replaces the proposal done by Mohammad Abdul Awal [1], which
> handles the same work in a more complex way.
>
> Note this API already has a modification planned for 18.11 [2], thus this
> series should have a limited life of a single release.
>
> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
>
> Changes in v8:
>
> - add static tokens in the command line to be user friendly.
>
> Changes in v7:
>
> - add missing documentation added in v5 and removed in v6 by mistake.
>
> Changes in v6:
>
> - fix compilation under Red Hat 7.5 with GCC 4.8.5 20150623.
>
> Changes in v5:
>
> - fix documentation generation.
> - add more explanation on how to generate several encapsulated flows.
>
> Changes in v4:
>
> - fix big endian issue on vni and tni.
> - add samples to the documentation.
> - set the VXLAN UDP source port to 0 by default to let the driver generate it
>    from the inner hash as described in RFC 7348.
> - use default rte flow mask for each item.
>
> Changes in v3:
>
> - support VLAN in the outer encapsulation.
> - fix the documentation with missing arguments.
>
> Changes in v2:
>
> - add default IPv6 values for NVGRE encapsulation.
> - replace VXLAN with NVGRE in comments concerning the NVGRE layer.
>
>
> Nelio Laranjeiro (2):
>    app/testpmd: add VXLAN encap/decap support
>    app/testpmd: add NVGRE encap/decap support
>
>   app/test-pmd/cmdline.c                      | 345 ++++++++++++++++++++
>   app/test-pmd/cmdline_flow.c                 | 274 ++++++++++++++++
>   app/test-pmd/testpmd.c                      |  32 ++
>   app/test-pmd/testpmd.h                      |  32 ++
>   doc/guides/testpmd_app_ug/testpmd_funcs.rst | 107 ++++++
>   5 files changed, 790 insertions(+)
>

Looks good to me.

Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-07-05 15:03                 ` Mohammad Abdul Awal
  0 siblings, 0 replies; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-07-05 15:03 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger
  Cc: Ori Kam



On 05/07/2018 15:33, Nelio Laranjeiro wrote:
> Due to the complex VXLAN_ENCAP flow action and based on the fact that
> testpmd does not allocate memory, this patch adds a new command in testpmd
> to initialise a global structure containing the necessary information to
> build the outer layer of the packet.  This same global structure will then
> be used by the flow command line in testpmd when the action vxlan_encap is
> parsed; at this point, the conversion into such an action becomes trivial.
>
> This global structure is only used for the encap action.
>
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> Acked-by: Ori Kam <orika@mellanox.com>
> ---
>   app/test-pmd/cmdline.c                      | 185 ++++++++++++++++++++
>   app/test-pmd/cmdline_flow.c                 | 142 +++++++++++++++
>   app/test-pmd/testpmd.c                      |  17 ++
>   app/test-pmd/testpmd.h                      |  17 ++
>   doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++++
>   5 files changed, 416 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 27e2aa8c8..56bdb023c 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -781,6 +781,17 @@ static void cmd_help_long_parsed(void *parsed_result,
>   			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
>   			"	Commit tm hierarchy.\n\n"
>   
> +			"vxlan ip-version (ipv4|ipv6) vni (vni) udp-src"
> +			" (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst"
> +			" (ip-dst) eth-src (eth-src) eth-dst (eth-dst)\n"
> +			"       Configure the VXLAN encapsulation for flows.\n\n"
> +
> +			"vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni)"
> +			" udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src)"
> +			" ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src)"
> +			" eth-dst (eth-dst)\n"
> +			"       Configure the VXLAN encapsulation for flows.\n\n"
> +
>   			, list_pkt_forwarding_modes()
>   		);
>   	}
> @@ -14838,6 +14849,178 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
>   };
>   #endif
>   
> +/** Set VXLAN encapsulation details */
> +struct cmd_set_vxlan_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t vxlan;
> +	cmdline_fixed_string_t pos_token;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t vlan_present:1;
> +	uint32_t vni;
> +	uint16_t udp_src;
> +	uint16_t udp_dst;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	uint16_t tci;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_vxlan_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
> +cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
> +cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
> +				 "vxlan-with-vlan");
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "ip-version");
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_version_value =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_string_t cmd_set_vxlan_vni =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "vni");
> +cmdline_parse_token_num_t cmd_set_vxlan_vni_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
> +cmdline_parse_token_string_t cmd_set_vxlan_udp_src =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "udp-src");
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_src_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
> +cmdline_parse_token_string_t cmd_set_vxlan_udp_dst =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "udp-dst");
> +cmdline_parse_token_num_t cmd_set_vxlan_udp_dst_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_src =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "ip-src");
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src_value =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
> +cmdline_parse_token_string_t cmd_set_vxlan_ip_dst =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "ip-dst");
> +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst_value =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
> +cmdline_parse_token_string_t cmd_set_vxlan_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "vlan-tci");
> +cmdline_parse_token_num_t cmd_set_vxlan_vlan_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
> +cmdline_parse_token_string_t cmd_set_vxlan_eth_src =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "eth-src");
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src_value =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
> +cmdline_parse_token_string_t cmd_set_vxlan_eth_dst =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
> +				 "eth-dst");
> +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst_value =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
> +
> +static void cmd_set_vxlan_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_vxlan_result *res = parsed_result;
> +	union {
> +		uint32_t vxlan_id;
> +		uint8_t vni[4];
> +	} id = {
> +		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
> +	};
> +
> +	if (strcmp(res->vxlan, "vxlan") == 0)
> +		vxlan_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
> +		vxlan_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		vxlan_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		vxlan_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
> +	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
> +	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
> +	if (vxlan_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
> +	}
> +	if (vxlan_encap_conf.select_vlan)
> +		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_vxlan = {
> +	.f = cmd_set_vxlan_parsed,
> +	.data = NULL,
> +	.help_str = "set vxlan ip-version ipv4|ipv6 vni <vni> udp-src"
> +		" <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst <ip-dst>"
> +		" eth-src <eth-src> eth-dst <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_vxlan_set,
> +		(void *)&cmd_set_vxlan_vxlan,
> +		(void *)&cmd_set_vxlan_ip_version,
> +		(void *)&cmd_set_vxlan_ip_version_value,
> +		(void *)&cmd_set_vxlan_vni,
> +		(void *)&cmd_set_vxlan_vni_value,
> +		(void *)&cmd_set_vxlan_udp_src,
> +		(void *)&cmd_set_vxlan_udp_src_value,
> +		(void *)&cmd_set_vxlan_udp_dst,
> +		(void *)&cmd_set_vxlan_udp_dst_value,
> +		(void *)&cmd_set_vxlan_ip_src,
> +		(void *)&cmd_set_vxlan_ip_src_value,
> +		(void *)&cmd_set_vxlan_ip_dst,
> +		(void *)&cmd_set_vxlan_ip_dst_value,
> +		(void *)&cmd_set_vxlan_eth_src,
> +		(void *)&cmd_set_vxlan_eth_src_value,
> +		(void *)&cmd_set_vxlan_eth_dst,
> +		(void *)&cmd_set_vxlan_eth_dst_value,
> +		NULL,
> +	},
> +};
> +
> +cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
> +	.f = cmd_set_vxlan_parsed,
> +	.data = NULL,
> +	.help_str = "set vxlan-with-vlan ip-version ipv4|ipv6 vni <vni>"
> +		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst"
> +		" <ip-dst> vlan-tci <vlan-tci> eth-src <eth-src> eth-dst"
> +		" <eth-dst>",
> +	.tokens = {
> +		(void *)&cmd_set_vxlan_set,
> +		(void *)&cmd_set_vxlan_vxlan_with_vlan,
> +		(void *)&cmd_set_vxlan_ip_version,
> +		(void *)&cmd_set_vxlan_ip_version_value,
> +		(void *)&cmd_set_vxlan_vni,
> +		(void *)&cmd_set_vxlan_vni_value,
> +		(void *)&cmd_set_vxlan_udp_src,
> +		(void *)&cmd_set_vxlan_udp_src_value,
> +		(void *)&cmd_set_vxlan_udp_dst,
> +		(void *)&cmd_set_vxlan_udp_dst_value,
> +		(void *)&cmd_set_vxlan_ip_src,
> +		(void *)&cmd_set_vxlan_ip_src_value,
> +		(void *)&cmd_set_vxlan_ip_dst,
> +		(void *)&cmd_set_vxlan_ip_dst_value,
> +		(void *)&cmd_set_vxlan_vlan,
> +		(void *)&cmd_set_vxlan_vlan_value,
> +		(void *)&cmd_set_vxlan_eth_src,
> +		(void *)&cmd_set_vxlan_eth_src_value,
> +		(void *)&cmd_set_vxlan_eth_dst,
> +		(void *)&cmd_set_vxlan_eth_dst_value,
> +		NULL,
> +	},
> +};
> +
>   /* Strict link priority scheduling mode setting */
>   static void
>   cmd_strict_link_prio_parsed(
> @@ -17462,6 +17645,8 @@ cmdline_parse_ctx_t main_ctx[] = {
>   #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
>   	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
>   #endif
> +	(cmdline_parse_inst_t *)&cmd_set_vxlan,
> +	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
>   	(cmdline_parse_inst_t *)&cmd_ddp_add,
>   	(cmdline_parse_inst_t *)&cmd_ddp_del,
>   	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 934cf7e90..a99fd0048 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -239,6 +239,8 @@ enum index {
>   	ACTION_OF_POP_MPLS_ETHERTYPE,
>   	ACTION_OF_PUSH_MPLS,
>   	ACTION_OF_PUSH_MPLS_ETHERTYPE,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>   };
>   
>   /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -258,6 +260,23 @@ struct action_rss_data {
>   	uint16_t queue[ACTION_RSS_QUEUE_NUM];
>   };
>   
> +/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
> +#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
> +
> +/** Storage for struct rte_flow_action_vxlan_encap including external data. */
> +struct action_vxlan_encap_data {
> +	struct rte_flow_action_vxlan_encap conf;
> +	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	struct rte_flow_item_vlan item_vlan;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_udp item_udp;
> +	struct rte_flow_item_vxlan item_vxlan;
> +};
> +
>   /** Maximum number of subsequent tokens and arguments on the stack. */
>   #define CTX_STACK_SIZE 16
>   
> @@ -775,6 +794,8 @@ static const enum index next_action[] = {
>   	ACTION_OF_SET_VLAN_PCP,
>   	ACTION_OF_POP_MPLS,
>   	ACTION_OF_PUSH_MPLS,
> +	ACTION_VXLAN_ENCAP,
> +	ACTION_VXLAN_DECAP,
>   	ZERO,
>   };
>   
> @@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
>   static int parse_vc_action_rss_queue(struct context *, const struct token *,
>   				     const char *, unsigned int, void *,
>   				     unsigned int);
> +static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>   static int parse_destroy(struct context *, const struct token *,
>   			 const char *, unsigned int,
>   			 void *, unsigned int);
> @@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
>   			      ethertype)),
>   		.call = parse_vc_conf,
>   	},
> +	[ACTION_VXLAN_ENCAP] = {
> +		.name = "vxlan_encap",
> +		.help = "VXLAN encapsulation, uses configuration set by \"set"
> +			" vxlan\"",
> +		.priv = PRIV_ACTION(VXLAN_ENCAP,
> +				    sizeof(struct action_vxlan_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_vxlan_encap,
> +	},
> +	[ACTION_VXLAN_DECAP] = {
> +		.name = "vxlan_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the VXLAN tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>   };
>   
>   /** Remove and return last entry from argument stack. */
> @@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
>   	return len;
>   }
>   
> +/** Parse VXLAN encap action. */
> +static int
> +parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_vxlan_encap_data *action_vxlan_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_vxlan_encap_data = ctx->object;
> +	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
> +		.conf = (struct rte_flow_action_vxlan_encap){
> +			.definition = action_vxlan_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_vxlan_encap_data->item_eth,
> +				.mask = &rte_flow_item_eth_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VLAN,
> +				.spec = &action_vxlan_encap_data->item_vlan,
> +				.mask = &rte_flow_item_vlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_vxlan_encap_data->item_ipv4,
> +				.mask = &rte_flow_item_ipv4_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_UDP,
> +				.spec = &action_vxlan_encap_data->item_udp,
> +				.mask = &rte_flow_item_udp_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
> +				.spec = &action_vxlan_encap_data->item_vxlan,
> +				.mask = &rte_flow_item_vxlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth.type = 0,
> +		.item_vlan = {
> +			.tci = vxlan_encap_conf.vlan_tci,
> +			.inner_type = 0,
> +		},
> +		.item_ipv4.hdr = {
> +			.src_addr = vxlan_encap_conf.ipv4_src,
> +			.dst_addr = vxlan_encap_conf.ipv4_dst,
> +		},
> +		.item_udp.hdr = {
> +			.src_port = vxlan_encap_conf.udp_src,
> +			.dst_port = vxlan_encap_conf.udp_dst,
> +		},
> +		.item_vxlan.flags = 0,
> +	};
> +	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
> +	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
> +	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!vxlan_encap_conf.select_ipv4) {
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
> +		       &vxlan_encap_conf.ipv6_src,
> +		       sizeof(vxlan_encap_conf.ipv6_src));
> +		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
> +		       &vxlan_encap_conf.ipv6_dst,
> +		       sizeof(vxlan_encap_conf.ipv6_dst));
> +		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_vxlan_encap_data->item_ipv6,
> +			.mask = &rte_flow_item_ipv6_mask,
> +		};
> +	}
> +	if (!vxlan_encap_conf.select_vlan)
> +		action_vxlan_encap_data->items[1].type =
> +			RTE_FLOW_ITEM_TYPE_VOID;
> +	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
> +	       RTE_DIM(vxlan_encap_conf.vni));
> +	action->conf = &action_vxlan_encap_data->conf;
> +	return ret;
> +}
> +
>   /** Parse tokens for destroy command. */
>   static int
>   parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index dde7d43e3..bf39ac3ff 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -392,6 +392,23 @@ uint8_t bitrate_enabled;
>   struct gro_status gro_ports[RTE_MAX_ETHPORTS];
>   uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
>   
> +struct vxlan_encap_conf vxlan_encap_conf = {
> +	.select_ipv4 = 1,
> +	.select_vlan = 0,
> +	.vni = "\x00\x00\x00",
> +	.udp_src = 0,
> +	.udp_dst = RTE_BE16(4789),
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.vlan_tci = 0,
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>   /* Forward function declarations */
>   static void map_port_queue_stats_mapping_registers(portid_t pi,
>   						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index f51cd9dd9..0d6618788 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -479,6 +479,23 @@ struct gso_status {
>   extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
>   extern uint16_t gso_max_segment_size;
>   
> +/* VXLAN encap/decap parameters. */
> +struct vxlan_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint32_t select_vlan:1;
> +	uint8_t vni[3];
> +	rte_be16_t udp_src;
> +	rte_be16_t udp_dst;
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	rte_be16_t vlan_tci;
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct vxlan_encap_conf vxlan_encap_conf;
> +
>   static inline unsigned int
>   lcore_num(void)
>   {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0d6fd50ca..3281778d9 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1534,6 +1534,23 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
>   
>   This command should be run when the port is stopped, or else it will fail.
>   
> +Config VXLAN Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
> +
> + set vxlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
> + udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) \
> + eth-dst (eth-dst)
> +
> + set vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
> + udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
> + eth-src (eth-src) eth-dst (eth-dst)
> +
> +Those commands will set an internal configuration inside testpmd; any
> +following flow rule using the action vxlan_encap will use the last
> +configuration set.  To have a different encapsulation header, one of those
> +commands must be called before the flow rule creation.
>   
>   Port Functions
>   --------------
> @@ -3650,6 +3667,12 @@ This section lists supported actions and their attributes, if any.
>   
>     - ``ethertype``: Ethertype.
>   
> +- ``vxlan_encap``: Performs a VXLAN encapsulation; the outer layer
> +  configuration is done through `Config VXLAN Encap outer layers`_.
> +
> +- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
> +  the VXLAN tunnel network overlay from the matched flow.
> +
>   Destroying flow rules
>   ~~~~~~~~~~~~~~~~~~~~~
>   
> @@ -3915,6 +3938,38 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
>      0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
>      1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
>   
> +Sample VXLAN encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The VXLAN encapsulation outer layer has default values pre-configured in the
> +testpmd source code; these can be changed by using the following commands.
> +
> +IPv4 VXLAN outer header::
> +
> + testpmd> set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
> +        ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions vxlan_encap /
> +        queue index 0 / end
> +
> + testpmd> set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
> +         127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
> +         eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions vxlan_encap /
> +         queue index 0 / end
> +
> +IPv6 VXLAN outer header::
> +
> + testpmd> set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
> +        ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions vxlan_encap /
> +         queue index 0 / end
> +
> + testpmd> set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
> +         ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
> +         eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions vxlan_encap /
> +         queue index 0 / end
> +
>   BPF Functions
>   --------------
>   
Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE encap/decap support
  2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-07-05 15:07                 ` Mohammad Abdul Awal
  2018-07-05 15:17                   ` Nélio Laranjeiro
  0 siblings, 1 reply; 63+ messages in thread
From: Mohammad Abdul Awal @ 2018-07-05 15:07 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger
  Cc: Ori Kam

Some nits.

Auto-completion suggestions for values should be wrapped between '<' and 
'>', not '(' and ')'. See all the cases.
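
For example, the requested form would read (an illustration of the
convention, not necessarily the exact v9 wording):

 set nvgre ip-version <ipv4|ipv6> tni <tni> ip-src <ip-src> ip-dst <ip-dst>
        eth-src <eth-src> eth-dst <eth-dst>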

On 05/07/2018 15:33, Nelio Laranjeiro wrote:
> Due to the complex NVGRE_ENCAP flow action and based on the fact that
> testpmd does not allocate memory, this patch adds a new command in testpmd
> to initialise a global structure containing the necessary information to
> build the outer layer of the packet.  This same global structure will then
> be used by the flow command line in testpmd when the action nvgre_encap is
> parsed; at this point, the conversion into such an action becomes trivial.
>
> This global structure is only used for the encap action.
>
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> Acked-by: Ori Kam <orika@mellanox.com>
> ---
>   app/test-pmd/cmdline.c                      | 160 ++++++++++++++++++++
>   app/test-pmd/cmdline_flow.c                 | 132 ++++++++++++++++
>   app/test-pmd/testpmd.c                      |  15 ++
>   app/test-pmd/testpmd.h                      |  15 ++
>   doc/guides/testpmd_app_ug/testpmd_funcs.rst |  52 +++++++
>   5 files changed, 374 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 56bdb023c..1b3fa1647 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -792,6 +792,16 @@ static void cmd_help_long_parsed(void *parsed_result,
>   			" eth-dst (eth-dst)\n"
>   			"       Configure the VXLAN encapsulation for flows.\n\n"
>   
> +			"nvgre ip-version (ipv4|ipv6) tni (tni) ip-src"
> +			" (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst"
> +			" (eth-dst)\n"
Auto-completion suggestions for values should be wrapped between '<' and 
'>', not '(' and ')'.
> +			"       Configure the NVGRE encapsulation for flows.\n\n"
> +
> +			"nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)"
> +			" ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)"
> +			" eth-src (eth-src) eth-dst (eth-dst)\n"
> +			"       Configure the NVGRE encapsulation for flows.\n\n"
> +
>   			, list_pkt_forwarding_modes()
>   		);
>   	}
> @@ -15021,6 +15031,154 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
>   	},
>   };
>   
> +/** Set NVGRE encapsulation details */
> +struct cmd_set_nvgre_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t nvgre;
> +	cmdline_fixed_string_t pos_token;
> +	cmdline_fixed_string_t ip_version;
> +	uint32_t tni;
> +	cmdline_ipaddr_t ip_src;
> +	cmdline_ipaddr_t ip_dst;
> +	uint16_t tci;
> +	struct ether_addr eth_src;
> +	struct ether_addr eth_dst;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_nvgre_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
> +cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
> +				 "nvgre-with-vlan");
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "ip-version");
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_version_value =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
> +				 "ipv4#ipv6");
> +cmdline_parse_token_string_t cmd_set_nvgre_tni =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "tni");
> +cmdline_parse_token_num_t cmd_set_nvgre_tni_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_src =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "ip-src");
> +cmdline_parse_token_num_t cmd_set_nvgre_ip_src_value =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
> +cmdline_parse_token_string_t cmd_set_nvgre_ip_dst =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "ip-dst");
> +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst_value =
> +	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
> +cmdline_parse_token_string_t cmd_set_nvgre_vlan =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "vlan-tci");
> +cmdline_parse_token_num_t cmd_set_nvgre_vlan_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
> +cmdline_parse_token_string_t cmd_set_nvgre_eth_src =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "eth-src");
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src_value =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
> +cmdline_parse_token_string_t cmd_set_nvgre_eth_dst =
> +	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
> +				 "eth-dst");
> +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst_value =
> +	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
> +
> +static void cmd_set_nvgre_parsed(void *parsed_result,
> +	__attribute__((unused)) struct cmdline *cl,
> +	__attribute__((unused)) void *data)
> +{
> +	struct cmd_set_nvgre_result *res = parsed_result;
> +	union {
> +		uint32_t nvgre_tni;
> +		uint8_t tni[4];
> +	} id = {
> +		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
> +	};
> +
> +	if (strcmp(res->nvgre, "nvgre") == 0)
> +		nvgre_encap_conf.select_vlan = 0;
> +	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
> +		nvgre_encap_conf.select_vlan = 1;
> +	if (strcmp(res->ip_version, "ipv4") == 0)
> +		nvgre_encap_conf.select_ipv4 = 1;
> +	else if (strcmp(res->ip_version, "ipv6") == 0)
> +		nvgre_encap_conf.select_ipv4 = 0;
> +	else
> +		return;
> +	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
> +	if (nvgre_encap_conf.select_ipv4) {
> +		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
> +		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
> +	} else {
> +		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
> +		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
> +	}
> +	if (nvgre_encap_conf.select_vlan)
> +		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
> +	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
> +		   ETHER_ADDR_LEN);
> +}
> +
> +cmdline_parse_inst_t cmd_set_nvgre = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src"
> +		" (ip-src) ip-dst (ip-dst) eth-src (eth-src)"
> +		" eth-dst (eth-dst)",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_ip_version_value,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_tni_value,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_src_value,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_ip_dst_value,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_src_value,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		(void *)&cmd_set_nvgre_eth_dst_value,
> +		NULL,
> +	},
> +};
> +
> +cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
> +	.f = cmd_set_nvgre_parsed,
> +	.data = NULL,
> +	.help_str = "set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)"
> +		" ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)"
> +		" eth-src (eth-src) eth-dst (eth-dst)",
> +	.tokens = {
> +		(void *)&cmd_set_nvgre_set,
> +		(void *)&cmd_set_nvgre_nvgre_with_vlan,
> +		(void *)&cmd_set_nvgre_ip_version,
> +		(void *)&cmd_set_nvgre_ip_version_value,
> +		(void *)&cmd_set_nvgre_tni,
> +		(void *)&cmd_set_nvgre_tni_value,
> +		(void *)&cmd_set_nvgre_ip_src,
> +		(void *)&cmd_set_nvgre_ip_src_value,
> +		(void *)&cmd_set_nvgre_ip_dst,
> +		(void *)&cmd_set_nvgre_ip_dst_value,
> +		(void *)&cmd_set_nvgre_vlan,
> +		(void *)&cmd_set_nvgre_vlan_value,
> +		(void *)&cmd_set_nvgre_eth_src,
> +		(void *)&cmd_set_nvgre_eth_src_value,
> +		(void *)&cmd_set_nvgre_eth_dst,
> +		(void *)&cmd_set_nvgre_eth_dst_value,
> +		NULL,
> +	},
> +};
> +
>   /* Strict link priority scheduling mode setting */
>   static void
>   cmd_strict_link_prio_parsed(
> @@ -17647,6 +17805,8 @@ cmdline_parse_ctx_t main_ctx[] = {
>   #endif
>   	(cmdline_parse_inst_t *)&cmd_set_vxlan,
>   	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre,
> +	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
>   	(cmdline_parse_inst_t *)&cmd_ddp_add,
>   	(cmdline_parse_inst_t *)&cmd_ddp_del,
>   	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index a99fd0048..f9260600e 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -241,6 +241,8 @@ enum index {
>   	ACTION_OF_PUSH_MPLS_ETHERTYPE,
>   	ACTION_VXLAN_ENCAP,
>   	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>   };
>   
>   /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
>   	struct rte_flow_item_vxlan item_vxlan;
>   };
>   
> +/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
> +#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
> +
> +/** Storage for struct rte_flow_action_nvgre_encap including external data. */
> +struct action_nvgre_encap_data {
> +	struct rte_flow_action_nvgre_encap conf;
> +	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
> +	struct rte_flow_item_eth item_eth;
> +	struct rte_flow_item_vlan item_vlan;
> +	union {
> +		struct rte_flow_item_ipv4 item_ipv4;
> +		struct rte_flow_item_ipv6 item_ipv6;
> +	};
> +	struct rte_flow_item_nvgre item_nvgre;
> +};
> +
>   /** Maximum number of subsequent tokens and arguments on the stack. */
>   #define CTX_STACK_SIZE 16
>   
> @@ -796,6 +814,8 @@ static const enum index next_action[] = {
>   	ACTION_OF_PUSH_MPLS,
>   	ACTION_VXLAN_ENCAP,
>   	ACTION_VXLAN_DECAP,
> +	ACTION_NVGRE_ENCAP,
> +	ACTION_NVGRE_DECAP,
>   	ZERO,
>   };
>   
> @@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
>   static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
>   				       const char *, unsigned int, void *,
>   				       unsigned int);
> +static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
> +				       const char *, unsigned int, void *,
> +				       unsigned int);
>   static int parse_destroy(struct context *, const struct token *,
>   			 const char *, unsigned int,
>   			 void *, unsigned int);
> @@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
>   		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
>   		.call = parse_vc,
>   	},
> +	[ACTION_NVGRE_ENCAP] = {
> +		.name = "nvgre_encap",
> +		.help = "NVGRE encapsulation, uses configuration set by \"set"
> +			" nvgre\"",
> +		.priv = PRIV_ACTION(NVGRE_ENCAP,
> +				    sizeof(struct action_nvgre_encap_data)),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc_action_nvgre_encap,
> +	},
> +	[ACTION_NVGRE_DECAP] = {
> +		.name = "nvgre_decap",
> +		.help = "Performs a decapsulation action by stripping all"
> +			" headers of the NVGRE tunnel network overlay from the"
> +			" matched flow.",
> +		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
> +		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> +		.call = parse_vc,
> +	},
>   };
>   
>   /** Remove and return last entry from argument stack. */
> @@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
>   	return ret;
>   }
>   
> +/** Parse NVGRE encap action. */
> +static int
> +parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
> +			    const char *str, unsigned int len,
> +			    void *buf, unsigned int size)
> +{
> +	struct buffer *out = buf;
> +	struct rte_flow_action *action;
> +	struct action_nvgre_encap_data *action_nvgre_encap_data;
> +	int ret;
> +
> +	ret = parse_vc(ctx, token, str, len, buf, size);
> +	if (ret < 0)
> +		return ret;
> +	/* Nothing else to do if there is no buffer. */
> +	if (!out)
> +		return ret;
> +	if (!out->args.vc.actions_n)
> +		return -1;
> +	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
> +	/* Point to selected object. */
> +	ctx->object = out->args.vc.data;
> +	ctx->objmask = NULL;
> +	/* Set up default configuration. */
> +	action_nvgre_encap_data = ctx->object;
> +	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
> +		.conf = (struct rte_flow_action_nvgre_encap){
> +			.definition = action_nvgre_encap_data->items,
> +		},
> +		.items = {
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_ETH,
> +				.spec = &action_nvgre_encap_data->item_eth,
> +				.mask = &rte_flow_item_eth_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_VLAN,
> +				.spec = &action_nvgre_encap_data->item_vlan,
> +				.mask = &rte_flow_item_vlan_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_IPV4,
> +				.spec = &action_nvgre_encap_data->item_ipv4,
> +				.mask = &rte_flow_item_ipv4_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
> +				.spec = &action_nvgre_encap_data->item_nvgre,
> +				.mask = &rte_flow_item_nvgre_mask,
> +			},
> +			{
> +				.type = RTE_FLOW_ITEM_TYPE_END,
> +			},
> +		},
> +		.item_eth.type = 0,
> +		.item_vlan = {
> +			.tci = nvgre_encap_conf.vlan_tci,
> +			.inner_type = 0,
> +		},
> +		.item_ipv4.hdr = {
> +		       .src_addr = nvgre_encap_conf.ipv4_src,
> +		       .dst_addr = nvgre_encap_conf.ipv4_dst,
> +		},
> +		.item_nvgre.flow_id = 0,
> +	};
> +	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
> +	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
> +	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
> +	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
> +	if (!nvgre_encap_conf.select_ipv4) {
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
> +		       &nvgre_encap_conf.ipv6_src,
> +		       sizeof(nvgre_encap_conf.ipv6_src));
> +		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
> +		       &nvgre_encap_conf.ipv6_dst,
> +		       sizeof(nvgre_encap_conf.ipv6_dst));
> +		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
> +			.type = RTE_FLOW_ITEM_TYPE_IPV6,
> +			.spec = &action_nvgre_encap_data->item_ipv6,
> +			.mask = &rte_flow_item_ipv6_mask,
> +		};
> +	}
> +	if (!nvgre_encap_conf.select_vlan)
> +		action_nvgre_encap_data->items[1].type =
> +			RTE_FLOW_ITEM_TYPE_VOID;
> +	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
> +	       RTE_DIM(nvgre_encap_conf.tni));
> +	action->conf = &action_nvgre_encap_data->conf;
> +	return ret;
> +}
> +
>   /** Parse tokens for destroy command. */
>   static int
>   parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index bf39ac3ff..dbba7d253 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -409,6 +409,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
>   	.eth_dst = "\xff\xff\xff\xff\xff\xff",
>   };
>   
> +struct nvgre_encap_conf nvgre_encap_conf = {
> +	.select_ipv4 = 1,
> +	.select_vlan = 0,
> +	.tni = "\x00\x00\x00",
> +	.ipv4_src = IPv4(127, 0, 0, 1),
> +	.ipv4_dst = IPv4(255, 255, 255, 255),
> +	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x00\x01",
> +	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
> +		"\x00\x00\x00\x00\x00\x00\x11\x11",
> +	.vlan_tci = 0,
> +	.eth_src = "\x00\x00\x00\x00\x00\x00",
> +	.eth_dst = "\xff\xff\xff\xff\xff\xff",
> +};
> +
>   /* Forward function declarations */
>   static void map_port_queue_stats_mapping_registers(portid_t pi,
>   						   struct rte_port *port);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 0d6618788..2b1e448b0 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -496,6 +496,21 @@ struct vxlan_encap_conf {
>   };
>   struct vxlan_encap_conf vxlan_encap_conf;
>   
> +/* NVGRE encap/decap parameters. */
> +struct nvgre_encap_conf {
> +	uint32_t select_ipv4:1;
> +	uint32_t select_vlan:1;
> +	uint8_t tni[3];
> +	rte_be32_t ipv4_src;
> +	rte_be32_t ipv4_dst;
> +	uint8_t ipv6_src[16];
> +	uint8_t ipv6_dst[16];
> +	rte_be16_t vlan_tci;
> +	uint8_t eth_src[ETHER_ADDR_LEN];
> +	uint8_t eth_dst[ETHER_ADDR_LEN];
> +};
> +struct nvgre_encap_conf nvgre_encap_conf;
> +
>   static inline unsigned int
>   lcore_num(void)
>   {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 3281778d9..94d8d38c7 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1552,6 +1552,21 @@ flow rule using the action vxlan_encap will use the last configuration set.
>   To have a different encapsulation header, one of those commands must be called
>   before the flow rule creation.
>   
> +Config NVGRE Encap outer layers
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
> +
> + set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) ip-dst (ip-dst) \
> +        eth-src (eth-src) eth-dst (eth-dst)
> + set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) \
> +        ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
                                       ^^^^^^^
                                       <ip-dst>
Auto-completion suggestions for values should be wrapped between '<' and 
'>', not '(' and ')', for all cases.
> +
> +Those commands will set an internal configuration inside testpmd; any
> +following flow rule using the action nvgre_encap will use the last
> +configuration set.  To have a different encapsulation header, one of those
> +commands must be called before the flow rule creation.
> +
>   Port Functions
>   --------------
>   
> @@ -3673,6 +3688,12 @@ This section lists supported actions and their attributes, if any.
>   - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
>     the VXLAN tunnel network overlay from the matched flow.
>   
> +- ``nvgre_encap``: Performs an NVGRE encapsulation; the outer layer
> +  configuration is done through `Config NVGRE Encap outer layers`_.
> +
> +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
> +  the NVGRE tunnel network overlay from the matched flow.
> +
>   Destroying flow rules
>   ~~~~~~~~~~~~~~~~~~~~~
>   
> @@ -3970,6 +3991,37 @@ IPv6 VXLAN outer header::
>    testpmd> flow create 0 ingress pattern end actions vxlan_encap /
>            queue index 0 / end
>   
> +Sample NVGRE encapsulation rule
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The NVGRE encapsulation outer layer has default values pre-configured in the
> +testpmd source code; these can be changed by using the following commands.
> +
> +IPv4 NVGRE outer header::
> +
> + testpmd> set nvgre ip-version ipv4 tni 4 ip-src 127.0.0.1 ip-dst 128.0.0.1
> +        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions nvgre_encap /
> +        queue index 0 / end
> +
> + testpmd> set nvgre-with-vlan ip-version ipv4 tni 4 ip-src 127.0.0.1
> +         ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
> +         eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions nvgre_encap /
> +         queue index 0 / end
> +
> +IPv6 NVGRE outer header::
> +
> + testpmd> set nvgre ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
> +        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions nvgre_encap /
> +        queue index 0 / end
> +
> + testpmd> set nvgre-with-vlan ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
> +        vlan-tci 34 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
> + testpmd> flow create 0 ingress pattern end actions nvgre_encap /
> +        queue index 0 / end
> +
>   BPF Functions
>   --------------
>   

Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE encap/decap support
  2018-07-05 15:07                 ` Mohammad Abdul Awal
@ 2018-07-05 15:17                   ` Nélio Laranjeiro
  0 siblings, 0 replies; 63+ messages in thread
From: Nélio Laranjeiro @ 2018-07-05 15:17 UTC (permalink / raw)
  To: Mohammad Abdul Awal
  Cc: dev, Adrien Mazarguil, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger, Ori Kam

On Thu, Jul 05, 2018 at 04:07:28PM +0100, Mohammad Abdul Awal wrote:
>    Some nits.
> 
>    Auto-completion suggestions for values should be wrapped between '<' and
>    '>', not '(' and ')'. See all the cases.
>[...]

Right, I'll send a v9 to fix this.

Thanks,

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v9 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
                                 ` (3 preceding siblings ...)
  2018-07-05 14:57               ` Mohammad Abdul Awal
@ 2018-07-06  6:43               ` Nelio Laranjeiro
  2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
                                   ` (2 more replies)
  4 siblings, 3 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-06  6:43 UTC (permalink / raw)
  To: dev, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
  Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam

This series adds easy and maintainable configuration support for those two
actions for 18.08 by using global variables in testpmd to store the
necessary information for the tunnel encapsulation.  Those variables are
used in conjunction with the RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP actions to
easily create the action for flows.

A common way to use it:

 set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
        ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
        queue index 0 / end

 set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
         127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
         eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end

 set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
        ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end

 set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
         ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
         eth-dst 22:22:22:22:22:22
 flow create 0 ingress pattern end actions vxlan_encap /
         queue index 0 / end
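
Since each flow rule snapshots the last "set vxlan" values at creation time,
distinct encapsulation headers are obtained by re-issuing the set command
between rule creations.  For reference, the commands above map onto the
rte_flow API roughly as follows; this is an illustrative sketch only (the
port id, field values and error handling are assumptions, and the item list
mirrors what testpmd builds from its global variables):

 #include <rte_flow.h>

 /* Outer layers for the encapsulation, normally filled from the values
  * given to "set vxlan" (eth-src/eth-dst, ip-src/ip-dst, udp ports, vni). */
 static struct rte_flow_item_eth l2;
 static struct rte_flow_item_ipv4 l3;
 static struct rte_flow_item_udp l4;
 static struct rte_flow_item_vxlan vxlan;
 static struct rte_flow_item outer[] = {
     { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &l2,
       .mask = &rte_flow_item_eth_mask },
     { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &l3,
       .mask = &rte_flow_item_ipv4_mask },
     { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &l4,
       .mask = &rte_flow_item_udp_mask },
     { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan,
       .mask = &rte_flow_item_vxlan_mask },
     { .type = RTE_FLOW_ITEM_TYPE_END },
 };
 static const struct rte_flow_action_vxlan_encap encap = {
     .definition = outer, /* item list describing the new outer layers */
 };

 /* Equivalent of "flow create 0 ingress pattern end actions
  * vxlan_encap / queue index 0 / end". */
 static struct rte_flow *
 create_encap_flow(uint16_t port_id)
 {
     static const struct rte_flow_attr attr = { .ingress = 1 };
     static const struct rte_flow_action_queue queue = { .index = 0 };
     const struct rte_flow_item pattern[] = {
         { .type = RTE_FLOW_ITEM_TYPE_END },
     };
     const struct rte_flow_action actions[] = {
         { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &encap },
         { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
         { .type = RTE_FLOW_ACTION_TYPE_END },
     };
     struct rte_flow_error error;

     return rte_flow_create(port_id, &attr, pattern, actions, &error);
 }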

This also replaces the proposal done by Mohammad Abdul Awal [1], which
handles the same work in a more complex way.

Note this API already has a modification planned for 18.11 [2], thus this
series should have a limited life of a single release.

[1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
[2] https://dpdk.org/ml/archives/dev/2018-June/103485.html

Changes in v9:

- fix the help for NVGRE.

Changes in v8:

- add static tokens in the command line to be user friendly.

Changes in v7:

- add missing documentation added in v5 and removed in v6 by mistake.

Changes in v6:

- fix compilation under Red Hat 7.5 with GCC 4.8.5 20150623.

Changes in v5:

- fix documentation generation.
- add more explanation on how to generate several encapsulated flows.

Changes in v4:

- fix big endian issue on vni and tni.
- add samples to the documentation.
- set the VXLAN UDP source port to 0 by default to let the driver generate it
  from the inner hash as described in RFC 7348.
- use default rte flow mask for each item.

Changes in v3:

- support VLAN in the outer encapsulation.
- fix the documentation with missing arguments.

Changes in v2:

- add default IPv6 values for NVGRE encapsulation.
- replace VXLAN with NVGRE in comments concerning the NVGRE layer.


Nelio Laranjeiro (2):
  app/testpmd: add VXLAN encap/decap support
  app/testpmd: add NVGRE encap/decap support

 app/test-pmd/cmdline.c                      | 345 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 274 ++++++++++++++++
 app/test-pmd/testpmd.c                      |  32 ++
 app/test-pmd/testpmd.h                      |  32 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 107 ++++++
 5 files changed, 790 insertions(+)

-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread

* [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support
  2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
@ 2018-07-06  6:43                 ` Nelio Laranjeiro
  2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
  2018-07-18  8:31                 ` [dpdk-dev] [PATCH v9 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Ferruh Yigit
  2 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-06  6:43 UTC (permalink / raw)
  To: dev, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
  Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam

Due to the complex VXLAN_ENCAP flow action and based on the fact that
testpmd does not allocate memory, this patch adds a new command in testpmd
to initialise a global structure containing the necessary information to
build the outer layer of the packet.  This same global structure will then
be used by the flow command line in testpmd when the action vxlan_encap is
parsed; at this point, the conversion into such an action becomes trivial.

This global structure is only used for the encap action.
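
One detail worth calling out: the VNI is a 24-bit field carried in network
byte order, and the "set vxlan" handler below derives it from the 32-bit
user input with a small union trick.  Condensed into a stand-alone sketch
(illustration only, not part of the patch):

 #include <string.h>
 #include <rte_byteorder.h>

 static void
 vni_to_wire(uint32_t vni, uint8_t wire[3])
 {
     union {
         uint32_t vxlan_id;
         uint8_t bytes[4];
     } id = {
         .vxlan_id = rte_cpu_to_be_32(vni) & RTE_BE32(0x00ffffff),
     };

     /* After the byte swap and mask, bytes[0] is always zero, so
      * bytes[1..3] hold the 24-bit VNI ready for the wire. */
     memcpy(wire, &id.bytes[1], 3);
 }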

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 app/test-pmd/cmdline.c                      | 185 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 142 +++++++++++++++
 app/test-pmd/testpmd.c                      |  17 ++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++++
 5 files changed, 416 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 27e2aa8c8..56bdb023c 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -781,6 +781,17 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port tm hierarchy commit (port_id) (clean_on_fail)\n"
 			"	Commit tm hierarchy.\n\n"
 
+			"vxlan ip-version (ipv4|ipv6) vni (vni) udp-src"
+			" (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst"
+			" (ip-dst) eth-src (eth-src) eth-dst (eth-dst)\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
+			"vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni)"
+			" udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src)"
+			" ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src)"
+			" eth-dst (eth-dst)\n"
+			"       Configure the VXLAN encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -14838,6 +14849,178 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = {
 };
 #endif
 
+/** Set VXLAN encapsulation details */
+struct cmd_set_vxlan_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t vxlan;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t vni;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_vxlan_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan");
+cmdline_parse_token_string_t cmd_set_vxlan_vxlan_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan,
+				 "vxlan-with-vlan");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_vxlan_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_vxlan_vni =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "vni");
+cmdline_parse_token_num_t cmd_set_vxlan_vni_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+cmdline_parse_token_string_t cmd_set_vxlan_udp_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "udp-src");
+cmdline_parse_token_num_t cmd_set_vxlan_udp_src_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_udp_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "udp-dst");
+cmdline_parse_token_num_t cmd_set_vxlan_udp_dst_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src);
+cmdline_parse_token_string_t cmd_set_vxlan_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_vxlan_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_vxlan_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_vxlan_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src);
+cmdline_parse_token_string_t cmd_set_vxlan_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst);
+
+static void cmd_set_vxlan_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_vxlan_result *res = parsed_result;
+	union {
+		uint32_t vxlan_id;
+		uint8_t vni[4];
+	} id = {
+		.vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->vxlan, "vxlan") == 0)
+		vxlan_encap_conf.select_vlan = 0;
+	else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0)
+		vxlan_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		vxlan_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		vxlan_encap_conf.select_ipv4 = 0;
+	else
+		return;
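+	/* Bytes 1..3 of the big-endian word carry the 24-bit VNI. */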
+	rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3);
+	vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (vxlan_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst);
+	}
+	if (vxlan_encap_conf.select_vlan)
+		vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_vxlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan ip-version ipv4|ipv6 vni <vni> udp-src"
+		" <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst <ip-dst>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_ip_version_value,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_vni_value,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_src_value,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_udp_dst_value,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_src_value,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_ip_dst_value,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_src_value,
+		(void *)&cmd_set_vxlan_eth_dst,
+		(void *)&cmd_set_vxlan_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
+	.f = cmd_set_vxlan_parsed,
+	.data = NULL,
+	.help_str = "set vxlan-with-vlan ip-version ipv4|ipv6 vni <vni>"
+		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst"
+		" <ip-dst> vlan-tci <vlan-tci> eth-src <eth-src> eth-dst"
+		" <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_vxlan_set,
+		(void *)&cmd_set_vxlan_vxlan_with_vlan,
+		(void *)&cmd_set_vxlan_ip_version,
+		(void *)&cmd_set_vxlan_ip_version_value,
+		(void *)&cmd_set_vxlan_vni,
+		(void *)&cmd_set_vxlan_vni_value,
+		(void *)&cmd_set_vxlan_udp_src,
+		(void *)&cmd_set_vxlan_udp_src_value,
+		(void *)&cmd_set_vxlan_udp_dst,
+		(void *)&cmd_set_vxlan_udp_dst_value,
+		(void *)&cmd_set_vxlan_ip_src,
+		(void *)&cmd_set_vxlan_ip_src_value,
+		(void *)&cmd_set_vxlan_ip_dst,
+		(void *)&cmd_set_vxlan_ip_dst_value,
+		(void *)&cmd_set_vxlan_vlan,
+		(void *)&cmd_set_vxlan_vlan_value,
+		(void *)&cmd_set_vxlan_eth_src,
+		(void *)&cmd_set_vxlan_eth_src_value,
+		(void *)&cmd_set_vxlan_eth_dst,
+		(void *)&cmd_set_vxlan_eth_dst_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17462,6 +17645,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED
 	(cmdline_parse_inst_t *)&cmd_set_port_tm_hierarchy_default,
 #endif
+	(cmdline_parse_inst_t *)&cmd_set_vxlan,
+	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 934cf7e90..a99fd0048 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -239,6 +239,8 @@ enum index {
 	ACTION_OF_POP_MPLS_ETHERTYPE,
 	ACTION_OF_PUSH_MPLS,
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -258,6 +260,23 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
+/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
+
+/** Storage for struct rte_flow_action_vxlan_encap including external data. */
+struct action_vxlan_encap_data {
+	struct rte_flow_action_vxlan_encap conf;
+	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_udp item_udp;
+	struct rte_flow_item_vxlan item_vxlan;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -775,6 +794,8 @@ static const enum index next_action[] = {
 	ACTION_OF_SET_VLAN_PCP,
 	ACTION_OF_POP_MPLS,
 	ACTION_OF_PUSH_MPLS,
+	ACTION_VXLAN_ENCAP,
+	ACTION_VXLAN_DECAP,
 	ZERO,
 };
 
@@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *,
 static int parse_vc_action_rss_queue(struct context *, const struct token *,
 				     const char *, unsigned int, void *,
 				     unsigned int);
+static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2387,6 +2411,24 @@ static const struct token token_list[] = {
 			      ethertype)),
 		.call = parse_vc_conf,
 	},
+	[ACTION_VXLAN_ENCAP] = {
+		.name = "vxlan_encap",
+		.help = "VXLAN encapsulation, uses configuration set by \"set"
+			" vxlan\"",
+		.priv = PRIV_ACTION(VXLAN_ENCAP,
+				    sizeof(struct action_vxlan_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_vxlan_encap,
+	},
+	[ACTION_VXLAN_DECAP] = {
+		.name = "vxlan_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the VXLAN tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse VXLAN encap action. */
+static int
+parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_vxlan_encap_data = ctx->object;
+	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
+		.conf = (struct rte_flow_action_vxlan_encap){
+			.definition = action_vxlan_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_vxlan_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_vxlan_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_vxlan_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_UDP,
+				.spec = &action_vxlan_encap_data->item_udp,
+				.mask = &rte_flow_item_udp_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
+				.spec = &action_vxlan_encap_data->item_vxlan,
+				.mask = &rte_flow_item_vxlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = vxlan_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+		},
+		.item_udp.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+		.item_vxlan.flags = 0,
+	};
+	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
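+	/* Items default to the IPv4 outer header; items[2] is overwritten
+	 * in place when IPv6 was selected with "set vxlan ip-version ipv6". */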
+	if (!vxlan_encap_conf.select_ipv4) {
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+		       &vxlan_encap_conf.ipv6_src,
+		       sizeof(vxlan_encap_conf.ipv6_src));
+		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		       &vxlan_encap_conf.ipv6_dst,
+		       sizeof(vxlan_encap_conf.ipv6_dst));
+		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_vxlan_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
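+	/* Strip the VLAN item from the definition when no VLAN is
+	 * requested: a VOID item is ignored by PMDs. */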
+	if (!vxlan_encap_conf.select_vlan)
+		action_vxlan_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	       RTE_DIM(vxlan_encap_conf.vni));
+	action->conf = &action_vxlan_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index dde7d43e3..bf39ac3ff 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -392,6 +392,23 @@ uint8_t bitrate_enabled;
 struct gro_status gro_ports[RTE_MAX_ETHPORTS];
 uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 
+struct vxlan_encap_conf vxlan_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.vni = "\x00\x00\x00",
+	.udp_src = 0,
+	.udp_dst = RTE_BE16(4789),
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f51cd9dd9..0d6618788 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,23 @@ struct gso_status {
 extern struct gso_status gso_ports[RTE_MAX_ETHPORTS];
 extern uint16_t gso_max_segment_size;
 
+/* VXLAN encap/decap parameters. */
+struct vxlan_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
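+	/* 24-bit VNI, stored in network byte order. */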
+	uint8_t vni[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct vxlan_encap_conf vxlan_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0d6fd50ca..3281778d9 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1534,6 +1534,23 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config VXLAN Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a VXLAN tunnel::
+
+ set vxlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
+ udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) \
+ eth-dst (eth-dst)
+
+ set vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \
+ udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
+ eth-src (eth-src) eth-dst (eth-dst)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the action vxlan_encap will use the last configuration set.
+To use a different encapsulation header, one of these commands must be run
+again before creating the flow rule.
 
 Port Functions
 --------------
@@ -3650,6 +3667,12 @@ This section lists supported actions and their attributes, if any.
 
   - ``ethertype``: Ethertype.
 
+- ``vxlan_encap``: Performs a VXLAN encapsulation; the outer layer
+  configuration is done through `Config VXLAN Encap outer layers`_.
+
+- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
+  the VXLAN tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3915,6 +3938,38 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos
    0       0       0       i-      ETH VLAN VLAN=>VF QUEUE
    1       0       0       i-      ETH VLAN VLAN=>PF QUEUE
 
+Sample VXLAN encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VXLAN encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed using the following commands.
+
+IPv4 VXLAN outer header::
+
+ testpmd> set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1
+        ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+        queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src
+         127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
+IPv6 VXLAN outer header::
+
+ testpmd> set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
+        ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
+ testpmd> set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
+         ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions vxlan_encap /
+         queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread
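
The 24-bit VNI handling in cmd_set_vxlan_parsed() above deserves a note.
Below is a minimal standalone sketch of the same conversion; it is not
part of the patch, and the helper name id24_to_network() is made up for
illustration:

 #include <stdint.h>
 #include <string.h>
 #include <rte_byteorder.h>

 /* Convert a host-order 24-bit id (VXLAN VNI or NVGRE TNI) into the
  * 3-byte network-order array expected by the flow items. */
 static void
 id24_to_network(uint32_t host_id, uint8_t out[3])
 {
 	union {
 		uint32_t be;
 		uint8_t bytes[4];
 	} id = { .be = rte_cpu_to_be_32(host_id) & RTE_BE32(0x00ffffff) };

 	/* Bytes 1..3 of the big-endian word hold the 24 low-order bits. */
 	memcpy(out, &id.bytes[1], 3);
 }

For example, host_id 4 yields out = {0x00, 0x00, 0x04}, the layout the
VXLAN item's vni[] field expects regardless of host endianness.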

* [dpdk-dev] [PATCH v9 2/2] app/testpmd: add NVGRE encap/decap support
  2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
  2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
@ 2018-07-06  6:43                 ` Nelio Laranjeiro
  2018-07-18  8:31                 ` [dpdk-dev] [PATCH v9 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Ferruh Yigit
  2 siblings, 0 replies; 63+ messages in thread
From: Nelio Laranjeiro @ 2018-07-06  6:43 UTC (permalink / raw)
  To: dev, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
  Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam

Due to the complexity of the NVGRE_ENCAP flow action, and since testpmd
does not allocate memory, this patch adds a new command in testpmd to
initialise a global structure containing the necessary information to
build the outer layers of the packet.  This same global structure is
then used by the flow command line in testpmd when the nvgre_encap
action is parsed; at that point, the conversion into such an action
becomes trivial.

This global structure is only used for the encap action.

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 app/test-pmd/cmdline.c                      | 160 ++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 132 ++++++++++++++++
 app/test-pmd/testpmd.c                      |  15 ++
 app/test-pmd/testpmd.h                      |  15 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  52 +++++++
 5 files changed, 374 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 56bdb023c..c3af6f956 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -792,6 +792,16 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" eth-dst (eth-dst)\n"
 			"       Configure the VXLAN encapsulation for flows.\n\n"
 
+			"nvgre ip-version (ipv4|ipv6) tni (tni) ip-src"
+			" (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst"
+			" (eth-dst)\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
+			"nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)"
+			" ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)"
+			" eth-src (eth-src) eth-dst (eth-dst)\n"
+			"       Configure the NVGRE encapsulation for flows.\n\n"
+
 			, list_pkt_forwarding_modes()
 		);
 	}
@@ -15021,6 +15031,154 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = {
 	},
 };
 
+/** Set NVGRE encapsulation details */
+struct cmd_set_nvgre_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t nvgre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t tni;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_nvgre_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre");
+cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre,
+				 "nvgre-with-vlan");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_nvgre_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_nvgre_tni =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "tni");
+cmdline_parse_token_num_t cmd_set_nvgre_tni_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+cmdline_parse_token_string_t cmd_set_nvgre_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src);
+cmdline_parse_token_string_t cmd_set_nvgre_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_nvgre_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_nvgre_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_nvgre_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src);
+cmdline_parse_token_string_t cmd_set_nvgre_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst);
+
+static void cmd_set_nvgre_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_nvgre_result *res = parsed_result;
+	union {
+		uint32_t nvgre_tni;
+		uint8_t tni[4];
+	} id = {
+		.nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->nvgre, "nvgre") == 0)
+		nvgre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0)
+		nvgre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		nvgre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		nvgre_encap_conf.select_ipv4 = 0;
+	else
+		return;
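+	/* Bytes 1..3 of the big-endian word carry the 24-bit TNI. */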
+	rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3);
+	if (nvgre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst);
+	}
+	if (nvgre_encap_conf.select_vlan)
+		nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_nvgre = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre ip-version <ipv4|ipv6> tni <tni> ip-src"
+		" <ip-src> ip-dst <ip-dst> eth-src <eth-src>"
+		" eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_ip_version_value,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_tni_value,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_src_value,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_ip_dst_value,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_src_value,
+		(void *)&cmd_set_nvgre_eth_dst,
+		(void *)&cmd_set_nvgre_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_nvgre_with_vlan = {
+	.f = cmd_set_nvgre_parsed,
+	.data = NULL,
+	.help_str = "set nvgre-with-vlan ip-version <ipv4|ipv6> tni <tni>"
+		" ip-src <ip-src> ip-dst <ip-dst> vlan-tci <vlan-tci>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_nvgre_set,
+		(void *)&cmd_set_nvgre_nvgre_with_vlan,
+		(void *)&cmd_set_nvgre_ip_version,
+		(void *)&cmd_set_nvgre_ip_version_value,
+		(void *)&cmd_set_nvgre_tni,
+		(void *)&cmd_set_nvgre_tni_value,
+		(void *)&cmd_set_nvgre_ip_src,
+		(void *)&cmd_set_nvgre_ip_src_value,
+		(void *)&cmd_set_nvgre_ip_dst,
+		(void *)&cmd_set_nvgre_ip_dst_value,
+		(void *)&cmd_set_nvgre_vlan,
+		(void *)&cmd_set_nvgre_vlan_value,
+		(void *)&cmd_set_nvgre_eth_src,
+		(void *)&cmd_set_nvgre_eth_src_value,
+		(void *)&cmd_set_nvgre_eth_dst,
+		(void *)&cmd_set_nvgre_eth_dst_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17647,6 +17805,8 @@ cmdline_parse_ctx_t main_ctx[] = {
 #endif
 	(cmdline_parse_inst_t *)&cmd_set_vxlan,
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre,
+	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a99fd0048..f9260600e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -241,6 +241,8 @@ enum index {
 	ACTION_OF_PUSH_MPLS_ETHERTYPE,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -277,6 +279,22 @@ struct action_vxlan_encap_data {
 	struct rte_flow_item_vxlan item_vxlan;
 };
 
+/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
+#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
+
+/** Storage for struct rte_flow_action_nvgre_encap including external data. */
+struct action_nvgre_encap_data {
+	struct rte_flow_action_nvgre_encap conf;
+	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
+	struct rte_flow_item_eth item_eth;
+	struct rte_flow_item_vlan item_vlan;
+	union {
+		struct rte_flow_item_ipv4 item_ipv4;
+		struct rte_flow_item_ipv6 item_ipv6;
+	};
+	struct rte_flow_item_nvgre item_nvgre;
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -796,6 +814,8 @@ static const enum index next_action[] = {
 	ACTION_OF_PUSH_MPLS,
 	ACTION_VXLAN_ENCAP,
 	ACTION_VXLAN_DECAP,
+	ACTION_NVGRE_ENCAP,
+	ACTION_NVGRE_DECAP,
 	ZERO,
 };
 
@@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *,
 static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
+				       const char *, unsigned int, void *,
+				       unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2429,6 +2452,24 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_NVGRE_ENCAP] = {
+		.name = "nvgre_encap",
+		.help = "NVGRE encapsulation, uses configuration set by \"set"
+			" nvgre\"",
+		.priv = PRIV_ACTION(NVGRE_ENCAP,
+				    sizeof(struct action_nvgre_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_nvgre_encap,
+	},
+	[ACTION_NVGRE_DECAP] = {
+		.name = "nvgre_decap",
+		.help = "Performs a decapsulation action by stripping all"
+			" headers of the NVGRE tunnel network overlay from the"
+			" matched flow.",
+		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
 	return ret;
 }
 
+/** Parse NVGRE encap action. */
+static int
+parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
+			    const char *str, unsigned int len,
+			    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Set up default configuration. */
+	action_nvgre_encap_data = ctx->object;
+	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
+		.conf = (struct rte_flow_action_nvgre_encap){
+			.definition = action_nvgre_encap_data->items,
+		},
+		.items = {
+			{
+				.type = RTE_FLOW_ITEM_TYPE_ETH,
+				.spec = &action_nvgre_encap_data->item_eth,
+				.mask = &rte_flow_item_eth_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_VLAN,
+				.spec = &action_nvgre_encap_data->item_vlan,
+				.mask = &rte_flow_item_vlan_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_IPV4,
+				.spec = &action_nvgre_encap_data->item_ipv4,
+				.mask = &rte_flow_item_ipv4_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
+				.spec = &action_nvgre_encap_data->item_nvgre,
+				.mask = &rte_flow_item_nvgre_mask,
+			},
+			{
+				.type = RTE_FLOW_ITEM_TYPE_END,
+			},
+		},
+		.item_eth.type = 0,
+		.item_vlan = {
+			.tci = nvgre_encap_conf.vlan_tci,
+			.inner_type = 0,
+		},
+		.item_ipv4.hdr = {
+			.src_addr = nvgre_encap_conf.ipv4_src,
+			.dst_addr = nvgre_encap_conf.ipv4_dst,
+		},
+		.item_nvgre.flow_id = 0,
+	};
+	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
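+	/* Items default to the IPv4 outer header; items[2] is overwritten
+	 * in place when IPv6 was selected with "set nvgre ip-version ipv6". */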
+	if (!nvgre_encap_conf.select_ipv4) {
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+		       &nvgre_encap_conf.ipv6_src,
+		       sizeof(nvgre_encap_conf.ipv6_src));
+		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		       &nvgre_encap_conf.ipv6_dst,
+		       sizeof(nvgre_encap_conf.ipv6_dst));
+		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
+			.type = RTE_FLOW_ITEM_TYPE_IPV6,
+			.spec = &action_nvgre_encap_data->item_ipv6,
+			.mask = &rte_flow_item_ipv6_mask,
+		};
+	}
+	if (!nvgre_encap_conf.select_vlan)
+		action_nvgre_encap_data->items[1].type =
+			RTE_FLOW_ITEM_TYPE_VOID;
+	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
+	       RTE_DIM(nvgre_encap_conf.tni));
+	action->conf = &action_nvgre_encap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index bf39ac3ff..dbba7d253 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -409,6 +409,21 @@ struct vxlan_encap_conf vxlan_encap_conf = {
 	.eth_dst = "\xff\xff\xff\xff\xff\xff",
 };
 
+struct nvgre_encap_conf nvgre_encap_conf = {
+	.select_ipv4 = 1,
+	.select_vlan = 0,
+	.tni = "\x00\x00\x00",
+	.ipv4_src = IPv4(127, 0, 0, 1),
+	.ipv4_dst = IPv4(255, 255, 255, 255),
+	.ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x00\x01",
+	.ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00"
+		"\x00\x00\x00\x00\x00\x00\x11\x11",
+	.vlan_tci = 0,
+	.eth_src = "\x00\x00\x00\x00\x00\x00",
+	.eth_dst = "\xff\xff\xff\xff\xff\xff",
+};
+
 /* Forward function declarations */
 static void map_port_queue_stats_mapping_registers(portid_t pi,
 						   struct rte_port *port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0d6618788..2b1e448b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -496,6 +496,21 @@ struct vxlan_encap_conf {
 };
 struct vxlan_encap_conf vxlan_encap_conf;
 
+/* NVGRE encap/decap parameters. */
+struct nvgre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
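+	/* 24-bit TNI, stored in network byte order. */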
+	uint8_t tni[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct nvgre_encap_conf nvgre_encap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 3281778d9..94d8d38c7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1552,6 +1552,21 @@ flow rule using the action vxlan_encap will use the last configuration set.
 To use a different encapsulation header, one of these commands must be run
 again before creating the flow rule.
 
+Config NVGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an NVGRE tunnel::
+
+ set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) ip-dst (ip-dst) \
+        eth-src (eth-src) eth-dst (eth-dst)
+ set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) \
+        ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the action nvgre_encap will use the last configuration set.
+To use a different encapsulation header, one of these commands must be run
+again before creating the flow rule.
+
 Port Functions
 --------------
 
@@ -3673,6 +3688,12 @@ This section lists supported actions and their attributes, if any.
 - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of
   the VXLAN tunnel network overlay from the matched flow.
 
+- ``nvgre_encap``: Performs an NVGRE encapsulation; the outer layer
+  configuration is done through `Config NVGRE Encap outer layers`_.
+
+- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
+  the NVGRE tunnel network overlay from the matched flow.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -3970,6 +3991,37 @@ IPv6 VXLAN outer header::
  testpmd> flow create 0 ingress pattern end actions vxlan_encap /
          queue index 0 / end
 
+Sample NVGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The NVGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; these can be changed using the following commands.
+
+IPv4 NVGRE outer header::
+
+ testpmd> set nvgre ip-version ipv4 tni 4 ip-src 127.0.0.1 ip-dst 128.0.0.1
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ip-version ipv4 tni 4 ip-src 127.0.0.1
+         ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
+         eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+         queue index 0 / end
+
+IPv6 NVGRE outer header::
+
+ testpmd> set nvgre ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
+ testpmd> set nvgre-with-vlan ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222
+        vlan-tci 34 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern end actions nvgre_encap /
+        queue index 0 / end
+
 BPF Functions
 --------------
 
-- 
2.18.0

^ permalink raw reply	[flat|nested] 63+ messages in thread
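
For completeness, here is where the parsed action ends up: a minimal
sketch of the equivalent of "flow create 0 ingress pattern end actions
vxlan_encap / queue index 0 / end" written directly against the rte_flow
API.  This is not part of the patch; it assumes the 18.08 rte_flow
definitions, and create_encap_flow() is a made-up helper name:

 #include <rte_flow.h>

 static struct rte_flow *
 create_encap_flow(uint16_t port_id,
 		  const struct rte_flow_action_vxlan_encap *encap)
 {
 	struct rte_flow_attr attr = { .ingress = 1 };
 	/* Empty pattern: match every ingress packet. */
 	struct rte_flow_item pattern[] = {
 		{ .type = RTE_FLOW_ITEM_TYPE_END },
 	};
 	struct rte_flow_action_queue queue = { .index = 0 };
 	struct rte_flow_action actions[] = {
 		{
 			.type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
 			/* Item list built by parse_vc_action_vxlan_encap()
 			 * from the global vxlan_encap_conf. */
 			.conf = encap,
 		},
 		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
 		{ .type = RTE_FLOW_ACTION_TYPE_END },
 	};
 	struct rte_flow_error error;

 	return rte_flow_create(port_id, &attr, pattern, actions, &error);
 }

The NVGRE case is identical, substituting RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
and struct rte_flow_action_nvgre_encap.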

* Re: [dpdk-dev] [PATCH v9 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap
  2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
  2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
  2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
@ 2018-07-18  8:31                 ` Ferruh Yigit
  2 siblings, 0 replies; 63+ messages in thread
From: Ferruh Yigit @ 2018-07-18  8:31 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Stephen Hemminger
  Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam

On 7/6/2018 7:43 AM, Nelio Laranjeiro wrote:
> This series adds an easy and maintainable configuration version support for
> those two actions for 18.08 by using global variables in testpmd to store the
> necessary information for the tunnel encapsulation.  Those variables are used
> in conjunction of RTE_FLOW_ACTION_{VXLAN,NVGRE}_ENCAP action to create easily
> the action for flows.
> 
> A common way to use it:
> 
>  set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 27.0.0.1
>         ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>         queue index 0 / end
> 
>  set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 p-src
>          127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11
>          eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
>  set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1
>         ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
>  set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4
>          ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11
>          eth-dst 22:22:22:22:22:22
>  flow create 0 ingress pattern end actions vxlan_encap /
>          queue index 0 / end
> 
> This also replace the proposal done by Mohammad Abdul Awal [1] which handles
> in a more complex way for the same work.
> 
> Note this API has already a modification planned for 18.11 [2] thus those
> series should have a limited life for a single release.
> 
> [1] https://dpdk.org/ml/archives/dev/2018-May/101403.html
> [2] https://dpdk.org/ml/archives/dev/2018-June/103485.html
> 
> Changes in v9:
> 
> - fix the help for NVGRE.
> 
> Changes in v8:
> 
> - add static tokens in the command line to be user friendly.
> 
> Changes in v7:
> 
> - add missing documentation added in v5 and removed in v6 by mistake.
> 
> Changes in v6:
> 
> - fix compilation under redhat 7.5 with gcc 4.8.5 20150623
> 
> Changes in v5:
> 
> - fix documentation generation.
> - add more explanation on how to generate several encapsulated flows.
> 
> Changes in v4:
> 
> - fix big endian issue on vni and tni.
> - add samples to the documentation.
> - set the VXLAN UDP source port to 0 by default to let the driver generate it
>   from the inner hash as described in the RFC 7348.
> - use default rte flow mask for each item.
> 
> Changes in v3:
> 
> - support VLAN in the outer encapsulation.
> - fix the documentation with missing arguments.
> 
> Changes in v2:
> 
> - add default IPv6 values for NVGRE encapsulation.
> - replace VXLAN to NVGRE in comments concerning NVGRE layer.
> 
> 
> Nelio Laranjeiro (2):
>   app/testpmd: add VXLAN encap/decap support
>   app/testpmd: add NVGRE encap/decap support

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2018-07-18  8:31 UTC | newest]

Thread overview: 63+ messages
2018-06-14 15:08 [dpdk-dev] [PATCH 0/2] implement VXLAN/NVGRE Encap/Decap in testpmd Nelio Laranjeiro
2018-06-14 15:08 ` [dpdk-dev] [PATCH 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-14 15:09 ` [dpdk-dev] [PATCH 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-15  9:32   ` Iremonger, Bernard
2018-06-15 11:25     ` Nélio Laranjeiro
2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
2018-06-18  9:05   ` Ferruh Yigit
2018-06-18  9:38     ` Nélio Laranjeiro
2018-06-18 14:40       ` Ferruh Yigit
2018-06-19  7:32         ` Nélio Laranjeiro
2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 " Nelio Laranjeiro
2018-06-18 16:28     ` Iremonger, Bernard
2018-06-19  9:41       ` Nélio Laranjeiro
2018-06-21  7:13     ` [dpdk-dev] [PATCH v4 " Nelio Laranjeiro
2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-26 10:51         ` Ori Kam
2018-06-26 12:43         ` Iremonger, Bernard
2018-06-21  7:13       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-26 10:48         ` Ori Kam
2018-06-26 12:48         ` Iremonger, Bernard
2018-06-26 15:15           ` Nélio Laranjeiro
2018-06-22  7:42       ` [dpdk-dev] [PATCH v4 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
2018-06-22  8:31         ` Nélio Laranjeiro
2018-06-22  8:51           ` Mohammad Abdul Awal
2018-06-22  9:08             ` Nélio Laranjeiro
2018-06-22 10:19               ` Mohammad Abdul Awal
2018-06-26 15:15                 ` Nélio Laranjeiro
2018-06-27  8:53       ` [dpdk-dev] [PATCH v5 " Nelio Laranjeiro
2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-27  8:53         ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-27  9:53         ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nelio Laranjeiro
2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-27  9:53           ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-27 10:00           ` [dpdk-dev] [PATCH v6 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Nélio Laranjeiro
2018-06-27 11:45           ` [dpdk-dev] [PATCH v7 " Nelio Laranjeiro
2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-27 11:45             ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-07-02 10:40             ` [dpdk-dev] [PATCH v7 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Mohammad Abdul Awal
2018-07-04 14:54               ` Ferruh Yigit
2018-07-05  9:37                 ` Nélio Laranjeiro
2018-07-05 14:33             ` [dpdk-dev] [PATCH v8 " Nelio Laranjeiro
2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-07-05 15:03                 ` Mohammad Abdul Awal
2018-07-05 14:33               ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-07-05 15:07                 ` Mohammad Abdul Awal
2018-07-05 15:17                   ` Nélio Laranjeiro
2018-07-05 14:48               ` [dpdk-dev] [PATCH v8 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Adrien Mazarguil
2018-07-05 14:57               ` Mohammad Abdul Awal
2018-07-06  6:43               ` [dpdk-dev] [PATCH v9 " Nelio Laranjeiro
2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-07-06  6:43                 ` [dpdk-dev] [PATCH v9 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-07-18  8:31                 ` [dpdk-dev] [PATCH v9 0/2] app/testpmd implement VXLAN/NVGRE Encap/Decap Ferruh Yigit
2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: add VXLAN encap/decap support Nelio Laranjeiro
2018-06-19  7:09     ` Ori Kam
2018-06-19  9:40       ` Nélio Laranjeiro
2018-06-18 14:36   ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-19  7:08     ` Ori Kam
2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 1/2] app/testpmd: add VXLAN " Nelio Laranjeiro
2018-06-18 12:47   ` Mohammad Abdul Awal
2018-06-18 21:02   ` Stephen Hemminger
2018-06-19  9:44     ` Nélio Laranjeiro
2018-06-18  8:52 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add NVGRE " Nelio Laranjeiro
2018-06-18 12:48   ` Mohammad Abdul Awal
