DPDK patches and discussions
* [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville
@ 2014-10-23 13:18 Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 01/10] librte_mbuf:the rte_mbuf structure changes Jijiang Liu
                   ` (9 more replies)
  0 siblings, 10 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

This patch set supports VxLAN on Fortville based on the latest rte_mbuf structure.
 
It includes:
 - Support VxLAN packet identification by configuring the UDP tunneling port.
 - Support VxLAN packet filters, which use MAC addresses and VLAN IDs to direct
   packets to a queue. The supported filter types are listed below:
   1. Inner MAC address and inner VLAN ID
   2. Inner MAC address, inner VLAN ID and tenant ID
   3. Inner MAC address and tenant ID
   4. Inner MAC address
   5. Outer MAC address, tenant ID and inner MAC address
 - Support VxLAN Tx checksum offload, which includes outer L3 (IP), inner L3 (IP)
   and inner L4 (UDP, TCP and SCTP) checksums. A minimal usage sketch follows.
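
A minimal caller-side sketch of the new ethdev-level interfaces used across
this series (UDP tunneling port configuration plus the tunnel filter through
filter_ctrl). The port number, MAC address, VNI and queue are example values
only:

  #include <rte_ether.h>
  #include <rte_ethdev.h>
  #include <rte_eth_ctrl.h>

  static int setup_vxlan(uint8_t port_id, struct ether_addr *inner_mac)
  {
          struct rte_eth_udp_tunnel udp = {
                  .udp_port = 4789,                    /* IANA VxLAN port */
                  .prot_type = RTE_TUNNEL_TYPE_VXLAN,
          };
          struct rte_eth_tunnel_filter_conf filter = {
                  .inner_mac = inner_mac,
                  .filter_type = ETH_TUNNEL_FILTER_IMAC,
                  .tunnel_type = RTE_TUNNEL_TYPE_VXLAN,
                  .tenant_id = 1000,                   /* example VNI */
                  .queue_id = 1,                       /* example Rx queue */
          };
          int ret;

          /* Identify VxLAN packets by their destination UDP port. */
          ret = rte_eth_dev_udp_tunnel_add(port_id, &udp);
          if (ret < 0)
                  return ret;
          /* Direct packets matching the inner MAC to the example queue. */
          return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_TUNNEL,
                                         RTE_ETH_FILTER_ADD, &filter);
  }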
 
Change notes:
 
 v7)  * Remove the "tunnel_type" field from the rte_eth_conf structure.
      * Remove the "to_queue" field from rte_eth_tunnel_filter_conf; it will
        probably be added back as a configurable field later.
      * Remove the check of the tunneling packet type in testpmd; add an Rx
        tunnel offload flag instead.
      * Split the APIs and data structures of VxLAN into two patches.

Jijiang Liu (10):
  change rte_mbuf structures 
  add data structures of UDP tunneling 
  add VxLAN packet identification API in librte_ether
  support VxLAN packet identification in i40e
  test VxLAN packet identification in testpmd.
  add data structures of tunneling filter in rte_eth_ctrl.h
  implement the API of VxLAN packet filter in i40e
  test VxLAN packet filter
  support VxLAN Tx checksum offload in i40e
  test VxLAN Tx checksum offload


 app/test-pmd/cmdline.c            |  228 +++++++++++++++++++++++++-
 app/test-pmd/config.c             |    6 +-
 app/test-pmd/csumonly.c           |  194 ++++++++++++++++++++--
 app/test-pmd/rxonly.c             |   50 ++++++-
 lib/librte_ether/rte_eth_ctrl.h   |   61 +++++++
 lib/librte_ether/rte_ethdev.c     |   52 ++++++
 lib/librte_ether/rte_ethdev.h     |   54 ++++++
 lib/librte_ether/rte_ether.h      |   13 ++
 lib/librte_mbuf/rte_mbuf.h        |   28 +++-
 lib/librte_pmd_i40e/i40e_ethdev.c |  331 ++++++++++++++++++++++++++++++++++++-
 lib/librte_pmd_i40e/i40e_ethdev.h |    8 +-
 lib/librte_pmd_i40e/i40e_rxtx.c   |  151 +++++++++++------
 12 files changed, 1096 insertions(+), 80 deletions(-)

-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 01/10] librte_mbuf:the rte_mbuf structure changes
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 02/10] librte_ether:add the basic data structures of VxLAN Jijiang Liu
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Replace the "reserved2" field with the "packet_type" field and add the "inner_l2_l3_len" field in the rte_mbuf structure.
The "packet_type" field is used to indicate the ordinary packet format and also tunneled packet formats such as IP in IP, IP in GRE, MAC in GRE and MAC in UDP.
The "inner_l2_len" and "inner_l3_len" fields are added in the second cache line; together they use 2 bytes for Tx offloading of tunneled packets.
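
As a rough illustration (not part of this patch), a transmit path preparing a
tunneled packet would fill the new fields as below; the inner header types are
just an example:

  #include <rte_mbuf.h>
  #include <rte_ether.h>
  #include <rte_ip.h>

  static inline void set_inner_hdr_lens(struct rte_mbuf *m)
  {
          /* Example: inner Ethernet + IPv4 without options. */
          m->inner_l2_len = sizeof(struct ether_hdr); /* 14, fits the 7-bit field */
          m->inner_l3_len = sizeof(struct ipv4_hdr);  /* 20, fits the 9-bit field */
  }

Both fields can also be cleared in one store through the union, which is what
rte_pktmbuf_reset() does with "m->inner_l2_l3_len = 0".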

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h |   25 ++++++++++++++++++++++++-
 1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ddadc21..497d88b 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -163,7 +163,14 @@ struct rte_mbuf {
 
 	/* remaining bytes are set on RX when pulling packet from descriptor */
 	MARKER rx_descriptor_fields1;
-	uint16_t reserved2;       /**< Unused field. Required for padding */
+
+	/**
+	 * The packet type, which is used to indicate the ordinary packet
+	 * format and also tunneled packet formats, i.e. each value
+	 * represents a type of packet.
+	 */
+	uint16_t packet_type;
+
 	uint16_t data_len;        /**< Amount of data in segment buffer. */
 	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 	uint16_t vlan_tci;        /**< VLAN Tag Control Identifier (CPU order) */
@@ -196,6 +203,18 @@ struct rte_mbuf {
 			uint16_t l2_len:7;      /**< L2 (MAC) Header Length. */
 		};
 	};
+
+	/* fields for TX offloading of tunnels */
+	union {
+		uint16_t inner_l2_l3_len;
+		/**< combined inner l2/l3 lengths as single var */
+		struct {
+			uint16_t inner_l3_len:9;
+			/**< inner L3 (IP) Header Length. */
+			uint16_t inner_l2_len:7;
+			/**< inner L2 (MAC) Header Length. */
+		};
+	};
 } __rte_cache_aligned;
 
 /**
@@ -546,11 +565,13 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->next = NULL;
 	m->pkt_len = 0;
 	m->l2_l3_len = 0;
+	m->inner_l2_l3_len = 0;
 	m->vlan_tci = 0;
 	m->nb_segs = 1;
 	m->port = 0xff;
 
 	m->ol_flags = 0;
+	m->packet_type = 0;
 	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
 			RTE_PKTMBUF_HEADROOM : m->buf_len;
 
@@ -614,12 +635,14 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
 	mi->port = md->port;
 	mi->vlan_tci = md->vlan_tci;
 	mi->l2_l3_len = md->l2_l3_len;
+	mi->inner_l2_l3_len = md->inner_l2_l3_len;
 	mi->hash = md->hash;
 
 	mi->next = NULL;
 	mi->pkt_len = mi->data_len;
 	mi->nb_segs = 1;
 	mi->ol_flags = md->ol_flags;
+	mi->packet_type = md->packet_type;
 
 	__rte_mbuf_sanity_check(mi, 1);
 	__rte_mbuf_sanity_check(md, 0);
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 02/10] librte_ether:add the basic data structures of VxLAN
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 01/10] librte_mbuf:the rte_mbuf structure changes Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 03/10] librte_ether:add VxLAN packet identification API Jijiang Liu
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Add definitions of the basic VxLAN data structures.
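
As an aside (not part of this patch), the new header definition allows the
24-bit VNI to be read once the outer headers are known; a minimal sketch
assuming outer Ethernet + IPv4 without options + UDP:

  #include <rte_mbuf.h>
  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_udp.h>
  #include <rte_byteorder.h>

  static inline uint32_t vxlan_vni(struct rte_mbuf *m)
  {
          struct vxlan_hdr *vxh = (struct vxlan_hdr *)
                  (rte_pktmbuf_mtod(m, char *) + sizeof(struct ether_hdr) +
                   sizeof(struct ipv4_hdr) + sizeof(struct udp_hdr));

          /* The VNI occupies the upper 24 bits of vx_vni (network order). */
          return rte_be_to_cpu_32(vxh->vx_vni) >> 8;
  }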

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_ether/rte_eth_ctrl.h |   12 ++++++++++++
 lib/librte_ether/rte_ethdev.h   |    8 ++++++++
 lib/librte_ether/rte_ether.h    |   13 +++++++++++++
 3 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
index df21ac6..9a90d19 100644
--- a/lib/librte_ether/rte_eth_ctrl.h
+++ b/lib/librte_ether/rte_eth_ctrl.h
@@ -71,6 +71,18 @@ enum rte_filter_op {
 	RTE_ETH_FILTER_OP_MAX
 };
 
+/**
+ * Tunneled type.
+ */
+enum rte_eth_tunnel_type {
+	RTE_TUNNEL_TYPE_NONE = 0,
+	RTE_TUNNEL_TYPE_VXLAN,
+	RTE_TUNNEL_TYPE_GENEVE,
+	RTE_TUNNEL_TYPE_TEREDO,
+	RTE_TUNNEL_TYPE_NVGRE,
+	RTE_TUNNEL_TYPE_MAX,
+};
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index b69a6af..46a5568 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -710,6 +710,14 @@ struct rte_fdir_conf {
 };
 
 /**
+ * UDP tunneling configuration.
+ */
+struct rte_eth_udp_tunnel {
+	uint16_t udp_port;
+	uint8_t prot_type;
+};
+
+/**
  *  Possible l4type of FDIR filters.
  */
 enum rte_l4type {
diff --git a/lib/librte_ether/rte_ether.h b/lib/librte_ether/rte_ether.h
index 2e08f23..100cc52 100644
--- a/lib/librte_ether/rte_ether.h
+++ b/lib/librte_ether/rte_ether.h
@@ -286,6 +286,16 @@ struct vlan_hdr {
 	uint16_t eth_proto;/**< Ethernet type of encapsulated frame. */
 } __attribute__((__packed__));
 
+/**
+ * VXLAN protocol header.
+ * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
+ * Reserved fields (24 bits and 8 bits)
+ */
+struct vxlan_hdr {
+	uint32_t vx_flags; /**< flag (8) + Reserved (24). */
+	uint32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+} __attribute__((__packed__));
+
 /* Ethernet frame types */
 #define ETHER_TYPE_IPv4 0x0800 /**< IPv4 Protocol. */
 #define ETHER_TYPE_IPv6 0x86DD /**< IPv6 Protocol. */
@@ -294,6 +304,9 @@ struct vlan_hdr {
 #define ETHER_TYPE_VLAN 0x8100 /**< IEEE 802.1Q VLAN tagging. */
 #define ETHER_TYPE_1588 0x88F7 /**< IEEE 802.1AS 1588 Precise Time Protocol. */
 
+#define ETHER_VXLAN_HLEN (sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr))
+/**< VxLAN tunnel header length. */
+
 #ifdef __cplusplus
 }
 #endif
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 03/10] librte_ether:add VxLAN packet identification API
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 01/10] librte_mbuf:the rte_mbuf structure changes Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 02/10] librte_ether:add the basic data structures of VxLAN Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 04/10] i40e:support VxLAN packet identification in i40e Jijiang Liu
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

There are some destination UDP port numbers that have a unique meaning.
In terms of VxLAN, "IANA has assigned the value 4789 for the VXLAN UDP port, and this value SHOULD be used by default as the destination UDP port. Some early implementations of VXLAN have used other values for the destination port. To enable interoperability with these implementations, the destination port SHOULD be configurable."

Add two APIs to librte_ether to support UDP tunneling port configuration on i40e.
Currently, only VxLAN is implemented in this patch set.
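
A caller-side sketch (illustrative only) that registers or removes the IANA
default port and reports the error codes documented below:

  #include <stdio.h>
  #include <string.h>
  #include <rte_ethdev.h>

  static int vxlan_udp_port_config(uint8_t port_id, int add)
  {
          struct rte_eth_udp_tunnel tunnel = {
                  .udp_port = 4789,                    /* IANA default */
                  .prot_type = RTE_TUNNEL_TYPE_VXLAN,
          };
          int ret = add ? rte_eth_dev_udp_tunnel_add(port_id, &tunnel) :
                          rte_eth_dev_udp_tunnel_delete(port_id, &tunnel);

          if (ret < 0) /* -ENODEV, -EINVAL or -ENOTSUP */
                  fprintf(stderr, "port %d: %s\n", port_id, strerror(-ret));
          return ret;
  }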

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_ether/rte_ethdev.c |   52 +++++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.h |   46 ++++++++++++++++++++++++++++++++++++
 2 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 50f10d9..ff1c769 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -2038,6 +2038,58 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 }
 
 int
+rte_eth_dev_udp_tunnel_add(uint8_t port_id,
+			   struct rte_eth_udp_tunnel *udp_tunnel)
+{
+	struct rte_eth_dev *dev;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	if (udp_tunnel == NULL) {
+		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		return -EINVAL;
+	}
+
+	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		return -EINVAL;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
+}
+
+int
+rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
+			      struct rte_eth_udp_tunnel *udp_tunnel)
+{
+	struct rte_eth_dev *dev;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+	dev = &rte_eth_devices[port_id];
+
+	if (udp_tunnel == NULL) {
+		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		return -EINVAL;
+	}
+
+	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
+}
+
+int
 rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 46a5568..8bf274d 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -1274,6 +1274,15 @@ typedef int (*eth_mirror_rule_reset_t)(struct rte_eth_dev *dev,
 				  uint8_t rule_id);
 /**< @internal Remove a traffic mirroring rule on an Ethernet device */
 
+typedef int (*eth_udp_tunnel_add_t)(struct rte_eth_dev *dev,
+				    struct rte_eth_udp_tunnel *tunnel_udp);
+/**< @internal Add tunneling UDP info */
+
+typedef int (*eth_udp_tunnel_del_t)(struct rte_eth_dev *dev,
+				    struct rte_eth_udp_tunnel *tunnel_udp);
+/**< @internal Delete tunneling UDP info */
+
+
 #ifdef RTE_NIC_BYPASS
 
 enum {
@@ -1454,6 +1463,8 @@ struct eth_dev_ops {
 	eth_set_vf_rx_t            set_vf_rx;  /**< enable/disable a VF receive */
 	eth_set_vf_tx_t            set_vf_tx;  /**< enable/disable a VF transmit */
 	eth_set_vf_vlan_filter_t   set_vf_vlan_filter;  /**< Set VF VLAN filter */
+	eth_udp_tunnel_add_t       udp_tunnel_add;
+	eth_udp_tunnel_del_t       udp_tunnel_del;
 	eth_set_queue_rate_limit_t set_queue_rate_limit;   /**< Set queue rate limit */
 	eth_set_vf_rate_limit_t    set_vf_rate_limit;   /**< Set VF rate limit */
 
@@ -3350,6 +3361,41 @@ int
 rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 			      struct rte_eth_rss_conf *rss_conf);
 
+ /**
+ * Add a UDP tunneling port to an Ethernet device so that a specific type
+ * of tunneling packet can be identified by its UDP port number.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tunnel_udp
+ *   UDP tunneling configuration.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if hardware doesn't support tunnel type.
+ */
+int
+rte_eth_dev_udp_tunnel_add(uint8_t port_id,
+			   struct rte_eth_udp_tunnel *tunnel_udp);
+
+ /**
+ * Delete the UDP tunneling port configuration of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tunnel_udp
+ *   UDP tunneling configuration.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if hardware doesn't support tunnel type.
+ */
+int
+rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
+			      struct rte_eth_udp_tunnel *tunnel_udp);
+
 /**
  * add syn filter
  *
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 04/10] i40e:support VxLAN packet identification in i40e
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (2 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 03/10] librte_ether:add VxLAN packet identification API Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification Jijiang Liu
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Implement the VxLAN destination UDP port configuration API in librte_pmd_i40e,
and add new Rx offload flags to support VxLAN packet offload.
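
An application consuming the new flags might then do something like the
following (a sketch only; the flag and field names come from this patch):

  #include <rte_mbuf.h>

  /* Return non-zero if the PMD marked the mbuf as a tunneled packet. */
  static inline int is_tunneled_pkt(const struct rte_mbuf *m)
  {
          if (m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR | PKT_RX_TUNNEL_IPV6_HDR)) {
                  /* m->packet_type holds the hardware packet type (PTYPE). */
                  return 1;
          }
          return 0;
  }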

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h        |    2 +
 lib/librte_pmd_i40e/i40e_ethdev.c |  157 +++++++++++++++++++++++++++++++++++++
 lib/librte_pmd_i40e/i40e_ethdev.h |    8 ++-
 lib/librte_pmd_i40e/i40e_rxtx.c   |  105 +++++++++++++-----------
 4 files changed, 223 insertions(+), 49 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 497d88b..9af3bd9 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -91,6 +91,8 @@ extern "C" {
 #define PKT_RX_IPV6_HDR_EXT  (1ULL << 8)  /**< RX packet with extended IPv6 header. */
 #define PKT_RX_IEEE1588_PTP  (1ULL << 9)  /**< RX IEEE1588 L2 Ethernet PT Packet. */
 #define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
+#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
 
 #define PKT_TX_VLAN_PKT      (1ULL << 55) /**< TX packet is a 802.1q VLAN packet. */
 #define PKT_TX_IP_CKSUM      (1ULL << 54) /**< IP cksum of TX pkt. computed by NIC. */
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 3b75f0f..eb643e5 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -186,6 +186,10 @@ static int i40e_dev_rss_hash_update(struct rte_eth_dev *dev,
 				    struct rte_eth_rss_conf *rss_conf);
 static int i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				      struct rte_eth_rss_conf *rss_conf);
+static int i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
+				struct rte_eth_udp_tunnel *udp_tunnel);
+static int i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
+				struct rte_eth_udp_tunnel *udp_tunnel);
 static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
 				enum rte_filter_type filter_type,
 				enum rte_filter_op filter_op,
@@ -241,6 +245,8 @@ static struct eth_dev_ops i40e_eth_dev_ops = {
 	.reta_query                   = i40e_dev_rss_reta_query,
 	.rss_hash_update              = i40e_dev_rss_hash_update,
 	.rss_hash_conf_get            = i40e_dev_rss_hash_conf_get,
+	.udp_tunnel_add               = i40e_dev_udp_tunnel_add,
+	.udp_tunnel_del               = i40e_dev_udp_tunnel_del,
 	.filter_ctrl                  = i40e_dev_filter_ctrl,
 };
 
@@ -4092,6 +4098,157 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port)
+{
+	uint8_t i;
+
+	for (i = 0; i < I40E_MAX_PF_UDP_OFFLOAD_PORTS; i++) {
+		if (pf->vxlan_ports[i] == port)
+			return i;
+	}
+
+	return -1;
+}
+
+static int
+i40e_add_vxlan_port(struct i40e_pf *pf, uint16_t port)
+{
+	int  idx, ret;
+	uint8_t filter_idx;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+	idx = i40e_get_vxlan_port_idx(pf, port);
+
+	/* Check if port already exists */
+	if (idx >= 0) {
+		PMD_DRV_LOG(ERR, "Port %d already offloaded\n", port);
+		return -EINVAL;
+	}
+
+	/* Now check if there is space to add the new port */
+	idx = i40e_get_vxlan_port_idx(pf, 0);
+	if (idx < 0) {
+		PMD_DRV_LOG(ERR, "Maximum number of UDP ports reached,"
+			"not adding port %d\n", port);
+		return -ENOSPC;
+	}
+
+	ret =  i40e_aq_add_udp_tunnel(hw, port, I40E_AQC_TUNNEL_TYPE_VXLAN,
+					&filter_idx, NULL);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to add VxLAN UDP port %d\n", port);
+		return -1;
+	}
+
+	PMD_DRV_LOG(INFO, "Added port %d with AQ command with index %d\n",
+			 port, filter_idx);
+
+	/* New port: add it and mark its index in the bitmap */
+	pf->vxlan_ports[idx] = port;
+	pf->vxlan_bitmap |= (1 << idx);
+
+	if (!(pf->flags & I40E_FLAG_VXLAN))
+		pf->flags |= I40E_FLAG_VXLAN;
+
+	return 0;
+}
+
+static int
+i40e_del_vxlan_port(struct i40e_pf *pf, uint16_t port)
+{
+	int idx;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+	if (!(pf->flags & I40E_FLAG_VXLAN)) {
+		PMD_DRV_LOG(ERR, "VxLAN tunneling mode is not configured\n");
+		return -EINVAL;
+	}
+
+	idx = i40e_get_vxlan_port_idx(pf, port);
+
+	if (idx < 0) {
+		PMD_DRV_LOG(ERR, "Port %d doesn't exist\n", port);
+		return -EINVAL;
+	}
+
+	if (i40e_aq_del_udp_tunnel(hw, idx, NULL) < 0) {
+		PMD_DRV_LOG(ERR, "Failed to delete VxLAN UDP port %d\n", port);
+		return -1;
+	}
+
+	PMD_DRV_LOG(INFO, "Deleted port %d with AQ command with index %d\n",
+			port, idx);
+
+	pf->vxlan_ports[idx] = 0;
+	pf->vxlan_bitmap &= ~(1 << idx);
+
+	if (!pf->vxlan_bitmap)
+		pf->flags &= ~I40E_FLAG_VXLAN;
+
+	return 0;
+}
+
+/* Add UDP tunneling port */
+static int
+i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
+			struct rte_eth_udp_tunnel *udp_tunnel)
+{
+	int ret = 0;
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	if (udp_tunnel == NULL)
+		return -EINVAL;
+
+	switch (udp_tunnel->prot_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port);
+		break;
+
+	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_TUNNEL_TYPE_TEREDO:
+		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.\n");
+		ret = -1;
+		break;
+
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type\n");
+		ret = -1;
+		break;
+	}
+
+	return ret;
+}
+
+/* Remove UDP tunneling port */
+static int
+i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
+			struct rte_eth_udp_tunnel *udp_tunnel)
+{
+	int ret = 0;
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	if (udp_tunnel == NULL)
+		return -EINVAL;
+
+	switch (udp_tunnel->prot_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
+		break;
+	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_TUNNEL_TYPE_TEREDO:
+		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.\n");
+		ret = -1;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type\n");
+		ret = -1;
+		break;
+	}
+
+	return ret;
+}
+
 /* Configure RSS */
 static int
 i40e_pf_config_rss(struct i40e_pf *pf)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index 1d42cd2..826a1fb 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -60,13 +60,15 @@
 #define I40E_FLAG_HEADER_SPLIT_DISABLED (1ULL << 4)
 #define I40E_FLAG_HEADER_SPLIT_ENABLED  (1ULL << 5)
 #define I40E_FLAG_FDIR                  (1ULL << 6)
+#define I40E_FLAG_VXLAN                 (1ULL << 7)
 #define I40E_FLAG_ALL (I40E_FLAG_RSS | \
 		       I40E_FLAG_DCB | \
 		       I40E_FLAG_VMDQ | \
 		       I40E_FLAG_SRIOV | \
 		       I40E_FLAG_HEADER_SPLIT_DISABLED | \
 		       I40E_FLAG_HEADER_SPLIT_ENABLED | \
-		       I40E_FLAG_FDIR)
+		       I40E_FLAG_FDIR | \
+		       I40E_FLAG_VXLAN)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
 	ETH_RSS_NONF_IPV4_UDP | \
@@ -246,6 +248,10 @@ struct i40e_pf {
 	uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
 	uint16_t vf_nb_qps; /* The number of queue pairs of VF */
 	uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+
+	/* store VxLAN UDP ports */
+	uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
+	uint16_t vxlan_bitmap; /* Vxlan bit mask */
 };
 
 enum pending_msg {
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2b53677..2108290 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -208,34 +208,34 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
 		PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
 		PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
 		PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 59 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 60 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 61 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
 		0, /* PTYPE 62 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 63 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 64 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 65 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 66 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 67 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 68 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
 		0, /* PTYPE 69 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 70 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 71 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 72 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 73 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 74 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 75 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 76 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
 		0, /* PTYPE 77 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 78 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 79 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 80 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 81 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 82 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 83 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
 		0, /* PTYPE 84 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 85 */
-		PKT_RX_IPV4_HDR_EXT, /* PTYPE 86 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
+		PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
 		PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
 		PKT_RX_IPV6_HDR, /* PTYPE 88 */
 		PKT_RX_IPV6_HDR, /* PTYPE 89 */
@@ -274,34 +274,34 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
 		PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
 		PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
 		PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 125 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 126 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 127 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
 		0, /* PTYPE 128 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 129 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 130 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 131 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 132 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 133 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 134 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
 		0, /* PTYPE 135 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 136 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 137 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 138 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 139 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 140 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 141 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 142 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
 		0, /* PTYPE 143 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 144 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 145 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 146 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 147 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 148 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 149 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
 		0, /* PTYPE 150 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 151 */
-		PKT_RX_IPV6_HDR_EXT, /* PTYPE 152 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
+		PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
 		PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
 		0, /* PTYPE 154 */
 		0, /* PTYPE 155 */
@@ -638,6 +638,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 			pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 			pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
 			mb->ol_flags = pkt_flags;
+
+			mb->packet_type = (uint16_t)((qword1 &
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT);
 			if (pkt_flags & PKT_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp->wb.qword0.hi_dword.rss);
@@ -873,6 +877,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+		rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
+				I40E_RXD_QW1_PTYPE_SHIFT);
 		rxm->ol_flags = pkt_flags;
 		if (pkt_flags & PKT_RX_RSS_HASH)
 			rxm->hash.rss =
@@ -1027,6 +1033,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+		first_seg->packet_type = (uint16_t)((qword1 &
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT);
 		first_seg->ol_flags = pkt_flags;
 		if (pkt_flags & PKT_RX_RSS_HASH)
 			rxm->hash.rss =
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (3 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 04/10] i40e:support VxLAN packet identification in i40e Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 16:10   ` Thomas Monjalon
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 06/10] librte_ether:add data structures of VxLAN filter Jijiang Liu
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Add two commands to test VxLAN packet identification.
The test steps are as follows:
 1> use the commands to add/delete a VxLAN UDP port.
 2> use rxonly mode to receive and display VxLAN packets (an example session
    is shown below).
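
An illustrative testpmd session (4789 is the IANA default VxLAN port, 0 is the
port under test):

  testpmd> rx_vxlan_port add 4789 0
  testpmd> set fwd rxonly
  testpmd> start

Received VxLAN packets are then dumped together with their packet type,
destination UDP port and VNI; "rx_vxlan_port rm 4789 0" removes the port again.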

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 app/test-pmd/cmdline.c |   65 ++++++++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/rxonly.c  |   55 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 118 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0b972f9..4d7b4d1 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -285,6 +285,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Set the outer VLAN TPID for Packet Filtering on"
 			" a port\n\n"
 
+			"rx_vxlan_port add (udp_port) (port_id)\n"
+			"    Add an UDP port for VxLAN packet filter on a port\n\n"
+
+			"rx_vxlan_port rm (udp_port) (port_id)\n"
+			"    Remove an UDP port for VxLAN packet filter on a port\n\n"
+
 			"tx_vlan set vlan_id (port_id)\n"
 			"    Set hardware insertion of VLAN ID in packets sent"
 			" on a port.\n\n"
@@ -6225,6 +6231,64 @@ cmdline_parse_inst_t cmd_vf_rate_limit = {
 	},
 };
 
+/* *** CONFIGURE TUNNEL UDP PORT *** */
+struct cmd_tunnel_udp_config {
+	cmdline_fixed_string_t cmd;
+	cmdline_fixed_string_t what;
+	uint16_t udp_port;
+	uint8_t port_id;
+};
+
+static void
+cmd_tunnel_udp_config_parsed(void *parsed_result,
+			  __attribute__((unused)) struct cmdline *cl,
+			  __attribute__((unused)) void *data)
+{
+	struct cmd_tunnel_udp_config *res = parsed_result;
+	struct rte_eth_udp_tunnel tunnel_udp;
+	int ret;
+
+	tunnel_udp.udp_port = res->udp_port;
+
+	if (!strcmp(res->cmd, "rx_vxlan_port"))
+		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+
+	if (!strcmp(res->what, "add"))
+		ret = rte_eth_dev_udp_tunnel_add(res->port_id, &tunnel_udp);
+	else
+		ret = rte_eth_dev_udp_tunnel_delete(res->port_id, &tunnel_udp);
+
+	if (ret < 0)
+		printf("udp tunneling config error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_tunnel_udp_config_cmd =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
+				cmd, "rx_vxlan_port");
+cmdline_parse_token_string_t cmd_tunnel_udp_config_what =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
+				what, "add#rm");
+cmdline_parse_token_num_t cmd_tunnel_udp_config_udp_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
+				udp_port, UINT16);
+cmdline_parse_token_num_t cmd_tunnel_udp_config_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
+				port_id, UINT8);
+
+cmdline_parse_inst_t cmd_tunnel_udp_config = {
+	.f = cmd_tunnel_udp_config_parsed,
+	.data = (void *)0,
+	.help_str = "add/rm a tunneling UDP port filter: "
+			"rx_vxlan_port add udp_port port_id",
+	.tokens = {
+		(void *)&cmd_tunnel_udp_config_cmd,
+		(void *)&cmd_tunnel_udp_config_what,
+		(void *)&cmd_tunnel_udp_config_udp_port,
+		(void *)&cmd_tunnel_udp_config_port_id,
+		NULL,
+	},
+};
+
 /* *** CONFIGURE VM MIRROR VLAN/POOL RULE *** */
 struct cmd_set_mirror_mask_result {
 	cmdline_fixed_string_t set;
@@ -7518,6 +7582,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
 	(cmdline_parse_inst_t *)&cmd_queue_rate_limit,
 	(cmdline_parse_inst_t *)&cmd_vf_rate_limit,
+	(cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_mask,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_link,
 	(cmdline_parse_inst_t *)&cmd_reset_mirror_rule,
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 98c788b..d3be62e 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -66,10 +66,12 @@
 #include <rte_ether.h>
 #include <rte_ethdev.h>
 #include <rte_string_fns.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
 
 #include "testpmd.h"
 
-#define MAX_PKT_RX_FLAGS 11
+#define MAX_PKT_RX_FLAGS 13
 static const char *pkt_rx_flag_names[MAX_PKT_RX_FLAGS] = {
 	"VLAN_PKT",
 	"RSS_HASH",
@@ -84,6 +86,9 @@ static const char *pkt_rx_flag_names[MAX_PKT_RX_FLAGS] = {
 
 	"IEEE1588_PTP",
 	"IEEE1588_TMST",
+
+	"PKT_RX_TUNNEL_IPV4_HDR",
+	"PKT_RX_TUNNEL_IPV6_HDR",
 };
 
 static inline void
@@ -111,7 +116,9 @@ pkt_burst_receive(struct fwd_stream *fs)
 	uint16_t eth_type;
 	uint64_t ol_flags;
 	uint16_t nb_rx;
-	uint16_t i;
+	uint16_t i, packet_type;
+	uint64_t is_encapsulation;
+
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
 	uint64_t end_tsc;
@@ -152,6 +159,11 @@ pkt_burst_receive(struct fwd_stream *fs)
 		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
 		eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
 		ol_flags = mb->ol_flags;
+		packet_type = mb->packet_type;
+
+		is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
+				PKT_RX_TUNNEL_IPV6_HDR);
+
 		print_ether_addr("  src=", &eth_hdr->s_addr);
 		print_ether_addr(" - dst=", &eth_hdr->d_addr);
 		printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -166,6 +178,45 @@ pkt_burst_receive(struct fwd_stream *fs)
 			       mb->hash.fdir.hash, mb->hash.fdir.id);
 		if (ol_flags & PKT_RX_VLAN_PKT)
 			printf(" - VLAN tci=0x%x", mb->vlan_tci);
+		if (is_encapsulation) {
+			struct ipv4_hdr *ipv4_hdr;
+			struct ipv6_hdr *ipv6_hdr;
+			struct udp_hdr *udp_hdr;
+			uint8_t l2_len;
+			uint8_t l3_len;
+			uint8_t l4_len;
+			uint8_t l4_proto;
+			struct  vxlan_hdr *vxlan_hdr;
+
+			l2_len  = sizeof(struct ether_hdr);
+
+			/* IPv4 header options are not supported */
+			if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+				l3_len = sizeof(struct ipv4_hdr);
+				ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len);
+				l4_proto = ipv4_hdr->next_proto_id;
+			} else {
+				l3_len = sizeof(struct ipv6_hdr);
+				ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len);
+				l4_proto = ipv6_hdr->proto;
+			}
+			if (l4_proto == IPPROTO_UDP) {
+				udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len);
+				l4_len = sizeof(struct udp_hdr);
+				vxlan_hdr = (struct vxlan_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len
+						 + l4_len);
+
+				printf(" - VxLAN packet: packet type =%d, "
+					"Destination UDP port =%d, VNI = %d",
+					packet_type, RTE_BE_TO_CPU_16(udp_hdr->dst_port),
+					rte_be_to_cpu_32(vxlan_hdr->vx_vni) >> 8);
+			}
+		}
+		printf(" - Receive queue=0x%x", (unsigned) fs->rx_queue);
 		printf("\n");
 		if (ol_flags != 0) {
 			int rxf;
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 06/10] librte_ether:add data structures of VxLAN filter
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (4 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 07/10] i40e:implement the API of VxLAN filter in librte_pmd_i40e Jijiang Liu
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Add definitions of the data structures for the tunneling packet filter in the rte_eth_ctrl.h file.
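
A sketch of how an application might fill the new structure for filter type 2
from the cover letter (inner MAC, inner VLAN and tenant ID); all values are
examples, and the filter is actually programmed through filter_ctrl in the
following patches:

  #include <rte_ether.h>
  #include <rte_eth_ctrl.h>

  static struct ether_addr imac = {{0x00, 0x11, 0x22, 0x33, 0x44, 0x55}};

  static struct rte_eth_tunnel_filter_conf conf = {
          .inner_mac   = &imac,
          .inner_vlan  = 100,                     /* example inner VLAN */
          .ip_type     = RTE_TUNNEL_IPTYPE_IPV4,
          .filter_type = RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
          .tunnel_type = RTE_TUNNEL_TYPE_VXLAN,
          .tenant_id   = 1000,                    /* example VNI */
          .queue_id    = 1,                       /* example Rx queue */
  };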

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_ether/rte_eth_ctrl.h |   49 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 49 insertions(+), 0 deletions(-)

diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
index 9a90d19..b4ab731 100644
--- a/lib/librte_ether/rte_eth_ctrl.h
+++ b/lib/librte_ether/rte_eth_ctrl.h
@@ -51,6 +51,7 @@ extern "C" {
  */
 enum rte_filter_type {
 	RTE_ETH_FILTER_NONE = 0,
+	RTE_ETH_FILTER_TUNNEL,
 	RTE_ETH_FILTER_MAX
 };
 
@@ -83,6 +84,54 @@ enum rte_eth_tunnel_type {
 	RTE_TUNNEL_TYPE_MAX,
 };
 
+/**
+ * filter type of tunneling packet
+ */
+#define ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
+#define ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
+#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
+#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
+
+#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
+					ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
+					ETH_TUNNEL_FILTER_IVLAN | \
+					ETH_TUNNEL_FILTER_TENID)
+#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
+					ETH_TUNNEL_FILTER_TENID)
+#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
+					ETH_TUNNEL_FILTER_TENID | \
+					ETH_TUNNEL_FILTER_IMAC)
+
+/**
+ *  Select IPv4 or IPv6 for tunnel filters.
+ */
+enum rte_tunnel_iptype {
+	RTE_TUNNEL_IPTYPE_IPV4 = 0, /**< IPv4. */
+	RTE_TUNNEL_IPTYPE_IPV6,     /**< IPv6. */
+};
+
+/**
+ * Tunneling Packet filter configuration.
+ */
+struct rte_eth_tunnel_filter_conf {
+	struct ether_addr *outer_mac;  /**< Outer MAC address filter. */
+	struct ether_addr *inner_mac;  /**< Inner MAC address filter. */
+	uint16_t inner_vlan;           /**< Inner VLAN filter. */
+	enum rte_tunnel_iptype ip_type; /**< IP address type. */
+	union {
+		uint32_t ipv4_addr;    /**< IPv4 source address to match. */
+		uint32_t ipv6_addr[4]; /**< IPv6 source address to match. */
+	} ip_addr; /**< IPv4/IPv6 source address to match (union of above). */
+
+	uint16_t filter_type;   /**< Filter type. */
+	enum rte_eth_tunnel_type tunnel_type; /**< Tunnel Type. */
+	uint32_t tenant_id;     /**< Tenant number. */
+	uint16_t queue_id;      /**< Queue number. */
+};
+
 #ifdef __cplusplus
 }
 #endif
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 07/10] i40e:implement the API of VxLAN filter in librte_pmd_i40e
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (5 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 06/10] librte_ether:add data structures of VxLAN filter Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 08/10] app/testpmd:test VxLAN packet filter Jijiang Liu
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

The filter types supported are listed below for VxLAN:
   1. Inner MAC and Inner VLAN ID.
   2. Inner MAC address, inner VLAN ID and tenant ID.
   3. Inner MAC and tenant ID.
   4. Inner MAC address.
   5. Outer MAC address, tenant ID and inner MAC address.
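
For reference (not part of the patch), the list above maps onto the
filter_type values introduced in the previous patch as follows:

  1. inner MAC + inner VLAN             -> RTE_TUNNEL_FILTER_IMAC_IVLAN
  2. inner MAC + inner VLAN + tenant ID -> RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID
  3. inner MAC + tenant ID              -> RTE_TUNNEL_FILTER_IMAC_TENID
  4. inner MAC                          -> ETH_TUNNEL_FILTER_IMAC
  5. outer MAC + tenant ID + inner MAC  -> RTE_TUNNEL_FILTER_OMAC_TENID_IMAC

i40e_dev_get_filter_type() below translates these into the corresponding
I40E_AQC_ADD_CLOUD_FILTER_* values for the admin queue command.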

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_pmd_i40e/i40e_ethdev.c |  174 ++++++++++++++++++++++++++++++++++++-
 1 files changed, 172 insertions(+), 2 deletions(-)

diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index eb643e5..be83268 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -48,6 +48,7 @@
 #include <rte_malloc.h>
 #include <rte_memcpy.h>
 #include <rte_dev.h>
+#include <rte_eth_ctrl.h>
 
 #include "i40e_logs.h"
 #include "i40e/i40e_register_x710_int.h"
@@ -4099,6 +4100,108 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 }
 
 static int
+i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
+{
+	switch (filter_type) {
+	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
+		break;
+	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
+		break;
+	case RTE_TUNNEL_FILTER_IMAC_TENID:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
+		break;
+	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
+		break;
+	case ETH_TUNNEL_FILTER_IMAC:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid tunnel filter type\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
+			struct rte_eth_tunnel_filter_conf *tunnel_filter,
+			uint8_t add)
+{
+	uint16_t ip_type;
+	uint8_t tun_type = 0;
+	int val, ret = 0;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct i40e_aqc_add_remove_cloud_filters_element_data  *cld_filter;
+	struct i40e_aqc_add_remove_cloud_filters_element_data  *pfilter;
+
+	cld_filter = rte_zmalloc("tunnel_filter",
+		sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data),
+		0);
+
+	if (NULL == cld_filter) {
+		PMD_DRV_LOG(ERR, "Failed to alloc memory.\n");
+		return -EINVAL;
+	}
+	pfilter = cld_filter;
+
+	(void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac,
+			sizeof(struct ether_addr));
+	(void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac,
+			sizeof(struct ether_addr));
+
+	pfilter->inner_vlan = tunnel_filter->inner_vlan;
+	if (tunnel_filter->ip_type == RTE_TUNNEL_IPTYPE_IPV4) {
+		ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
+		(void)rte_memcpy(&pfilter->ipaddr.v4.data,
+				&tunnel_filter->ip_addr,
+				sizeof(pfilter->ipaddr.v4.data));
+	} else {
+		ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
+		(void)rte_memcpy(&pfilter->ipaddr.v6.data,
+				&tunnel_filter->ip_addr,
+				sizeof(pfilter->ipaddr.v6.data));
+	}
+
+	/* check tunneled type */
+	switch (tunnel_filter->tunnel_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN;
+		break;
+	default:
+		/* Other tunnel types are not supported. */
+		PMD_DRV_LOG(ERR, "tunnel type is not supported.\n");
+		rte_free(cld_filter);
+		return -EINVAL;
+	}
+
+	val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
+						&pfilter->flags);
+	if (val < 0) {
+		rte_free(cld_filter);
+		return -EINVAL;
+	}
+
+	pfilter->flags |= I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+		(tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
+	pfilter->tenant_id = tunnel_filter->tenant_id;
+	pfilter->queue_number = tunnel_filter->queue_id;
+
+	if (add)
+		ret = i40e_aq_add_cloud_filters(hw, vsi->seid, cld_filter, 1);
+	else
+		ret = i40e_aq_remove_cloud_filters(hw, vsi->seid,
+						cld_filter, 1);
+
+	rte_free(cld_filter);
+	return ret;
+}
+
+static int
 i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port)
 {
 	uint8_t i;
@@ -4286,6 +4389,73 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 }
 
 static int
+i40e_tunnel_filter_param_check(struct i40e_pf *pf,
+			struct rte_eth_tunnel_filter_conf *filter)
+{
+	if (pf == NULL || filter == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameter\n");
+		return -EINVAL;
+	}
+
+	if (filter->queue_id >= pf->dev_data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid queue ID\n");
+		return -EINVAL;
+	}
+
+	if (filter->inner_vlan > ETHER_MAX_VLAN_ID) {
+		PMD_DRV_LOG(ERR, "Invalid inner VLAN ID\n");
+		return -EINVAL;
+	}
+
+	if ((filter->filter_type & ETH_TUNNEL_FILTER_OMAC) &&
+		(is_zero_ether_addr(filter->outer_mac))) {
+		PMD_DRV_LOG(ERR, "Cannot add NULL outer MAC address\n");
+		return -EINVAL;
+	}
+
+	if ((filter->filter_type & ETH_TUNNEL_FILTER_IMAC) &&
+		(is_zero_ether_addr(filter->inner_mac))) {
+		PMD_DRV_LOG(ERR, "Cannot add NULL inner MAC address\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+i40e_tunnel_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
+			void *arg)
+{
+	struct rte_eth_tunnel_filter_conf *filter;
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret = I40E_SUCCESS;
+
+	filter = (struct rte_eth_tunnel_filter_conf *)(arg);
+
+	if (i40e_tunnel_filter_param_check(pf, filter) < 0)
+		return I40E_ERR_PARAM;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		if (!(pf->flags & I40E_FLAG_VXLAN))
+			ret = I40E_NOT_SUPPORTED;
+		break;
+	case RTE_ETH_FILTER_ADD:
+		ret = i40e_dev_tunnel_filter_set(pf, filter, 1);
+		break;
+	case RTE_ETH_FILTER_DELETE:
+		ret = i40e_dev_tunnel_filter_set(pf, filter, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown operation %u\n", filter_op);
+		ret = I40E_ERR_PARAM;
+		break;
+	}
+
+	return ret;
+}
+
+static int
 i40e_pf_config_mq_rx(struct i40e_pf *pf)
 {
 	if (!pf->dev_data->sriov.active) {
@@ -4309,13 +4478,14 @@ i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     void *arg)
 {
 	int ret = 0;
-	(void)filter_op;
-	(void)arg;
 
 	if (dev == NULL)
 		return -EINVAL;
 
 	switch (filter_type) {
+	case RTE_ETH_FILTER_TUNNEL:
+		ret = i40e_tunnel_filter_handle(dev, filter_op, arg);
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 							filter_type);
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 08/10] app/testpmd:test VxLAN packet filter
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (6 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 07/10] i40e:implement the API of VxLAN filter in librte_pmd_i40e Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 09/10] i40e:support VxLAN Tx checksum offload Jijiang Liu
  2014-10-23 13:19 ` [dpdk-dev] [PATCH v7 10/10] app/testpmd:test " Jijiang Liu
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Add the "tunnel_filter" command in testpmd to test the VxLAN packet filter API.
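
For example (illustrative values; each command is one line and the syntax
matches the help text added below):

  testpmd> tunnel_filter add 0 00:00:00:00:00:00 00:11:22:33:44:55 192.168.2.2 100 vxlan imac-ivlan-tenid 1000 1
  testpmd> tunnel_filter rm 0 00:00:00:00:00:00 00:11:22:33:44:55 192.168.2.2 100 vxlan imac-ivlan-tenid 1000 1

The outer MAC address is not used for matching with the "imac-ivlan-tenid"
filter type, so a zero address is acceptable in that position.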

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 app/test-pmd/cmdline.c |  150 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 150 insertions(+), 0 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4d7b4d1..da5d272 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -285,6 +285,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Set the outer VLAN TPID for Packet Filtering on"
 			" a port\n\n"
 
+			"tunnel_filter add (port_id) (outer_mac) (inner_mac) (ip_addr) "
+			"(inner_vlan) (tunnel_type) (filter_type) (tenant_id) (queue_id)\n"
+			"   add a tunnel filter of a port.\n\n"
+
+			"tunnel_filter rm (port_id) (outer_mac) (inner_mac) (ip_addr) "
+			"(inner_vlan) (tunnel_type) (filter_type) (tenant_id) (queue_id)\n"
+			"   remove a tunnel filter of a port.\n\n"
+
 			"rx_vxlan_port add (udp_port) (port_id)\n"
 			"    Add an UDP port for VxLAN packet filter on a port\n\n"
 
@@ -6231,6 +6239,147 @@ cmdline_parse_inst_t cmd_vf_rate_limit = {
 	},
 };
 
+/* *** ADD TUNNEL FILTER OF A PORT *** */
+struct cmd_tunnel_filter_result {
+	cmdline_fixed_string_t cmd;
+	cmdline_fixed_string_t what;
+	uint8_t port_id;
+	struct ether_addr outer_mac;
+	struct ether_addr inner_mac;
+	cmdline_ipaddr_t ip_value;
+	uint16_t inner_vlan;
+	cmdline_fixed_string_t tunnel_type;
+	cmdline_fixed_string_t filter_type;
+	uint32_t tenant_id;
+	uint16_t queue_num;
+};
+
+static void
+cmd_tunnel_filter_parsed(void *parsed_result,
+			  __attribute__((unused)) struct cmdline *cl,
+			  __attribute__((unused)) void *data)
+{
+	struct cmd_tunnel_filter_result *res = parsed_result;
+	struct rte_eth_tunnel_filter_conf tunnel_filter_conf;
+	int ret = 0;
+
+	tunnel_filter_conf.outer_mac = &res->outer_mac;
+	tunnel_filter_conf.inner_mac = &res->inner_mac;
+	tunnel_filter_conf.inner_vlan = res->inner_vlan;
+
+	if (res->ip_value.family == AF_INET) {
+		tunnel_filter_conf.ip_addr.ipv4_addr =
+			res->ip_value.addr.ipv4.s_addr;
+		tunnel_filter_conf.ip_type = RTE_TUNNEL_IPTYPE_IPV4;
+	} else {
+		memcpy(&(tunnel_filter_conf.ip_addr.ipv6_addr),
+			&(res->ip_value.addr.ipv6),
+			sizeof(struct in6_addr));
+		tunnel_filter_conf.ip_type = RTE_TUNNEL_IPTYPE_IPV6;
+	}
+
+	if (!strcmp(res->filter_type, "imac-ivlan"))
+		tunnel_filter_conf.filter_type = RTE_TUNNEL_FILTER_IMAC_IVLAN;
+	else if (!strcmp(res->filter_type, "imac-ivlan-tenid"))
+		tunnel_filter_conf.filter_type =
+			RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID;
+	else if (!strcmp(res->filter_type, "imac-tenid"))
+		tunnel_filter_conf.filter_type = RTE_TUNNEL_FILTER_IMAC_TENID;
+	else if (!strcmp(res->filter_type, "imac"))
+		tunnel_filter_conf.filter_type = ETH_TUNNEL_FILTER_IMAC;
+	else if (!strcmp(res->filter_type, "omac-imac-tenid"))
+		tunnel_filter_conf.filter_type =
+			RTE_TUNNEL_FILTER_OMAC_TENID_IMAC;
+	else {
+		printf("The filter type is not supported");
+		return;
+	}
+
+	if (!strcmp(res->tunnel_type, "vxlan"))
+		tunnel_filter_conf.tunnel_type = RTE_TUNNEL_TYPE_VXLAN;
+	else {
+		printf("Only VxLAN is supported now.\n");
+		return;
+	}
+
+	tunnel_filter_conf.tenant_id = res->tenant_id;
+	tunnel_filter_conf.queue_id = res->queue_num;
+	if (!strcmp(res->what, "add"))
+		ret = rte_eth_dev_filter_ctrl(res->port_id,
+					RTE_ETH_FILTER_TUNNEL,
+					RTE_ETH_FILTER_ADD,
+					&tunnel_filter_conf);
+	else
+		ret = rte_eth_dev_filter_ctrl(res->port_id,
+					RTE_ETH_FILTER_TUNNEL,
+					RTE_ETH_FILTER_DELETE,
+					&tunnel_filter_conf);
+	if (ret < 0)
+		printf("cmd_tunnel_filter_parsed error: (%s)\n",
+				strerror(-ret));
+
+}
+cmdline_parse_token_string_t cmd_tunnel_filter_cmd =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	cmd, "tunnel_filter");
+cmdline_parse_token_string_t cmd_tunnel_filter_what =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	what, "add#rm");
+cmdline_parse_token_num_t cmd_tunnel_filter_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	port_id, UINT8);
+cmdline_parse_token_etheraddr_t cmd_tunnel_filter_outer_mac =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	outer_mac);
+cmdline_parse_token_etheraddr_t cmd_tunnel_filter_inner_mac =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	inner_mac);
+cmdline_parse_token_num_t cmd_tunnel_filter_innner_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	inner_vlan, UINT16);
+cmdline_parse_token_ipaddr_t cmd_tunnel_filter_ip_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	ip_value);
+cmdline_parse_token_string_t cmd_tunnel_filter_tunnel_type =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	tunnel_type, "vxlan");
+
+cmdline_parse_token_string_t cmd_tunnel_filter_filter_type =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	filter_type, "imac-ivlan#imac-ivlan-tenid#imac-tenid#"
+		"imac#omac-imac-tenid");
+cmdline_parse_token_num_t cmd_tunnel_filter_tenant_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	tenant_id, UINT32);
+cmdline_parse_token_num_t cmd_tunnel_filter_queue_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	queue_num, UINT16);
+
+cmdline_parse_inst_t cmd_tunnel_filter = {
+	.f = cmd_tunnel_filter_parsed,
+	.data = (void *)0,
+	.help_str = "add/rm tunnel filter of a port: "
+			"tunnel_filter add port_id outer_mac inner_mac ip "
+			"inner_vlan tunnel_type(vxlan) filter_type "
+			"(imac-ivlan|imac-ivlan-tenid|imac-tenid|"
+			"imac|omac-imac-tenid) "
+			"tenant_id queue_num",
+	.tokens = {
+		(void *)&cmd_tunnel_filter_cmd,
+		(void *)&cmd_tunnel_filter_what,
+		(void *)&cmd_tunnel_filter_port_id,
+		(void *)&cmd_tunnel_filter_outer_mac,
+		(void *)&cmd_tunnel_filter_inner_mac,
+		(void *)&cmd_tunnel_filter_ip_value,
+		(void *)&cmd_tunnel_filter_innner_vlan,
+		(void *)&cmd_tunnel_filter_tunnel_type,
+		(void *)&cmd_tunnel_filter_filter_type,
+		(void *)&cmd_tunnel_filter_tenant_id,
+		(void *)&cmd_tunnel_filter_queue_num,
+		NULL,
+	},
+};
+
 /* *** CONFIGURE TUNNEL UDP PORT *** */
 struct cmd_tunnel_udp_config {
 	cmdline_fixed_string_t cmd;
@@ -7582,6 +7731,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
 	(cmdline_parse_inst_t *)&cmd_queue_rate_limit,
 	(cmdline_parse_inst_t *)&cmd_vf_rate_limit,
+	(cmdline_parse_inst_t *)&cmd_tunnel_filter,
 	(cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_mask,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_link,
-- 
1.7.7.6


* [dpdk-dev] [PATCH v7 09/10] i40e:support VxLAN Tx checksum offload
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (7 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 08/10] app/testpmd:test VxLAN packet filter Jijiang Liu
@ 2014-10-23 13:18 ` Jijiang Liu
  2014-10-23 13:19 ` [dpdk-dev] [PATCH v7 10/10] app/testpmd:test " Jijiang Liu
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:18 UTC (permalink / raw)
  To: dev

Support VxLAN Tx checksum offload, which includes
  - outer L3 (IP) checksum offload
  - inner L3 (IP) checksum offload
  - inner L4 (UDP, TCP and SCTP) checksum offload
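
From the application side, the offload is requested roughly as follows (a
sketch only; the header sizes are examples and assume no VLAN tags or IP
options):

  #include <rte_mbuf.h>
  #include <rte_ether.h>
  #include <rte_ip.h>

  static inline void request_vxlan_tx_csum(struct rte_mbuf *m)
  {
          /* Outer IPv4 / UDP / VxLAN / inner Ethernet / inner IPv4. */
          m->ol_flags |= PKT_TX_VXLAN_CKSUM | PKT_TX_IP_CKSUM;
          m->l2_len = sizeof(struct ether_hdr);       /* outer L2 */
          m->l3_len = sizeof(struct ipv4_hdr);        /* outer L3 */
          m->inner_l2_len = sizeof(struct ether_hdr); /* inner L2 */
          m->inner_l3_len = sizeof(struct ipv4_hdr);  /* inner L3 */
  }

The driver derives the tunneling header length as ETHER_VXLAN_HLEN +
inner_l2_len, as can be seen in i40e_txd_enable_checksum() below.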

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h      |    1 +
 lib/librte_pmd_i40e/i40e_rxtx.c |   46 +++++++++++++++++++++++++++++++++-----
 2 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 9af3bd9..a86dedf 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -96,6 +96,7 @@ extern "C" {
 
 #define PKT_TX_VLAN_PKT      (1ULL << 55) /**< TX packet is a 802.1q VLAN packet. */
 #define PKT_TX_IP_CKSUM      (1ULL << 54) /**< IP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_VXLAN_CKSUM   (1ULL << 50) /**< TX checksum of VxLAN computed by NIC */
 #define PKT_TX_IPV4_CSUM     PKT_TX_IP_CKSUM /**< Alias of PKT_TX_IP_CKSUM. */
 #define PKT_TX_IPV4          PKT_RX_IPV4_HDR /**< IPv4 with no IP checksum offload. */
 #define PKT_TX_IPV6          PKT_RX_IPV6_HDR /**< IPv6 packet */
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2108290..7599df9 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -411,11 +411,14 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
 }
 
 static inline void
-i40e_txd_enable_checksum(uint32_t ol_flags,
+i40e_txd_enable_checksum(uint64_t ol_flags,
 			uint32_t *td_cmd,
 			uint32_t *td_offset,
 			uint8_t l2_len,
-			uint8_t l3_len)
+			uint16_t l3_len,
+			uint8_t inner_l2_len,
+			uint16_t inner_l3_len,
+			uint32_t *cd_tunneling)
 {
 	if (!l2_len) {
 		PMD_DRV_LOG(DEBUG, "L2 length set to 0");
@@ -428,6 +431,27 @@ i40e_txd_enable_checksum(uint32_t ol_flags,
 		return;
 	}
 
+	/* VxLAN packet TX checksum offload */
+	if (unlikely(ol_flags & PKT_TX_VXLAN_CKSUM)) {
+		uint8_t l4tun_len;
+
+		l4tun_len = ETHER_VXLAN_HLEN + inner_l2_len;
+
+		if (ol_flags & PKT_TX_IPV4_CSUM)
+			*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
+		else if (ol_flags & PKT_TX_IPV6)
+			*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+
+		/* Now set the ctx descriptor fields */
+		*cd_tunneling |= (l3_len >> 2) <<
+				I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT |
+				I40E_TXD_CTX_UDP_TUNNELING |
+				(l4tun_len >> 1) <<
+				I40E_TXD_CTX_QW0_NATLEN_SHIFT;
+
+		l3_len = inner_l3_len;
+	}
+
 	/* Enable L3 checksum offloads */
 	if (ol_flags & PKT_TX_IPV4_CSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
@@ -1077,7 +1101,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
 {
-	uint16_t mask = 0;
+	uint64_t mask = 0ULL;
+
+	if (flags & PKT_TX_VXLAN_CKSUM)
+		mask |= PKT_TX_VXLAN_CKSUM;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	mask |= PKT_TX_IEEE1588_TMST;
@@ -1098,6 +1125,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	volatile struct i40e_tx_desc *txr;
 	struct rte_mbuf *tx_pkt;
 	struct rte_mbuf *m_seg;
+	uint32_t cd_tunneling_params;
 	uint16_t tx_id;
 	uint16_t nb_tx;
 	uint32_t td_cmd;
@@ -1106,7 +1134,9 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint32_t td_tag;
 	uint64_t ol_flags;
 	uint8_t l2_len;
-	uint8_t l3_len;
+	uint16_t l3_len;
+	uint8_t inner_l2_len;
+	uint16_t inner_l3_len;
 	uint16_t nb_used;
 	uint16_t nb_ctx;
 	uint16_t tx_last;
@@ -1134,7 +1164,9 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		ol_flags = tx_pkt->ol_flags;
 		l2_len = tx_pkt->l2_len;
+		inner_l2_len = tx_pkt->inner_l2_len;
 		l3_len = tx_pkt->l3_len;
+		inner_l3_len = tx_pkt->inner_l3_len;
 
 		/* Calculate the number of context descriptors needed. */
 		nb_ctx = i40e_calc_context_desc(ol_flags);
@@ -1182,15 +1214,17 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		td_cmd |= I40E_TX_DESC_CMD_ICRC;
 
 		/* Enable checksum offloading */
+		cd_tunneling_params = 0;
 		i40e_txd_enable_checksum(ol_flags, &td_cmd, &td_offset,
-							l2_len, l3_len);
+						l2_len, l3_len, inner_l2_len,
+						inner_l3_len,
+						&cd_tunneling_params);
 
 		if (unlikely(nb_ctx)) {
 			/* Setup TX context descriptor if required */
 			volatile struct i40e_tx_context_desc *ctx_txd =
 				(volatile struct i40e_tx_context_desc *)\
 							&txr[tx_id];
-			uint32_t cd_tunneling_params = 0;
 			uint16_t cd_l2tag2 = 0;
 			uint64_t cd_type_cmd_tso_mss =
 				I40E_TX_DESC_DTYPE_CONTEXT;
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v7 10/10] app/testpmd:test VxLAN Tx checksum offload
  2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
                   ` (8 preceding siblings ...)
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 09/10] i40e:support VxLAN Tx checksum offload Jijiang Liu
@ 2014-10-23 13:19 ` Jijiang Liu
  9 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-10-23 13:19 UTC (permalink / raw)
  To: dev

Add test cases in testpmd to exercise VxLAN Tx checksum offload, covering
 - IPv4 and IPv6 packets
 - outer L3, inner L3 and inner L4 checksum offload on the Tx side.
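
For reference, the new high-order bits of the tx_checksum mask map onto mbuf
offload flags for a VxLAN packet roughly as in the hypothetical helper below,
which mirrors the csumonly.c changes in this patch (the helper itself is not
part of the patch; it assumes the PKT_TX_* flags from rte_mbuf.h and the
IPPROTO_* constants):

	static uint64_t
	inner_tx_flags_from_mask(uint16_t tx_ol_flags, uint16_t inner_l4_proto)
	{
		uint64_t flags = 0;

		if (tx_ol_flags & 0xF0)   /* any inner checksum bit set */
			flags |= PKT_TX_VXLAN_CKSUM;
		if (tx_ol_flags & 0x10)   /* bit 4: inner IP */
			flags |= PKT_TX_IPV4_CSUM;
		if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20))
			flags |= PKT_TX_UDP_CKSUM;   /* bit 5: inner UDP */
		else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40))
			flags |= PKT_TX_TCP_CKSUM;   /* bit 6: inner TCP */
		else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80))
			flags |= PKT_TX_SCTP_CKSUM;  /* bit 7: inner SCTP */
		return flags;
	}

A typical test run would, for example, use "tx_checksum set 0xff 0" to enable
all outer and inner offloads on port 0, then "set fwd csum" and "start".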

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 app/test-pmd/cmdline.c  |   13 ++-
 app/test-pmd/config.c   |    6 +-
 app/test-pmd/csumonly.c |  194 +++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 192 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index da5d272..757c399 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -310,13 +310,17 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Disable hardware insertion of a VLAN header in"
 			" packets sent on a port.\n\n"
 
-			"tx_checksum set mask (port_id)\n"
+			"tx_checksum set (mask) (port_id)\n"
 			"    Enable hardware insertion of checksum offload with"
-			" the 4-bit mask, 0~0xf, in packets sent on a port.\n"
+			" the 8-bit mask, 0~0xff, in packets sent on a port.\n"
 			"        bit 0 - insert ip   checksum offload if set\n"
 			"        bit 1 - insert udp  checksum offload if set\n"
 			"        bit 2 - insert tcp  checksum offload if set\n"
 			"        bit 3 - insert sctp checksum offload if set\n"
+			"        bit 4 - insert inner ip  checksum offload if set\n"
+			"        bit 5 - insert inner udp checksum offload if set\n"
+			"        bit 6 - insert inner tcp checksum offload if set\n"
+			"        bit 7 - insert inner sctp checksum offload if set\n"
 			"    Please check the NIC datasheet for HW limits.\n\n"
 
 			"set fwd (%s)\n"
@@ -2763,8 +2767,9 @@ cmdline_parse_inst_t cmd_tx_cksum_set = {
 	.f = cmd_tx_cksum_set_parsed,
 	.data = NULL,
 	.help_str = "enable hardware insertion of L3/L4checksum with a given "
-	"mask in packets sent on a port, the bit mapping is given as, Bit 0 for ip"
-	"Bit 1 for UDP, Bit 2 for TCP, Bit 3 for SCTP",
+	"mask in packets sent on a port, the bit mapping is given as, Bit 0 for ip, "
+	"Bit 1 for UDP, Bit 2 for TCP, Bit 3 for SCTP, Bit 4 for inner ip, "
+	"Bit 5 for inner UDP, Bit 6 for inner TCP, Bit 7 for inner SCTP",
 	.tokens = {
 		(void *)&cmd_tx_cksum_set_tx_cksum,
 		(void *)&cmd_tx_cksum_set_set,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2a1b93f..9bc08f4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1753,9 +1753,9 @@ tx_cksum_set(portid_t port_id, uint64_t ol_flags)
 	uint64_t tx_ol_flags;
 	if (port_id_is_invalid(port_id))
 		return;
-	/* Clear last 4 bits and then set L3/4 checksum mask again */
-	tx_ol_flags = ports[port_id].tx_ol_flags & (~0x0Full);
-	ports[port_id].tx_ol_flags = ((ol_flags & 0xf) | tx_ol_flags);
+	/* Clear last 8 bits and then set L3/4 checksum mask again */
+	tx_ol_flags = ports[port_id].tx_ol_flags & (~0x0FFull);
+	ports[port_id].tx_ol_flags = ((ol_flags & 0xff) | tx_ol_flags);
 }
 
 void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index fcc4876..3967476 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -209,10 +209,16 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	struct rte_mbuf  *mb;
 	struct ether_hdr *eth_hdr;
 	struct ipv4_hdr  *ipv4_hdr;
+	struct ether_hdr *inner_eth_hdr;
+	struct ipv4_hdr  *inner_ipv4_hdr = NULL;
 	struct ipv6_hdr  *ipv6_hdr;
+	struct ipv6_hdr  *inner_ipv6_hdr = NULL;
 	struct udp_hdr   *udp_hdr;
+	struct udp_hdr   *inner_udp_hdr;
 	struct tcp_hdr   *tcp_hdr;
+	struct tcp_hdr   *inner_tcp_hdr;
 	struct sctp_hdr  *sctp_hdr;
+	struct sctp_hdr  *inner_sctp_hdr;
 
 	uint16_t nb_rx;
 	uint16_t nb_tx;
@@ -221,12 +227,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	uint64_t pkt_ol_flags;
 	uint64_t tx_ol_flags;
 	uint16_t l4_proto;
+	uint16_t inner_l4_proto = 0;
 	uint16_t eth_type;
 	uint8_t  l2_len;
 	uint8_t  l3_len;
+	uint8_t  inner_l2_len = 0;
+	uint8_t  inner_l3_len = 0;
 
 	uint32_t rx_bad_ip_csum;
 	uint32_t rx_bad_l4_csum;
+	uint8_t  ipv4_tunnel;
+	uint8_t  ipv6_tunnel;
 
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
@@ -262,7 +273,10 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		l2_len  = sizeof(struct ether_hdr);
 		pkt_ol_flags = mb->ol_flags;
 		ol_flags = (pkt_ol_flags & (~PKT_TX_L4_MASK));
-
+		ipv4_tunnel = (pkt_ol_flags & PKT_RX_TUNNEL_IPV4_HDR) ?
+				1 : 0;
+		ipv6_tunnel = (pkt_ol_flags & PKT_RX_TUNNEL_IPV6_HDR) ?
+				1 : 0;
 		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
 		eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 		if (eth_type == ETHER_TYPE_VLAN) {
@@ -295,7 +309,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		 *      + ipv4 or ipv6
 		 *      + udp or tcp or sctp or others
 		 */
-		if (pkt_ol_flags & PKT_RX_IPV4_HDR) {
+		if (pkt_ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_TUNNEL_IPV4_HDR)) {
 
 			/* Do not support ipv4 option field */
 			l3_len = sizeof(struct ipv4_hdr) ;
@@ -325,17 +339,92 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				if (tx_ol_flags & 0x2) {
 					/* HW Offload */
 					ol_flags |= PKT_TX_UDP_CKSUM;
-					/* Pseudo header sum need be set properly */
-					udp_hdr->dgram_cksum = get_ipv4_psd_sum(ipv4_hdr);
+					if (ipv4_tunnel)
+						udp_hdr->dgram_cksum = 0;
+					else
+						/* Pseudo header sum need be set properly */
+						udp_hdr->dgram_cksum =
+							get_ipv4_psd_sum(ipv4_hdr);
 				}
 				else {
 					/* SW Implementation, clear checksum field first */
 					udp_hdr->dgram_cksum = 0;
 					udp_hdr->dgram_cksum = get_ipv4_udptcp_checksum(ipv4_hdr,
-							(uint16_t*)udp_hdr);
+									(uint16_t *)udp_hdr);
 				}
-			}
-			else if (l4_proto == IPPROTO_TCP){
+
+				if (ipv4_tunnel) {
+
+					uint16_t len;
+
+					/* Check if inner L3/L4 checksum flag is set */
+					if (tx_ol_flags & 0xF0)
+						ol_flags |= PKT_TX_VXLAN_CKSUM;
+
+					inner_l2_len  = sizeof(struct ether_hdr);
+					inner_eth_hdr = (struct ether_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + l2_len + l3_len
+								 + ETHER_VXLAN_HLEN);
+
+					eth_type = rte_be_to_cpu_16(inner_eth_hdr->ether_type);
+					if (eth_type == ETHER_TYPE_VLAN) {
+						inner_l2_len += sizeof(struct vlan_hdr);
+						eth_type = rte_be_to_cpu_16(*(uint16_t *)
+							((uintptr_t)&inner_eth_hdr->ether_type +
+								sizeof(struct vlan_hdr)));
+					}
+
+					len = l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len;
+					if (eth_type == ETHER_TYPE_IPv4) {
+						inner_l3_len = sizeof(struct ipv4_hdr);
+						inner_ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv4_hdr->next_proto_id;
+
+						if (tx_ol_flags & 0x10) {
+
+							/* Do not delete, this is required by HW*/
+							inner_ipv4_hdr->hdr_checksum = 0;
+							ol_flags |= PKT_TX_IPV4_CSUM;
+						}
+
+					} else if (eth_type == ETHER_TYPE_IPv6) {
+						inner_l3_len = sizeof(struct ipv6_hdr);
+						inner_ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv6_hdr->proto;
+					}
+					if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+
+						/* HW Offload */
+						ol_flags |= PKT_TX_UDP_CKSUM;
+						inner_udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_udp_hdr->dgram_cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+
+					} else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+						/* HW Offload */
+						ol_flags |= PKT_TX_TCP_CKSUM;
+						inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_tcp_hdr->cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+					} else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+						/* HW Offload */
+						ol_flags |= PKT_TX_SCTP_CKSUM;
+						inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						inner_sctp_hdr->cksum = 0;
+					}
+
+				}
+
+			} else if (l4_proto == IPPROTO_TCP) {
 				tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
 						unsigned char *) + l2_len + l3_len);
 				if (tx_ol_flags & 0x4) {
@@ -347,8 +436,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					tcp_hdr->cksum = get_ipv4_udptcp_checksum(ipv4_hdr,
 							(uint16_t*)tcp_hdr);
 				}
-			}
-			else if (l4_proto == IPPROTO_SCTP) {
+			} else if (l4_proto == IPPROTO_SCTP) {
 				sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
 						unsigned char *) + l2_len + l3_len);
 
@@ -367,9 +455,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				}
 			}
 			/* End of L4 Handling*/
-		}
-		else if (pkt_ol_flags & PKT_RX_IPV6_HDR) {
-
+		} else if (pkt_ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_TUNNEL_IPV6_HDR)) {
 			ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
 					unsigned char *) + l2_len);
 			l3_len = sizeof(struct ipv6_hdr) ;
@@ -382,15 +468,93 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				if (tx_ol_flags & 0x2) {
 					/* HW Offload */
 					ol_flags |= PKT_TX_UDP_CKSUM;
-					udp_hdr->dgram_cksum = get_ipv6_psd_sum(ipv6_hdr);
+					if (ipv6_tunnel)
+						udp_hdr->dgram_cksum = 0;
+					else
+						udp_hdr->dgram_cksum =
+							get_ipv6_psd_sum(ipv6_hdr);
 				}
 				else {
 					/* SW Implementation */
 					/* checksum field need be clear first */
 					udp_hdr->dgram_cksum = 0;
 					udp_hdr->dgram_cksum = get_ipv6_udptcp_checksum(ipv6_hdr,
-							(uint16_t*)udp_hdr);
+								(uint16_t *)udp_hdr);
 				}
+
+				if (ipv6_tunnel) {
+
+					uint16_t len;
+
+					/* Check if inner L3/L4 checksum flag is set */
+					if (tx_ol_flags & 0xF0)
+						ol_flags |= PKT_TX_VXLAN_CKSUM;
+
+					inner_l2_len  = sizeof(struct ether_hdr);
+					inner_eth_hdr = (struct ether_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len + ETHER_VXLAN_HLEN);
+					eth_type = rte_be_to_cpu_16(inner_eth_hdr->ether_type);
+
+					if (eth_type == ETHER_TYPE_VLAN) {
+						inner_l2_len += sizeof(struct vlan_hdr);
+						eth_type = rte_be_to_cpu_16(*(uint16_t *)
+							((uintptr_t)&inner_eth_hdr->ether_type +
+							sizeof(struct vlan_hdr)));
+					}
+
+					len = l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len;
+
+					if (eth_type == ETHER_TYPE_IPv4) {
+						inner_l3_len = sizeof(struct ipv4_hdr);
+						inner_ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv4_hdr->next_proto_id;
+
+						/* HW offload */
+						if (tx_ol_flags & 0x10) {
+
+							/* Do not delete, this is required by HW*/
+							inner_ipv4_hdr->hdr_checksum = 0;
+							ol_flags |= PKT_TX_IPV4_CSUM;
+						}
+					} else if (eth_type == ETHER_TYPE_IPv6) {
+						inner_l3_len = sizeof(struct ipv6_hdr);
+						inner_ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+							unsigned char *) + len);
+						inner_l4_proto = inner_ipv6_hdr->proto;
+					}
+
+					if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+						inner_udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+							unsigned char *) + len + inner_l3_len);
+						/* HW offload */
+						ol_flags |= PKT_TX_UDP_CKSUM;
+						inner_udp_hdr->dgram_cksum = 0;
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_udp_hdr->dgram_cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+					} else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+						/* HW offload */
+						ol_flags |= PKT_TX_TCP_CKSUM;
+						inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_tcp_hdr->cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+
+					} else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+						/* HW offload */
+						ol_flags |= PKT_TX_SCTP_CKSUM;
+						inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						inner_sctp_hdr->cksum = 0;
+					}
+
+				}
+
 			}
 			else if (l4_proto == IPPROTO_TCP) {
 				tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
@@ -434,6 +598,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		/* Combine the packet header write. VLAN is not consider here */
 		mb->l2_len = l2_len;
 		mb->l3_len = l3_len;
+		mb->inner_l2_len = inner_l2_len;
+		mb->inner_l3_len = inner_l3_len;
 		mb->ol_flags = ol_flags;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
-- 
1.7.7.6

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification
  2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification Jijiang Liu
@ 2014-10-23 16:10   ` Thomas Monjalon
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Monjalon @ 2014-10-23 16:10 UTC (permalink / raw)
  To: Jijiang Liu; +Cc: dev

2014-10-23 21:18, Jijiang Liu:
> Add two commands to test VxLAN packet identification.
> The test steps are as follows:
>  1> use commands to add/delete VxLAN UDP port.
>  2> use rxonly mode to receive VxLAN packet.
> 
> Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
[...]
>  static const char *pkt_rx_flag_names[MAX_PKT_RX_FLAGS] = {
>  	"VLAN_PKT",
>  	"RSS_HASH",
> @@ -84,6 +86,9 @@ static const char *pkt_rx_flag_names[MAX_PKT_RX_FLAGS] = {
>  
>  	"IEEE1588_PTP",
>  	"IEEE1588_TMST",
> +
> +	"PKT_RX_TUNNEL_IPV4_HDR"
> +	"PKT_RX_TUNNEL_IPV6_HDR"
>  };

Commas are missing here.
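As written, the two string literals are concatenated into a single array
entry; the fixed lines would presumably be:

	"PKT_RX_TUNNEL_IPV4_HDR",
	"PKT_RX_TUNNEL_IPV6_HDR",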

-- 
Thomas

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2014-10-23 16:01 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-10-23 13:18 [dpdk-dev] [PATCH v7 00/10] Support VxLAN on Fortville Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 01/10] librte_mbuf:the rte_mbuf structure changes Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 02/10] librte_ether:add the basic data structures of VxLAN Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 03/10] librte_ether:add VxLAN packet identification API Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 04/10] i40e:support VxLAN packet identification in i40e Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 05/10] app/test-pmd:test VxLAN packet identification Jijiang Liu
2014-10-23 16:10   ` Thomas Monjalon
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 06/10] librte_ether:add data structures of VxLAN filter Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 07/10] i40e:implement the API of VxLAN filter in librte_pmd_i40e Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 08/10] app/testpmd:test VxLAN packet filter Jijiang Liu
2014-10-23 13:18 ` [dpdk-dev] [PATCH v7 09/10] i40e:support VxLAN Tx checksum offload Jijiang Liu
2014-10-23 13:19 ` [dpdk-dev] [PATCH v7 10/10] app/testpmd:test " Jijiang Liu
