DPDK patches and discussions
* [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville
@ 2014-09-26  2:02 Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 1/8]i40e:support VxLAN packet identification in librte_ether Jijiang Liu
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

This patch set adds VxLAN support on Fortville based on the current mbuf structure. Once Bruce's mbuf structure rework (part 3) is applied, minor follow-up changes will be needed.
 
It includes:
 - Support VxLAN packet identification by configuring the tunneling UDP port
   (see the usage sketch after this list).
 - Support VxLAN packet filters, which use the MAC address and VLAN to direct
   packets to a queue. The supported filter types are:
   1. Inner MAC address and inner VLAN ID
   2. Inner MAC address, inner VLAN ID and tenant ID
   3. Inner MAC address and tenant ID
   4. Inner MAC address
   5. Outer MAC address, tenant ID and inner MAC address
 - Support VxLAN Tx checksum offload, which covers the outer L3 (IP), inner L3 (IP)
   and inner L4 (UDP, TCP and SCTP) checksums.
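
For reference, a minimal usage sketch of the new APIs introduced by this series
(illustrative only: port_id, nb_rxq and nb_txq are placeholders, error handling
is omitted, and the i40e PMD registers the default port 4789 by itself):

    struct rte_eth_conf port_conf = {
            .tunnel_type = RTE_TUNNEL_TYPE_VXLAN,  /* enable VxLAN awareness */
    };
    struct rte_eth_udp_tunnel vxlan_port = {
            .udp_port  = 8472,                     /* an additional VxLAN UDP port */
            .prot_type = RTE_TUNNEL_TYPE_VXLAN,
    };

    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    rte_eth_dev_udp_tunnel_add(port_id, &vxlan_port, 1);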
 
Change notes:
 
 v4) Merge the mbuf structure changes done by Bruce, and fix the merge conflicts in the app/test-pmd/config.c file.


jijiangl (8):
  support VxLAN packet identification in librte_ether
  support VxLAN packet identification in librte_pmd_i40e
  test vxlan packet identification
  Add new filter framework
  implement API of VxLAN packet filter in librte_pmd_i40e
  test VxLAN packet filter API
  support VxLAN Tx checksum offload
  test VxLAN Tx checksum offload

 app/test-pmd/cmdline.c            |  233 +++++++++++++++++++++-
 app/test-pmd/config.c             |    6 +-
 app/test-pmd/csumonly.c           |  200 +++++++++++++++++--
 app/test-pmd/parameters.c         |   13 ++
 app/test-pmd/rxonly.c             |   49 +++++
 app/test-pmd/testpmd.c            |    8 +
 app/test-pmd/testpmd.h            |    4 +
 config/common_linuxapp            |    5 +
 lib/librte_ether/Makefile         |    1 +
 lib/librte_ether/rte_eth_ctrl.h   |  150 ++++++++++++++
 lib/librte_ether/rte_ethdev.c     |   95 +++++++++
 lib/librte_ether/rte_ethdev.h     |  108 ++++++++++
 lib/librte_ether/rte_ether.h      |    8 +
 lib/librte_mbuf/rte_mbuf.h        |    4 +
 lib/librte_pmd_i40e/i40e_ethdev.c |  405 ++++++++++++++++++++++++++++++++++++-
 lib/librte_pmd_i40e/i40e_ethdev.h |    5 +
 lib/librte_pmd_i40e/i40e_rxtx.c   |   57 +++++-
 17 files changed, 1325 insertions(+), 26 deletions(-)
 create mode 100644 lib/librte_ether/rte_eth_ctrl.h

-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 1/8]i40e:support VxLAN packet identification in librte_ether
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e Jijiang Liu
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Add data structures and APIs in librte_ether to support tunneling UDP port
configuration on i40e. Currently, only VxLAN is implemented, which includes:
 -  VxLAN UDP port initialization
 -  APIs to configure the VxLAN UDP port (see the usage sketch below)
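
A minimal sketch of the intended use of the new API (illustrative only; port_id
is a placeholder and error handling is reduced to a single check):

    struct rte_eth_udp_tunnel tunnel_udp = {
            .udp_port  = 8472,                  /* UDP destination port to treat as VxLAN */
            .prot_type = RTE_TUNNEL_TYPE_VXLAN,
    };
    int ret;

    ret = rte_eth_dev_udp_tunnel_add(port_id, &tunnel_udp, 1);
    if (ret < 0)
            printf("udp tunnel add failed: %s\n", strerror(-ret));

    /* ... later, stop treating that UDP port as VxLAN ... */
    rte_eth_dev_udp_tunnel_delete(port_id, &tunnel_udp, 1);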
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 lib/librte_ether/rte_ethdev.c |   63 ++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.h |   76 +++++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ether.h  |    8 ++++
 3 files changed, 147 insertions(+), 0 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b71b679..642d312 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -2029,6 +2029,69 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 }
 
 int
+rte_eth_dev_udp_tunnel_add(uint8_t port_id,
+			   struct rte_eth_udp_tunnel *udp_tunnel,
+			   uint8_t count)
+{
+	uint8_t i;
+	struct rte_eth_dev *dev;
+	struct rte_eth_udp_tunnel *tunnel;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	if (udp_tunnel == NULL) {
+		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		return -EINVAL;
+	}
+	tunnel = udp_tunnel;
+
+	for (i = 0; i < count; i++, tunnel++) {
+		if (tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+			PMD_DEBUG_TRACE("Invalid tunnel type\n");
+			return -EINVAL;
+		}
+	}
+
+	dev = &rte_eth_devices[port_id];
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel, count);
+}
+
+int
+rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
+			      struct rte_eth_udp_tunnel *udp_tunnel,
+			      uint8_t count)
+{
+	uint8_t i;
+	struct rte_eth_dev *dev;
+	struct rte_eth_udp_tunnel *tunnel;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+	dev = &rte_eth_devices[port_id];
+
+	if (udp_tunnel == NULL) {
+		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		return -EINVAL;
+	}
+	tunnel = udp_tunnel;
+	for (i = 0; i < count; i++, tunnel++) {
+		if (tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+			PMD_DEBUG_TRACE("Invalid tunnel type\n");
+			return -EINVAL;
+		}
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel, count);
+}
+
+int
 rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 60b24c5..615fec0 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -708,6 +708,26 @@ struct rte_fdir_conf {
 };
 
 /**
+ * UDP tunneling configuration.
+ */
+struct rte_eth_udp_tunnel {
+	uint16_t udp_port;  /**< UDP destination port used by the tunnel. */
+	uint8_t prot_type;  /**< Tunnel type, one of enum rte_eth_tunnel_type. */
+};
+
+/**
+ * Tunneled type.
+ */
+enum rte_eth_tunnel_type {
+	RTE_TUNNEL_TYPE_NONE = 0,
+	RTE_TUNNEL_TYPE_VXLAN,
+	RTE_TUNNEL_TYPE_GENEVE,
+	RTE_TUNNEL_TYPE_TEREDO,
+	RTE_TUNNEL_TYPE_NVGRE,
+	RTE_TUNNEL_TYPE_MAX,
+};
+
+/**
  *  Possible l4type of FDIR filters.
  */
 enum rte_l4type {
@@ -829,6 +849,7 @@ struct rte_intr_conf {
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
+	enum rte_eth_tunnel_type tunnel_type;
 	uint16_t link_speed;
 	/**< ETH_LINK_SPEED_10[0|00|000], or 0 for autonegotation */
 	uint16_t link_duplex;
@@ -1262,6 +1283,17 @@ typedef int (*eth_mirror_rule_reset_t)(struct rte_eth_dev *dev,
 				  uint8_t rule_id);
 /**< @internal Remove a traffic mirroring rule on an Ethernet device */
 
+typedef int (*eth_udp_tunnel_add_t)(struct rte_eth_dev *dev,
+				    struct rte_eth_udp_tunnel *tunnel_udp,
+				    uint8_t count);
+/**< @internal Add tunneling UDP info */
+
+typedef int (*eth_udp_tunnel_del_t)(struct rte_eth_dev *dev,
+				    struct rte_eth_udp_tunnel *tunnel_udp,
+				    uint8_t count);
+/**< @internal Delete tunneling UDP info */
+
+
 #ifdef RTE_NIC_BYPASS
 
 enum {
@@ -1436,6 +1468,8 @@ struct eth_dev_ops {
 	eth_set_vf_rx_t            set_vf_rx;  /**< enable/disable a VF receive */
 	eth_set_vf_tx_t            set_vf_tx;  /**< enable/disable a VF transmit */
 	eth_set_vf_vlan_filter_t   set_vf_vlan_filter;  /**< Set VF VLAN filter */
+	eth_udp_tunnel_add_t       udp_tunnel_add;
+	eth_udp_tunnel_del_t       udp_tunnel_del;
 	eth_set_queue_rate_limit_t set_queue_rate_limit;   /**< Set queue rate limit */
 	eth_set_vf_rate_limit_t    set_vf_rate_limit;   /**< Set VF rate limit */
 
@@ -3324,6 +3358,48 @@ int
 rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 			      struct rte_eth_rss_conf *rss_conf);
 
+/**
+ * Add tunneling UDP port configuration to an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tunnel_udp
+ *   The tunneling UDP port configuration to add to
+ *   the Ethernet device.
+ * @param count
+ *   The number of configurations to be added.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if hardware doesn't support the tunnel type.
+ */
+int
+rte_eth_dev_udp_tunnel_add(uint8_t port_id,
+			struct rte_eth_udp_tunnel *tunnel_udp,
+			uint8_t count);
+
+/**
+ * Delete tunneling UDP port configuration from an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tunnel_udp
+ *   The tunneling UDP port configuration to delete from
+ *   the Ethernet device.
+ * @param count
+ *   The number of configurations to be deleted.
+ *
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if hardware doesn't support the tunnel type.
+ */
+int
+rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
+			struct rte_eth_udp_tunnel *tunnel_udp,
+			uint8_t count);
+
 /**
  * add syn filter
  *
diff --git a/lib/librte_ether/rte_ether.h b/lib/librte_ether/rte_ether.h
index 2e08f23..ddbdbb3 100644
--- a/lib/librte_ether/rte_ether.h
+++ b/lib/librte_ether/rte_ether.h
@@ -286,6 +286,12 @@ struct vlan_hdr {
 	uint16_t eth_proto;/**< Ethernet type of encapsulated frame. */
 } __attribute__((__packed__));
 
+/* VXLAN protocol header */
+struct vxlan_hdr {
+	uint32_t vx_flags; /**< VxLAN flag. */
+	uint32_t vx_vni;   /**< VxLAN ID. */
+} __attribute__((__packed__));
+
 /* Ethernet frame types */
 #define ETHER_TYPE_IPv4 0x0800 /**< IPv4 Protocol. */
 #define ETHER_TYPE_IPv6 0x86DD /**< IPv6 Protocol. */
@@ -294,6 +300,8 @@ struct vlan_hdr {
 #define ETHER_TYPE_VLAN 0x8100 /**< IEEE 802.1Q VLAN tagging. */
 #define ETHER_TYPE_1588 0x88F7 /**< IEEE 802.1AS 1588 Precise Time Protocol. */
 
+#define ETHER_VXLAN_HLEN (sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr))
+
 #ifdef __cplusplus
 }
 #endif
-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 1/8]i40e:support VxLAN packet identification in librte_ether Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-29 10:48   ` Bruce Richardson
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 3/8]app/test-pmd:test VxLAN packet identification Jijiang Liu
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Support tunneling UDP port configuration on i40e in librte_pmd_i40e.
Currently, only VxLAN is implemented, which includes:
 -  VxLAN UDP port initialization
 -  implementation of the APIs to configure the VxLAN UDP port in librte_pmd_i40e
 -  storing the hardware-reported packet type in the mbuf 'reserved' field on
    the RX path (see the sketch below)
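
A rough sketch of how an application could consume that packet type
(the ptype ranges below mirror the IS_ETH_IPV4_TUNNEL/IS_ETH_IPV6_TUNNEL
macros added to testpmd in patch 3/8 and are i40e-specific; 'mb' stands for
an mbuf returned by rte_eth_rx_burst()):

    uint8_t ptype = mb->reserved;  /* i40e stores the RX packet type here */

    if ((ptype > 58 && ptype < 87) || (ptype > 124 && ptype < 153))
            printf("tunneled packet received, i40e ptype=%u\n", ptype);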
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 config/common_linuxapp            |    5 +
 lib/librte_mbuf/rte_mbuf.h        |    2 +
 lib/librte_pmd_i40e/i40e_ethdev.c |  200 ++++++++++++++++++++++++++++++++++++-
 lib/librte_pmd_i40e/i40e_ethdev.h |    5 +
 lib/librte_pmd_i40e/i40e_rxtx.c   |   10 ++
 5 files changed, 221 insertions(+), 1 deletions(-)

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 5bee910..75a4cd7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -212,6 +212,11 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
 
 #
+# Default tunneling UDP port (4789 is the IANA-assigned VxLAN port)
+#
+CONFIG_RTE_LIBRTE_TUNNEL_UDP_PORT=4789
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 1c6e115..4955684 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -538,6 +538,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->port = 0xff;
 
 	m->ol_flags = 0;
+	m->reserved = 0;
 	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
 			RTE_PKTMBUF_HEADROOM : m->buf_len;
 
@@ -607,6 +608,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
 	mi->pkt_len = mi->data_len;
 	mi->nb_segs = 1;
 	mi->ol_flags = md->ol_flags;
+	mi->reserved = md->reserved;
 
 	__rte_mbuf_sanity_check(mi, 1);
 	__rte_mbuf_sanity_check(md, 0);
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index a00d6ca..ddc7ea0 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -189,7 +189,7 @@ static int i40e_res_pool_alloc(struct i40e_res_pool_info *pool,
 static int i40e_dev_init_vlan(struct rte_eth_dev *dev);
 static int i40e_veb_release(struct i40e_veb *veb);
 static struct i40e_veb *i40e_veb_setup(struct i40e_pf *pf,
-						struct i40e_vsi *vsi);
+					struct i40e_vsi *vsi);
 static int i40e_pf_config_mq_rx(struct i40e_pf *pf);
 static int i40e_vsi_config_double_vlan(struct i40e_vsi *vsi, int on);
 static inline int i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi,
@@ -205,6 +205,14 @@ static int i40e_dev_rss_hash_update(struct rte_eth_dev *dev,
 				    struct rte_eth_rss_conf *rss_conf);
 static int i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				      struct rte_eth_rss_conf *rss_conf);
+static int i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
+				   struct rte_eth_udp_tunnel *udp_tunnel,
+				   uint8_t count);
+static int i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
+				   struct rte_eth_udp_tunnel *udp_tunnel,
+				   uint8_t count);
+static int i40e_pf_config_vxlan(struct i40e_pf *pf);
+
 
 /* Default hash key buffer for RSS */
 static uint32_t rss_key_default[I40E_PFQF_HKEY_MAX_INDEX + 1];
@@ -256,6 +264,8 @@ static struct eth_dev_ops i40e_eth_dev_ops = {
 	.reta_query                   = i40e_dev_rss_reta_query,
 	.rss_hash_update              = i40e_dev_rss_hash_update,
 	.rss_hash_conf_get            = i40e_dev_rss_hash_conf_get,
+	.udp_tunnel_add               = i40e_dev_udp_tunnel_add,
+	.udp_tunnel_del               = i40e_dev_udp_tunnel_del,
 };
 
 static struct eth_driver rte_i40e_pmd = {
@@ -2532,6 +2542,34 @@ i40e_vsi_dump_bw_config(struct i40e_vsi *vsi)
 	return 0;
 }
 
+static int
+i40e_vxlan_filters_init(struct i40e_pf *pf)
+{
+	uint8_t filter_index;
+	int ret = 0;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+	if (!(pf->flags & I40E_FLAG_VXLAN))
+		return 0;
+
+	/* Init first entry in tunneling UDP table */
+	ret = i40e_aq_add_udp_tunnel(hw, RTE_LIBRTE_TUNNEL_UDP_PORT,
+				I40E_AQC_TUNNEL_TYPE_VXLAN,
+				&filter_index, NULL);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to add UDP tunnel port %d "
+			"with index=%d\n", RTE_LIBRTE_TUNNEL_UDP_PORT,
+			filter_index);
+	} else {
+		pf->vxlan_bitmap |= 1;
+		pf->vxlan_ports[0] = RTE_LIBRTE_TUNNEL_UDP_PORT;
+		PMD_DRV_LOG(INFO, "Added UDP tunnel port %d with "
+			"index=%d\n", RTE_LIBRTE_TUNNEL_UDP_PORT, filter_index);
+	}
+
+	return ret;
+}
+
 /* Setup a VSI */
 struct i40e_vsi *
 i40e_vsi_setup(struct i40e_pf *pf,
@@ -3163,6 +3201,12 @@ i40e_vsi_rx_init(struct i40e_vsi *vsi)
 	uint16_t i;
 
 	i40e_pf_config_mq_rx(pf);
+
+	if (data->dev_conf.tunnel_type == RTE_TUNNEL_TYPE_VXLAN) {
+		pf->flags |= I40E_FLAG_VXLAN;
+		i40e_pf_config_vxlan(pf);
+	}
+
 	for (i = 0; i < data->nb_rx_queues; i++) {
 		ret = i40e_rx_queue_init(data->rx_queues[i]);
 		if (ret != I40E_SUCCESS) {
@@ -4079,6 +4123,150 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port)
+{
+	uint8_t i;
+
+	for (i = 0; i < I40E_MAX_PF_UDP_OFFLOAD_PORTS; i++) {
+		if (pf->vxlan_ports[i] == port)
+			return i;
+	}
+
+	return -1;
+}
+
+static int
+i40e_add_vxlan_port(struct i40e_pf *pf, uint16_t port)
+{
+	int  idx, ret;
+	uint8_t filter_idx;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+	if (!(pf->flags & I40E_FLAG_VXLAN)) {
+		PMD_DRV_LOG(ERR, "VxLAN tunneling mode is not configured\n");
+		return -EINVAL;
+	}
+
+	idx = i40e_get_vxlan_port_idx(pf, port);
+
+	/* Check if port already exists */
+	if (idx >= 0) {
+		PMD_DRV_LOG(ERR, "Port %d already offloaded\n", port);
+		return -1;
+	}
+
+	/* Now check if there is space to add the new port */
+	idx = i40e_get_vxlan_port_idx(pf, 0);
+	if (idx < 0) {
+		PMD_DRV_LOG(ERR, "Maximum number of UDP ports reached, "
+			"not adding port %d\n", port);
+		return -ENOSPC;
+	}
+
+	ret =  i40e_aq_add_udp_tunnel(hw, port, I40E_AQC_TUNNEL_TYPE_VXLAN,
+					&filter_idx, NULL);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to add VxLAN UDP port %d\n", port);
+		return -1;
+	}
+
+	PMD_DRV_LOG(INFO, "Added UDP tunnel port %d with AQ command, index %d\n",
+			port, filter_idx);
+
+	/* New port: add it and mark its index in the bitmap */
+	pf->vxlan_ports[idx] = port;
+	pf->vxlan_bitmap |= (1 << idx);
+
+	return 0;
+}
+
+static int
+i40e_del_vxlan_port(struct i40e_pf *pf, uint16_t port)
+{
+	int idx;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+	idx = i40e_get_vxlan_port_idx(pf, port);
+
+	if (idx < 0) {
+		PMD_DRV_LOG(ERR, "Port %d doesn't exist\n", port);
+		return -1;
+	}
+
+	if (i40e_aq_del_udp_tunnel(hw, idx, NULL) < 0) {
+		PMD_DRV_LOG(ERR, "Failed to delete VxLAN UDP port %d\n", port);
+		return -1;
+	}
+
+	PMD_DRV_LOG(INFO, "Deleted UDP tunnel port %d with AQ command, index %d\n",
+			port, idx);
+
+	pf->vxlan_ports[idx] = 0;
+	pf->vxlan_bitmap &= ~(1 << idx);
+
+	return 0;
+}
+
+/* configure port of UDP tunneling */
+static int
+i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
+		struct rte_eth_udp_tunnel *udp_tunnel, uint8_t count)
+{
+	uint16_t i;
+	int ret = 0;
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	for (i = 0; i < count; i++, udp_tunnel++) {
+		switch (udp_tunnel->prot_type) {
+		case RTE_TUNNEL_TYPE_VXLAN:
+			ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port);
+			break;
+
+		case RTE_TUNNEL_TYPE_GENEVE:
+		case RTE_TUNNEL_TYPE_TEREDO:
+			PMD_DRV_LOG(ERR, "Tunnel type is not supported now.\n");
+			ret = -1;
+			break;
+
+		default:
+			PMD_DRV_LOG(ERR, "Invalid tunnel type\n");
+			ret = -1;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static int
+i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
+			struct rte_eth_udp_tunnel *udp_tunnel, uint8_t count)
+{
+	uint16_t i;
+	int ret = 0;
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	for (i = 0; i < count; i++, udp_tunnel++) {
+		switch (udp_tunnel->prot_type) {
+		case RTE_TUNNEL_TYPE_VXLAN:
+			ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
+			break;
+		case RTE_TUNNEL_TYPE_GENEVE:
+		case RTE_TUNNEL_TYPE_TEREDO:
+			PMD_DRV_LOG(ERR, "Tunnel type is not supported now.\n");
+			ret = -1;
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Invalid tunnel type\n");
+			ret = -1;
+			break;
+		}
+	}
+
+	return ret;
+}
+
 /* Configure RSS */
 static int
 i40e_pf_config_rss(struct i40e_pf *pf)
@@ -4115,6 +4303,16 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	return i40e_hw_rss_hash_set(hw, &rss_conf);
 }
 
+/* Configure VxLAN */
+static int
+i40e_pf_config_vxlan(struct i40e_pf *pf)
+{
+	if (pf->flags & I40E_FLAG_VXLAN)
+		i40e_vxlan_filters_init(pf);
+
+	return 0;
+}
+
 static int
 i40e_pf_config_mq_rx(struct i40e_pf *pf)
 {
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index 64deef2..22d0628 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -60,6 +60,7 @@
 #define I40E_FLAG_HEADER_SPLIT_DISABLED (1ULL << 4)
 #define I40E_FLAG_HEADER_SPLIT_ENABLED  (1ULL << 5)
 #define I40E_FLAG_FDIR                  (1ULL << 6)
+#define I40E_FLAG_VXLAN                 (1ULL << 7)
 #define I40E_FLAG_ALL (I40E_FLAG_RSS | \
 		       I40E_FLAG_DCB | \
 		       I40E_FLAG_VMDQ | \
@@ -216,6 +217,10 @@ struct i40e_pf {
 	uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
 	uint16_t vf_nb_qps; /* The number of queue pairs of VF */
 	uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+
+	/* store VxLAN UDP ports */
+	uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
+	uint16_t vxlan_bitmap; /* Vxlan bit mask */
 };
 
 enum pending_msg {
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 099699c..abdf406 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -638,6 +638,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 			pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 			pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
 			mb->ol_flags = pkt_flags;
+
+			/* reserved is used to store packet type for RX side */
+			mb->reserved = (uint8_t)((qword1 &
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT);
 			if (pkt_flags & PKT_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp->wb.qword0.hi_dword.rss);
@@ -873,6 +878,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+		rxm->reserved = (uint8_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
+				I40E_RXD_QW1_PTYPE_SHIFT);
 		rxm->ol_flags = pkt_flags;
 		if (pkt_flags & PKT_RX_RSS_HASH)
 			rxm->hash.rss =
@@ -1027,6 +1034,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 		pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+		first_seg->reserved = (uint8_t)((qword1 &
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT);
 		first_seg->ol_flags = pkt_flags;
 		if (pkt_flags & PKT_RX_RSS_HASH)
 			rxm->hash.rss =
-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 3/8]app/test-pmd:test VxLAN packet identification
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 1/8]i40e:support VxLAN packet identification in librte_ether Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 4/8]librte_ether:add a common filter API Jijiang Liu
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Add commands to test VxLAN packet identification, which include:
 - commands to add/delete a VxLAN UDP port (an example session is shown below)
 - rxonly mode support to receive and dump VxLAN packets
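
For illustration, a session exercising these commands could look like the
following (the EAL options and port number are examples only; 4789 is already
registered by the i40e PMD when VxLAN mode is enabled, so an additional port,
8472, is added here):

    ./testpmd -c 0xf -n 4 -- -i --tunnel-type=1 --rxq=4 --txq=4
    testpmd> set fwd rxonly
    testpmd> rx_vxlan_port add 8472 0
    testpmd> start

The port can later be unregistered with 'rx_vxlan_port rm 8472 0'.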

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 app/test-pmd/cmdline.c    |   78 ++++++++++++++++++++++++++++++++++++++++++--
 app/test-pmd/parameters.c |   13 +++++++
 app/test-pmd/rxonly.c     |   49 ++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c    |    8 +++++
 app/test-pmd/testpmd.h    |    4 ++
 5 files changed, 148 insertions(+), 4 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 225f669..c0b7293 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -285,6 +285,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Set the outer VLAN TPID for Packet Filtering on"
 			" a port\n\n"
 
+			"rx_vxlan_port add (udp_port) (port_id)\n"
+			"    Add a UDP port for the VxLAN packet filter on a port\n\n"
+
+			"rx_vxlan_port rm (udp_port) (port_id)\n"
+			"    Remove a UDP port for the VxLAN packet filter on a port\n\n"
+
 			"tx_vlan set vlan_id (port_id)\n"
 			"    Set hardware insertion of VLAN ID in packets sent"
 			" on a port.\n\n"
@@ -296,13 +302,17 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Disable hardware insertion of a VLAN header in"
 			" packets sent on a port.\n\n"
 
-			"tx_checksum set mask (port_id)\n"
+			"tx_checksum set (mask) (port_id)\n"
 			"    Enable hardware insertion of checksum offload with"
-			" the 4-bit mask, 0~0xf, in packets sent on a port.\n"
+			" the 8-bit mask, 0~0xff, in packets sent on a port.\n"
 			"        bit 0 - insert ip   checksum offload if set\n"
 			"        bit 1 - insert udp  checksum offload if set\n"
 			"        bit 2 - insert tcp  checksum offload if set\n"
 			"        bit 3 - insert sctp checksum offload if set\n"
+			"        bit 4 - insert inner ip  checksum offload if set\n"
+			"        bit 5 - insert inner udp checksum offload if set\n"
+			"        bit 6 - insert inner tcp checksum offload if set\n"
+			"        bit 7 - insert inner sctp checksum offload if set\n"
 			"    Please check the NIC datasheet for HW limits.\n\n"
 
 			"set fwd (%s)\n"
@@ -2745,8 +2755,9 @@ cmdline_parse_inst_t cmd_tx_cksum_set = {
 	.f = cmd_tx_cksum_set_parsed,
 	.data = NULL,
 	.help_str = "enable hardware insertion of L3/L4checksum with a given "
-	"mask in packets sent on a port, the bit mapping is given as, Bit 0 for ip"
-	"Bit 1 for UDP, Bit 2 for TCP, Bit 3 for SCTP",
+	"mask in packets sent on a port; the bit mapping is: bit 0 for IP, "
+	"bit 1 for UDP, bit 2 for TCP, bit 3 for SCTP, bit 4 for inner IP, "
+	"bit 5 for inner UDP, bit 6 for inner TCP, bit 7 for inner SCTP",
 	.tokens = {
 		(void *)&cmd_tx_cksum_set_tx_cksum,
 		(void *)&cmd_tx_cksum_set_set,
@@ -6221,6 +6232,64 @@ cmdline_parse_inst_t cmd_vf_rate_limit = {
 	},
 };
 
+/* *** CONFIGURE TUNNEL UDP PORT *** */
+struct cmd_tunnel_udp_config {
+	cmdline_fixed_string_t cmd;
+	cmdline_fixed_string_t what;
+	uint16_t udp_port;
+	uint8_t port_id;
+};
+
+static void
+cmd_tunnel_udp_config_parsed(void *parsed_result,
+			  __attribute__((unused)) struct cmdline *cl,
+			  __attribute__((unused)) void *data)
+{
+	struct cmd_tunnel_udp_config *res = parsed_result;
+	struct rte_eth_udp_tunnel tunnel_udp;
+	int ret;
+
+	tunnel_udp.udp_port = res->udp_port;
+
+	if (!strcmp(res->cmd, "rx_vxlan_port"))
+		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+
+	if (!strcmp(res->what, "add"))
+		ret = rte_eth_dev_udp_tunnel_add(res->port_id, &tunnel_udp, 1);
+	else
+		ret = rte_eth_dev_udp_tunnel_delete(res->port_id, &tunnel_udp, 1);
+
+	if (ret < 0)
+		printf("udp tunnel port configuration error: (%s)\n", strerror(-ret));
+}
+
+cmdline_parse_token_string_t cmd_tunnel_udp_config_cmd =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
+				cmd, "rx_vxlan_port");
+cmdline_parse_token_string_t cmd_tunnel_udp_config_what =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_udp_config,
+				what, "add#rm");
+cmdline_parse_token_num_t cmd_tunnel_udp_config_udp_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
+				udp_port, UINT16);
+cmdline_parse_token_num_t cmd_tunnel_udp_config_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
+				port_id, UINT8);
+
+cmdline_parse_inst_t cmd_tunnel_udp_config = {
+	.f = cmd_tunnel_udp_config_parsed,
+	.data = (void *)0,
+	.help_str = "add/rm a tunneling UDP port filter: "
+			"rx_vxlan_port add udp_port port_id",
+	.tokens = {
+		(void *)&cmd_tunnel_udp_config_cmd,
+		(void *)&cmd_tunnel_udp_config_what,
+		(void *)&cmd_tunnel_udp_config_udp_port,
+		(void *)&cmd_tunnel_udp_config_port_id,
+		NULL,
+	},
+};
+
 /* *** CONFIGURE VM MIRROR VLAN/POOL RULE *** */
 struct cmd_set_mirror_mask_result {
 	cmdline_fixed_string_t set;
@@ -7514,6 +7583,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
 	(cmdline_parse_inst_t *)&cmd_queue_rate_limit,
 	(cmdline_parse_inst_t *)&cmd_vf_rate_limit,
+	(cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_mask,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_link,
 	(cmdline_parse_inst_t *)&cmd_reset_mirror_rule,
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9573a43..fda8c1d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -202,6 +202,10 @@ usage(char* progname)
 	printf("  --txpkts=X[,Y]*: set TX segment sizes.\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
+	printf("  --tunnel-type=N: set the tunneling packet type "
+	       "(0 <= N <= 4): 0 = non-tunneling packet, 1 = VxLAN, "
+	       "2 = GENEVE, 3 = TEREDO, 4 = NVGRE\n");
+
 }
 
 #ifdef RTE_LIBRTE_CMDLINE
@@ -600,6 +604,7 @@ launch_args_parse(int argc, char** argv)
 		{ "no-flush-rx",	0, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
 		{ "disable-link-check",		0, 0, 0 },
+		{ "tunnel-type",                1, 0, 0 },
 		{ 0, 0, 0, 0 },
 	};
 
@@ -1032,6 +1037,14 @@ launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "rxfreet must be >= 0\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "tunnel-type")) {
+				n = atoi(optarg);
+				if ((n >= 0) && (n < RTE_TUNNEL_TYPE_MAX))
+					rx_tunnel_type = (uint16_t)n;
+				else
+					rte_exit(EXIT_FAILURE, "tunnel-type must be 0-%d\n",
+						RTE_TUNNEL_TYPE_MAX - 1);
+			}
 			if (!strcmp(lgopts[opt_idx].name, "tx-queue-stats-mapping")) {
 				if (parse_queue_stats_mapping_config(optarg, TX)) {
 					rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 98c788b..a937a55 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -66,6 +66,8 @@
 #include <rte_ether.h>
 #include <rte_ethdev.h>
 #include <rte_string_fns.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
 
 #include "testpmd.h"
 
@@ -112,6 +114,9 @@ pkt_burst_receive(struct fwd_stream *fs)
 	uint64_t ol_flags;
 	uint16_t nb_rx;
 	uint16_t i;
+	uint8_t ptype;
+	uint8_t is_encapsulation;
+
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
 	uint64_t end_tsc;
@@ -152,6 +157,11 @@ pkt_burst_receive(struct fwd_stream *fs)
 		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
 		eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
 		ol_flags = mb->ol_flags;
+		ptype = mb->reserved;
+
+		is_encapsulation = IS_ETH_IPV4_TUNNEL(ptype) |
+					IS_ETH_IPV6_TUNNEL(ptype);
+
 		print_ether_addr("  src=", &eth_hdr->s_addr);
 		print_ether_addr(" - dst=", &eth_hdr->d_addr);
 		printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -166,6 +176,45 @@ pkt_burst_receive(struct fwd_stream *fs)
 			       mb->hash.fdir.hash, mb->hash.fdir.id);
 		if (ol_flags & PKT_RX_VLAN_PKT)
 			printf(" - VLAN tci=0x%x", mb->vlan_tci);
+		if (is_encapsulation) {
+			struct ipv4_hdr *ipv4_hdr;
+			struct ipv6_hdr *ipv6_hdr;
+			struct udp_hdr *udp_hdr;
+			uint8_t l2_len;
+			uint8_t l3_len;
+			uint8_t l4_len;
+			uint8_t l4_proto;
+			struct  vxlan_hdr *vxlan_hdr;
+
+			l2_len  = sizeof(struct ether_hdr);
+
+			/* The IPv4 options field is not supported */
+			if (IS_ETH_IPV4_TUNNEL(ptype)) {
+				l3_len = sizeof(struct ipv4_hdr);
+				ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len);
+				l4_proto = ipv4_hdr->next_proto_id;
+			} else {
+				l3_len = sizeof(struct ipv6_hdr);
+				ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len);
+				l4_proto = ipv6_hdr->proto;
+			}
+			if (l4_proto == IPPROTO_UDP) {
+				udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len);
+				l4_len = sizeof(struct udp_hdr);
+				vxlan_hdr = (struct vxlan_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len
+						 + l4_len);
+
+				printf(" - VxLAN packet: packet type=%d, "
+					"destination UDP port=%d, VNI=%d",
+					ptype, RTE_BE_TO_CPU_16(udp_hdr->dst_port),
+					rte_be_to_cpu_32(vxlan_hdr->vx_vni) >> 8);
+			}
+		}
+		printf(" - Receive queue=0x%x", (unsigned) fs->rx_queue);
 		printf("\n");
 		if (ol_flags != 0) {
 			int rxf;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 9f6cdc4..f2e66eb 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -243,6 +243,12 @@ uint16_t tx_free_thresh = 0; /* Use default values. */
 uint16_t tx_rs_thresh = 0; /* Use default values. */
 
 /*
+ * Configurable value of tunnel type.
+ */
+
+uint8_t rx_tunnel_type = 0; /* Use default values. */
+
+/*
  * Configurable value of TX queue flags.
  */
 uint32_t txq_flags = 0; /* No flags set. */
@@ -1689,6 +1695,8 @@ init_port_config(void)
 		port = &ports[pid];
 		port->dev_conf.rxmode = rx_mode;
 		port->dev_conf.fdir_conf = fdir_conf;
+		if (rx_tunnel_type == RTE_TUNNEL_TYPE_VXLAN)
+			port->dev_conf.tunnel_type = RTE_TUNNEL_TYPE_VXLAN;
 		if (nb_rxq > 1) {
 			port->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
 			port->dev_conf.rx_adv_conf.rss_conf.rss_hf = rss_hf;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9cbfeac..a797536 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -80,6 +80,9 @@ typedef uint16_t streamid_t;
 
 #define MAX_QUEUE_ID ((1 << (sizeof(queueid_t) * 8)) - 1)
 
+#define IS_ETH_IPV4_TUNNEL(ptype) ((ptype) > 58 && (ptype) < 87)
+#define IS_ETH_IPV6_TUNNEL(ptype) ((ptype) > 124 && (ptype) < 153)
+
 enum {
 	PORT_TOPOLOGY_PAIRED,
 	PORT_TOPOLOGY_CHAINED,
@@ -342,6 +345,7 @@ extern uint8_t rx_drop_en;
 extern uint16_t tx_free_thresh;
 extern uint16_t tx_rs_thresh;
 extern uint32_t txq_flags;
+extern uint8_t rx_tunnel_type;
 
 extern uint8_t dcb_config;
 extern uint8_t dcb_test;
-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 4/8]librte_ether:add a common filter API
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
                   ` (2 preceding siblings ...)
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 3/8]app/test-pmd:test VxLAN packet identification Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 5/8]i40e:implement API of VxLAN packet filter in librte_pmd_i40e Jijiang Liu
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Introduce a new filter framework in librte_ether. For the discussion of the
implementation, please refer to
http://dpdk.org/ml/archives/dev/2014-September/005179.html; the VxLAN tunnel
filter implementation is based on it. A usage sketch is shown below.
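
A sketch of how an application is expected to drive the framework for a VxLAN
tunnel filter (illustrative only: port_id and the addresses are placeholders
and error handling is omitted; rte_eth_dev_filter_supported() can be used
first to probe whether RTE_ETH_FILTER_TUNNEL is supported at all):

    struct ether_addr outer_mac = { .addr_bytes = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66} };
    struct ether_addr inner_mac = { .addr_bytes = {0x00, 0x00, 0x00, 0x00, 0x01, 0x02} };
    struct rte_eth_tunnel_filter_conf conf = {
            .outer_mac   = &outer_mac,
            .inner_mac   = &inner_mac,
            .inner_vlan  = 10,
            .ip_type     = RTE_TUNNEL_IPTYPE_IPV4,
            .filter_type = RTE_TUNNEL_FILTER_IMAC_IVLAN,
            .to_queue    = RTE_TUNNEL_FLAGS_TO_QUEUE,
            .tunnel_type = RTE_TUNNEL_TYPE_VXLAN,
            .tenant_id   = 1000,
            .queue_id    = 1,
    };

    rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_TUNNEL,
                            RTE_ETH_FILTER_OP_ADD, &conf);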
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>

---
 lib/librte_ether/Makefile       |    1 +
 lib/librte_ether/rte_eth_ctrl.h |  150 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c   |   32 ++++++++
 lib/librte_ether/rte_ethdev.h   |   56 +++++++++++---
 4 files changed, 227 insertions(+), 12 deletions(-)
 create mode 100644 lib/librte_ether/rte_eth_ctrl.h

diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index b310f8b..a461c31 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -46,6 +46,7 @@ SRCS-y += rte_ethdev.c
 #
 SYMLINK-y-include += rte_ether.h
 SYMLINK-y-include += rte_ethdev.h
+SYMLINK-y-include += rte_eth_ctrl.h
 
 # this lib depends upon:
 DEPDIRS-y += lib/librte_eal lib/librte_mempool lib/librte_ring lib/librte_mbuf
diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
new file mode 100644
index 0000000..47186e5
--- /dev/null
+++ b/lib/librte_ether/rte_eth_ctrl.h
@@ -0,0 +1,150 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_ETH_CTRL_H_
+#define _RTE_ETH_CTRL_H_
+
+/**
+ * @file
+ *
+ * Ethernet device features and related data structures used
+ * by control APIs should be defined in this file.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Feature filter types
+ */
+enum rte_filter_type {
+	RTE_ETH_FILTER_NONE = 0,
+	RTE_ETH_FILTER_RSS,
+	RTE_ETH_FILTER_FDIR,
+	RTE_ETH_FILTER_TUNNEL,
+	RTE_ETH_FILTER_MAX,
+};
+
+/**
+ * all generic operations to filters
+ */
+enum rte_filter_op {
+	RTE_ETH_FILTER_OP_NONE = 0, /**< used to check whether the filter type is supported */
+	RTE_ETH_FILTER_OP_ADD,      /**< add filter entry */
+	RTE_ETH_FILTER_OP_UPDATE,   /**< update filter entry */
+	RTE_ETH_FILTER_OP_DELETE,   /**< delete filter entry */
+	RTE_ETH_FILTER_OP_GET,      /**< get filter entry */
+	RTE_ETH_FILTER_OP_SET,      /**< set filter configuration */
+	RTE_ETH_FILTER_OP_GET_INFO, /**< get information of filter, such as status or statistics */
+	RTE_ETH_FILTER_OP_MAX,
+};
+
+/**** TUNNEL FILTER DATA DEFINITION ****/
+
+#define ETH_TUNNEL_FILTER_OMAC  0x01
+#define ETH_TUNNEL_FILTER_OIP   0x02
+#define ETH_TUNNEL_FILTER_TENID 0x04
+#define ETH_TUNNEL_FILTER_IMAC  0x08
+#define ETH_TUNNEL_FILTER_IVLAN 0x10
+#define ETH_TUNNEL_FILTER_IIP   0x20
+
+#define RTE_TUNNEL_FLAGS_TO_QUEUE 1
+
+/**
+ * Tunnel filter types.
+ */
+enum rte_tunnel_filter_type {
+	RTE_TUNNEL_FILTER_TYPE_NONE = 0,
+	RTE_TUNNEL_FILTER_OIP = ETH_TUNNEL_FILTER_OIP,
+	RTE_TUNNEL_FILTER_IMAC_IVLAN =
+		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+	RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID =
+		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN |
+		ETH_TUNNEL_FILTER_TENID,
+	RTE_TUNNEL_FILTER_IMAC_TENID =
+		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
+	RTE_TUNNEL_FILTER_IMAC = ETH_TUNNEL_FILTER_IMAC,
+	RTE_TUNNEL_FILTER_OMAC_TENID_IMAC =
+		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
+		ETH_TUNNEL_FILTER_IMAC,
+	RTE_TUNNEL_FILTER_IIP = ETH_TUNNEL_FILTER_IIP,
+	RTE_TUNNEL_FILTER_TYPE_MAX,
+};
+
+/**
+ *  Select IPv4 or IPv6 for tunnel filters.
+ */
+enum rte_tunnel_iptype {
+	RTE_TUNNEL_IPTYPE_IPV4 = 0, /**< IPv4. */
+	RTE_TUNNEL_IPTYPE_IPV6,     /**< IPv6. */
+};
+
+/**
+ * Tunneled type.
+ */
+enum rte_eth_tunnel_type {
+	RTE_TUNNEL_TYPE_NONE = 0,
+	RTE_TUNNEL_TYPE_VXLAN,
+	RTE_TUNNEL_TYPE_GENEVE,
+	RTE_TUNNEL_TYPE_TEREDO,
+	RTE_TUNNEL_TYPE_NVGRE,
+	RTE_TUNNEL_TYPE_MAX,
+};
+
+/**
+ * Tunnel Packet filter configuration.
+ */
+struct rte_eth_tunnel_filter_conf {
+	struct ether_addr *outer_mac;  /**< Outer MAC address filter. */
+	struct ether_addr *inner_mac;  /**< Inner MAC address filter. */
+	uint16_t inner_vlan;           /**< Inner VLAN filter. */
+	enum rte_tunnel_iptype ip_type; /**< IP address type. */
+	union {
+		uint32_t ipv4_addr;    /**< IPv4 source address to match. */
+		uint32_t ipv6_addr[4]; /**< IPv6 source address to match. */
+	} ip_addr; /**< IPv4/IPv6 source address to match (union of above). */
+
+	uint8_t filter_type;           /**< Filter type. */
+	uint8_t to_queue;       /**< Use MAC and VLAN to point to a queue. */
+	enum rte_eth_tunnel_type tunnel_type; /**< Tunnel Type. */
+	uint32_t tenant_id;            /**< Tenant ID to match. */
+	uint16_t queue_id;             /**< Queue assigned to when matched. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ETH_CTRL_H_ */
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 642d312..93f9f6f 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3202,3 +3202,35 @@ rte_eth_dev_get_flex_filter(uint8_t port_id, uint16_t index,
 	return (*dev->dev_ops->get_flex_filter)(dev, index, filter,
 						rx_queue);
 }
+
+int
+rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
+{
+	struct rte_eth_dev *dev;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
+				RTE_ETH_FILTER_OP_NONE, NULL);
+}
+
+int
+rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
+		       enum rte_filter_op filter_op, void *arg)
+{
+	struct rte_eth_dev *dev;
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
+}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 615fec0..d1eeb4f 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -177,6 +177,7 @@ extern "C" {
 #include <rte_pci.h>
 #include <rte_mbuf.h>
 #include "rte_ether.h"
+#include "rte_eth_ctrl.h"
 
 /**
  * A structure used to retrieve statistics for an Ethernet port.
@@ -716,18 +717,6 @@ struct rte_eth_udp_tunnel {
 };
 
 /**
- * Tunneled type.
- */
-enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_MAX,
-};
-
-/**
  *  Possible l4type of FDIR filters.
  */
 enum rte_l4type {
@@ -1415,6 +1404,12 @@ typedef int (*eth_get_flex_filter_t)(struct rte_eth_dev *dev,
 			uint16_t *rx_queue);
 /**< @internal Get a flex filter rule on an Ethernet device */
 
+typedef int (*eth_filter_ctrl_t)(struct rte_eth_dev *dev,
+				 enum rte_filter_type filter_type,
+				 enum rte_filter_op filter_op,
+				 void *arg);
+/**< @internal Perform an operation on a filter type on an Ethernet device */
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -1525,6 +1520,7 @@ struct eth_dev_ops {
 	eth_add_flex_filter_t          add_flex_filter;      /**< add flex filter. */
 	eth_remove_flex_filter_t       remove_flex_filter;   /**< remove flex filter. */
 	eth_get_flex_filter_t          get_flex_filter;      /**< get flex filter. */
+	eth_filter_ctrl_t              filter_ctrl;          /**< common filter control*/
 };
 
 /**
@@ -3689,6 +3685,42 @@ int rte_eth_dev_remove_flex_filter(uint8_t port_id, uint16_t index);
 int rte_eth_dev_get_flex_filter(uint8_t port_id, uint16_t index,
 			struct rte_flex_filter *filter, uint16_t *rx_queue);
 
+/**
+ * Check whether the filter type is supported on an Ethernet device.
+ * All the supported filter types are defined in 'rte_eth_ctrl.h'.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param filter_type
+ *   The filter type to check.
+ * @return
+ *   - (0) if successful.
+ *   - (-ENOTSUP) if hardware doesn't support this filter type.
+ *   - (-ENODEV) if *port_id* invalid.
+ */
+int rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type);
+
+/**
+ * Perform an operation on a filter of an Ethernet device.
+ * All the supported operations and filter types are defined in 'rte_eth_ctrl.h'.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param filter_type
+ *   The filter type.
+ * @param filter_op
+ *   The operation to perform on the filter.
+ * @param arg
+ *   A pointer to arguments defined specifically for the operation.
+ * @return
+ *   - (0) if successful.
+ *   - (-ENOTSUP) if hardware doesn't support the filter type or operation.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - others depend on the specific operation's implementation.
+ */
+int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
+			enum rte_filter_op filter_op, void *arg);
+
 #ifdef __cplusplus
 }
 #endif
-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 5/8]i40e:implement API of VxLAN packet filter in librte_pmd_i40e
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
                   ` (3 preceding siblings ...)
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 4/8]librte_ether:add a common filter API Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 6/8]app/testpmd:test VxLAN packet filter API Jijiang Liu
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Implement the VxLAN tunnel filter in librte_pmd_i40e, which includes:
 - the i40e_dev_filter_ctrl() function
 - the i40e_dev_tunnel_filter_set() function
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 lib/librte_pmd_i40e/i40e_ethdev.c |  202 +++++++++++++++++++++++++++++++++++++
 1 files changed, 202 insertions(+), 0 deletions(-)

diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index ddc7ea0..c9c881a 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -48,6 +48,7 @@
 #include <rte_malloc.h>
 #include <rte_memcpy.h>
 #include <rte_dev.h>
+#include <rte_eth_ctrl.h>
 
 #include "i40e_logs.h"
 #include "i40e/i40e_register_x710_int.h"
@@ -211,7 +212,13 @@ static int i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
 static int i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
 				   struct rte_eth_udp_tunnel *udp_tunnel,
 				   uint8_t count);
+static int i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
+			     struct rte_eth_tunnel_filter_conf *tunnel_filter,
+			     uint8_t add);
 static int i40e_pf_config_vxlan(struct i40e_pf *pf);
+static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
+			       enum rte_filter_type filter_type,
+			       enum rte_filter_op filter_op, void *arg);
 
 
 /* Default hash key buffer for RSS */
@@ -266,6 +273,7 @@ static struct eth_dev_ops i40e_eth_dev_ops = {
 	.rss_hash_conf_get            = i40e_dev_rss_hash_conf_get,
 	.udp_tunnel_add               = i40e_dev_udp_tunnel_add,
 	.udp_tunnel_del               = i40e_dev_udp_tunnel_del,
+	.filter_ctrl                  = i40e_dev_filter_ctrl,
 };
 
 static struct eth_driver rte_i40e_pmd = {
@@ -4124,6 +4132,110 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 }
 
 static int
+i40e_dev_get_filter_type(enum rte_tunnel_filter_type filter_type,
+			uint16_t *flag)
+{
+	switch (filter_type) {
+	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
+		break;
+	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
+		break;
+	case RTE_TUNNEL_FILTER_IMAC_TENID:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
+		break;
+	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
+		break;
+	case RTE_TUNNEL_FILTER_IMAC:
+		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid tunnel filter type\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+i40e_dev_tunnel_filter_set(struct i40e_pf *pf,
+			struct rte_eth_tunnel_filter_conf *tunnel_filter,
+			uint8_t add)
+{
+	uint16_t ip_type;
+	uint8_t tun_type = 0;
+	int ret = 0;
+	int val;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct i40e_aqc_add_remove_cloud_filters_element_data  *cld_filter;
+	struct i40e_aqc_add_remove_cloud_filters_element_data  *pfilter;
+
+	cld_filter = rte_zmalloc("tunnel_filter",
+		sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data),
+		0);
+
+	if (NULL == cld_filter) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory.\n");
+		return -ENOMEM;
+	}
+	pfilter = cld_filter;
+
+	(void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac,
+			sizeof(struct ether_addr));
+	(void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac,
+			sizeof(struct ether_addr));
+
+	pfilter->inner_vlan = tunnel_filter->inner_vlan;
+	if (tunnel_filter->ip_type == RTE_TUNNEL_IPTYPE_IPV4) {
+		ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
+		(void)rte_memcpy(&pfilter->ipaddr.v4.data,
+				&tunnel_filter->ip_addr,
+				sizeof(pfilter->ipaddr.v4.data));
+	} else {
+		ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
+		(void)rte_memcpy(&pfilter->ipaddr.v6.data,
+				&tunnel_filter->ip_addr,
+				sizeof(pfilter->ipaddr.v6.data));
+	}
+
+	/* check tunnel type */
+	switch (tunnel_filter->tunnel_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN;
+		break;
+	default:
+		/* Other tunnel types are not supported. */
+		PMD_DRV_LOG(ERR, "tunnel type is not supported.\n");
+		rte_free(cld_filter);
+		return -EINVAL;
+	}
+
+	val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
+						&pfilter->flags);
+	if (val < 0) {
+		rte_free(cld_filter);
+		return -EINVAL;
+	}
+
+	pfilter->flags |= I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+		(tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
+	pfilter->tenant_id = tunnel_filter->tenant_id;
+	pfilter->queue_number = tunnel_filter->queue_id;
+
+	if (add)
+		ret = i40e_aq_add_cloud_filters(hw, vsi->seid, cld_filter, 1);
+	else
+		ret = i40e_aq_remove_cloud_filters(hw, vsi->seid,
+						cld_filter, 1);
+
+	rte_free(cld_filter);
+	return ret;
+}
+
+static int
 i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port)
 {
 	uint8_t i;
@@ -4314,6 +4426,96 @@ i40e_pf_config_vxlan(struct i40e_pf *pf)
 }
 
 static int
+i40e_tunnel_filter_param_check(struct i40e_pf *pf,
+			struct rte_eth_tunnel_filter_conf *filter)
+{
+	if (pf == NULL || filter == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameter\n");
+		return -EINVAL;
+	}
+
+	if (filter->queue_id >= pf->dev_data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid queue ID\n");
+		return -EINVAL;
+	}
+
+	if (filter->inner_vlan > ETHER_MAX_VLAN_ID) {
+		PMD_DRV_LOG(ERR, "Invalid inner VLAN ID\n");
+		return -EINVAL;
+	}
+
+	if ((filter->filter_type & ETH_TUNNEL_FILTER_OMAC) &&
+		(is_zero_ether_addr(filter->outer_mac))) {
+		PMD_DRV_LOG(ERR, "Cannot add NULL outer MAC address\n");
+		return -EINVAL;
+	}
+
+	if ((filter->filter_type & ETH_TUNNEL_FILTER_IMAC) &&
+		(is_zero_ether_addr(filter->inner_mac))) {
+		PMD_DRV_LOG(ERR, "Cannot add NULL inner MAC address\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+i40e_tunnel_filter_handle(struct i40e_pf *pf, enum rte_filter_op filter_op,
+			void *arg)
+{
+	struct rte_eth_tunnel_filter_conf *filter;
+	int ret = I40E_SUCCESS;
+	uint8_t add = 1;
+
+	filter = (struct rte_eth_tunnel_filter_conf *)(arg);
+
+	if (i40e_tunnel_filter_param_check(pf, filter) < 0)
+		return I40E_ERR_PARAM;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_OP_NONE:
+		if (!(pf->flags & I40E_FLAG_VXLAN))
+			ret = I40E_NOT_SUPPORTED;
+		break;
+	case RTE_ETH_FILTER_OP_ADD:
+		ret = i40e_dev_tunnel_filter_set(pf, filter, add);
+		break;
+	case RTE_ETH_FILTER_OP_DELETE:
+		add = 0;
+		ret = i40e_dev_tunnel_filter_set(pf, filter, add);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown operation %u\n", filter_op);
+		ret = I40E_ERR_PARAM;
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * Perform an operation on a given filter type on the NIC.
+ */
+static int
+i40e_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg) {
+
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret = 0;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_TUNNEL:
+		ret = i40e_tunnel_filter_handle(pf, filter_op, arg);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unsupported filter type %u\n", filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
 i40e_pf_config_mq_rx(struct i40e_pf *pf)
 {
 	if (!pf->dev_data->sriov.active) {
-- 
1.7.7.6


* [dpdk-dev] [PATCH v4 6/8]app/testpmd:test VxLAN packet filter API
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
                   ` (4 preceding siblings ...)
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 5/8]i40e:implement API of VxLAN packet filter in librte_pmd_i40e Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload Jijiang Liu
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 8/8]app/testpmd:test " Jijiang Liu
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Add a tunnel_filter command to testpmd to test the VxLAN packet filter API
(an example invocation is shown below).
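
For example, adding and then removing a filter that directs VxLAN packets with
inner MAC 00:00:00:00:01:02, inner VLAN 2 and VNI 1000 to queue 1 of port 0
could look like this (all addresses and IDs are arbitrary examples; the outer
MAC and IP arguments must still be supplied even for filter types that ignore
them):

    testpmd> tunnel_filter add 0 11:22:33:44:55:66 00:00:00:00:01:02 192.168.2.2 2 vxlan imac-ivlan-tenid 1000 1
    testpmd> tunnel_filter rm 0 11:22:33:44:55:66 00:00:00:00:01:02 192.168.2.2 2 vxlan imac-ivlan-tenid 1000 1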
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 app/test-pmd/cmdline.c |  152 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 152 insertions(+), 0 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c0b7293..a74e9dc 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -285,6 +285,14 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Set the outer VLAN TPID for Packet Filtering on"
 			" a port\n\n"
 
+			"tunnel_filter add (port_id) (outer_mac) (inner_mac) (ip_addr) "
+			"(inner_vlan) (tunnel_type) (filter_type) (tenant_id) (queue_id)\n"
+			"    Add a tunnel filter on a port.\n\n"
+
+			"tunnel_filter rm (port_id) (outer_mac) (inner_mac) (ip_addr) "
+			"(inner_vlan) (tunnel_type) (filter_type) (tenant_id) (queue_id)\n"
+			"    Remove a tunnel filter from a port.\n\n"
+
 			"rx_vxlan_port add (udp_port) (port_id)\n"
 			"    Add an UDP port for VxLAN packet filter on a port\n\n"
 
@@ -6232,6 +6240,149 @@ cmdline_parse_inst_t cmd_vf_rate_limit = {
 	},
 };
 
+/* *** ADD TUNNEL FILTER OF A PORT *** */
+struct cmd_tunnel_filter_result {
+	cmdline_fixed_string_t cmd;
+	cmdline_fixed_string_t what;
+	uint8_t port_id;
+	struct ether_addr outer_mac;
+	struct ether_addr inner_mac;
+	cmdline_ipaddr_t ip_value;
+	uint16_t inner_vlan;
+	cmdline_fixed_string_t tunnel_type;
+	cmdline_fixed_string_t filter_type;
+	uint32_t tenant_id;
+	uint16_t queue_num;
+};
+
+static void
+cmd_tunnel_filter_parsed(void *parsed_result,
+			  __attribute__((unused)) struct cmdline *cl,
+			  __attribute__((unused)) void *data)
+{
+	struct cmd_tunnel_filter_result *res = parsed_result;
+	struct rte_eth_tunnel_filter_conf tunnel_filter_conf;
+	int ret = 0;
+
+	tunnel_filter_conf.outer_mac = &res->outer_mac;
+	tunnel_filter_conf.inner_mac = &res->inner_mac;
+	tunnel_filter_conf.inner_vlan = res->inner_vlan;
+
+	if (res->ip_value.family == AF_INET) {
+		tunnel_filter_conf.ip_addr.ipv4_addr =
+			res->ip_value.addr.ipv4.s_addr;
+		tunnel_filter_conf.ip_type = RTE_TUNNEL_IPTYPE_IPV4;
+	} else {
+		memcpy(&(tunnel_filter_conf.ip_addr.ipv6_addr),
+			&(res->ip_value.addr.ipv6),
+			sizeof(struct in6_addr));
+		tunnel_filter_conf.ip_type = RTE_TUNNEL_IPTYPE_IPV6;
+	}
+
+	if (!strcmp(res->filter_type, "imac-ivlan"))
+		tunnel_filter_conf.filter_type = RTE_TUNNEL_FILTER_IMAC_IVLAN;
+	else if (!strcmp(res->filter_type, "imac-ivlan-tenid"))
+		tunnel_filter_conf.filter_type =
+			RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID;
+	else if (!strcmp(res->filter_type, "imac-tenid"))
+		tunnel_filter_conf.filter_type = RTE_TUNNEL_FILTER_IMAC_TENID;
+	else if (!strcmp(res->filter_type, "imac"))
+		tunnel_filter_conf.filter_type = RTE_TUNNEL_FILTER_IMAC;
+	else if (!strcmp(res->filter_type, "omac-imac-tenid"))
+		tunnel_filter_conf.filter_type =
+			RTE_TUNNEL_FILTER_OMAC_TENID_IMAC;
+	else {
+		printf("The filter type is not supported.\n");
+		return;
+	}
+
+	tunnel_filter_conf.to_queue = RTE_TUNNEL_FLAGS_TO_QUEUE;
+
+	if (!strcmp(res->tunnel_type, "vxlan"))
+		tunnel_filter_conf.tunnel_type = RTE_TUNNEL_TYPE_VXLAN;
+	else {
+		printf("Only VxLAN is supported now.\n");
+		return;
+	}
+
+	tunnel_filter_conf.tenant_id = res->tenant_id;
+	tunnel_filter_conf.queue_id = res->queue_num;
+	if (!strcmp(res->what, "add"))
+		ret = rte_eth_dev_filter_ctrl(res->port_id,
+					RTE_ETH_FILTER_TUNNEL,
+					RTE_ETH_FILTER_OP_ADD,
+					&tunnel_filter_conf);
+	else
+		ret = rte_eth_dev_filter_ctrl(res->port_id,
+					RTE_ETH_FILTER_TUNNEL,
+					RTE_ETH_FILTER_OP_DELETE,
+					&tunnel_filter_conf);
+	if (ret < 0)
+		printf("cmd_tunnel_filter_parsed error: (%s)\n",
+				strerror(-ret));
+
+}
+cmdline_parse_token_string_t cmd_tunnel_filter_cmd =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	cmd, "tunnel_filter");
+cmdline_parse_token_string_t cmd_tunnel_filter_what =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	what, "add#rm");
+cmdline_parse_token_num_t cmd_tunnel_filter_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	port_id, UINT8);
+cmdline_parse_token_etheraddr_t cmd_tunnel_filter_outer_mac =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	outer_mac);
+cmdline_parse_token_etheraddr_t cmd_tunnel_filter_inner_mac =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	inner_mac);
+cmdline_parse_token_num_t cmd_tunnel_filter_inner_vlan =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	inner_vlan, UINT16);
+cmdline_parse_token_ipaddr_t cmd_tunnel_filter_ip_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_tunnel_filter_result,
+	ip_value);
+cmdline_parse_token_string_t cmd_tunnel_filter_tunnel_type =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	tunnel_type, "vxlan");
+
+cmdline_parse_token_string_t cmd_tunnel_filter_filter_type =
+	TOKEN_STRING_INITIALIZER(struct cmd_tunnel_filter_result,
+	filter_type, "imac-ivlan#imac-ivlan-tenid#imac-tenid#"
+		"imac#omac-imac-tenid");
+cmdline_parse_token_num_t cmd_tunnel_filter_tenant_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	tenant_id, UINT32);
+cmdline_parse_token_num_t cmd_tunnel_filter_queue_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
+	queue_num, UINT16);
+
+cmdline_parse_inst_t cmd_tunnel_filter = {
+	.f = cmd_tunnel_filter_parsed,
+	.data = (void *)0,
+	.help_str = "add/rm a tunnel filter on a port: "
+			"tunnel_filter add port_id outer_mac inner_mac ip "
+			"inner_vlan tunnel_type(vxlan) filter_type "
+			"(imac-ivlan|imac-ivlan-tenid|imac-tenid|"
+			"imac|omac-imac-tenid) "
+			"tenant_id queue_num",
+	.tokens = {
+		(void *)&cmd_tunnel_filter_cmd,
+		(void *)&cmd_tunnel_filter_what,
+		(void *)&cmd_tunnel_filter_port_id,
+		(void *)&cmd_tunnel_filter_outer_mac,
+		(void *)&cmd_tunnel_filter_inner_mac,
+		(void *)&cmd_tunnel_filter_ip_value,
+		(void *)&cmd_tunnel_filter_inner_vlan,
+		(void *)&cmd_tunnel_filter_tunnel_type,
+		(void *)&cmd_tunnel_filter_filter_type,
+		(void *)&cmd_tunnel_filter_tenant_id,
+		(void *)&cmd_tunnel_filter_queue_num,
+		NULL,
+	},
+};
+
 /* *** CONFIGURE TUNNEL UDP PORT *** */
 struct cmd_tunnel_udp_config {
 	cmdline_fixed_string_t cmd;
@@ -7583,6 +7734,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
 	(cmdline_parse_inst_t *)&cmd_queue_rate_limit,
 	(cmdline_parse_inst_t *)&cmd_vf_rate_limit,
+	(cmdline_parse_inst_t *)&cmd_tunnel_filter,
 	(cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_mask,
 	(cmdline_parse_inst_t *)&cmd_set_mirror_link,
-- 
1.7.7.6
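
For reference, the tunnel_filter command above is a thin wrapper over the common
filter-control API added earlier in this series: the parse handler fills an
rte_eth_tunnel_filter_conf and hands it to rte_eth_dev_filter_ctrl(). Below is a
minimal sketch of the equivalent direct call, using only the structure fields and
enum values visible in the handler; the header list and the concrete VNI/queue
values are illustrative assumptions, not taken from the patch.

#include <string.h>
#include <rte_ether.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>	/* assumed home of the tunnel filter enums (patch 4/8) */

/* Sketch: steer VxLAN packets matching an inner MAC + tenant ID to queue 1. */
static int
add_vxlan_imac_tenid_filter(uint8_t port_id, struct ether_addr *inner_mac,
			    uint32_t ipv4_addr_be, uint32_t vni)
{
	struct rte_eth_tunnel_filter_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.inner_mac = inner_mac;
	conf.ip_addr.ipv4_addr = ipv4_addr_be;
	conf.ip_type = RTE_TUNNEL_IPTYPE_IPV4;
	conf.tunnel_type = RTE_TUNNEL_TYPE_VXLAN;
	conf.filter_type = RTE_TUNNEL_FILTER_IMAC_TENID;
	conf.to_queue = RTE_TUNNEL_FLAGS_TO_QUEUE;
	conf.tenant_id = vni;
	conf.queue_id = 1;

	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_TUNNEL,
				       RTE_ETH_FILTER_OP_ADD, &conf);
}

The matching testpmd invocation would look like, for example:
tunnel_filter add 0 00:00:00:00:00:00 00:11:22:33:44:55 192.168.1.1 0 vxlan imac-tenid 1000 1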

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
                   ` (5 preceding siblings ...)
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 6/8]app/testpmd:test VxLAN packet filter API Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  2014-09-29 10:50   ` Bruce Richardson
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 8/8]app/testpmd:test " Jijiang Liu
  7 siblings, 1 reply; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Support VxLAN Tx checksum offload, which includes
  - outer L3(IP) checksum offload
  - inner L3(IP) checksum offload
  - inner L4(UDP, TCP and SCTP) checksum offload
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 lib/librte_mbuf/rte_mbuf.h        |    2 +
 lib/librte_pmd_i40e/i40e_ethdev.c |    4 +-
 lib/librte_pmd_i40e/i40e_rxtx.c   |   47 ++++++++++++++++++++++++++++++++++--
 3 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4955684..1f3f4eb 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -86,6 +86,8 @@ extern "C" {
 #define PKT_RX_IEEE1588_PTP  0x0200 /**< RX IEEE1588 L2 Ethernet PT Packet. */
 #define PKT_RX_IEEE1588_TMST 0x0400 /**< RX IEEE1588 L2/L4 timestamped packet.*/
 
+#define PKT_TX_VXLAN_CKSUM   0x0001 /**< Checksum of TX VxLAN pkt. computed by NIC. */
+#define PKT_TX_IVLAN_PKT     0x0002 /**< TX packet is VxLAN packet with an inner VLAN. */
 #define PKT_TX_VLAN_PKT      0x0800 /**< TX packet is a 802.1q VLAN packet. */
 #define PKT_TX_IP_CKSUM      0x1000 /**< IP cksum of TX pkt. computed by NIC. */
 #define PKT_TX_IPV4_CSUM     0x1000 /**< Alias of PKT_TX_IP_CKSUM. */
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index a2d9111..10f15c9 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -2566,13 +2566,13 @@ i40e_vxlan_filters_init(struct i40e_pf *pf)
 				&filter_index, NULL);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "Failed to add UDP tunnel port %d "
-			"with index=%d\n", RTE_VXLAN_UDP_PORT,
+			"with index=%d\n", RTE_LIBRTE_TUNNEL_UDP_PORT,
 			 filter_index);
 	} else {
 		pf->vxlan_bitmap |= 1;
 		pf->vxlan_ports[0] = RTE_LIBRTE_TUNNEL_UDP_PORT;
 		PMD_DRV_LOG(INFO, "Added UDP tunnel port %d with "
-			"index=%d\n", RTE_VXLAN_UDP_PORT, filter_index);
+			"index=%d\n", RTE_LIBRTE_TUNNEL_UDP_PORT, filter_index);
 	}
 
 	return ret;
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index abdf406..821457c 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -410,12 +410,16 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
 	return ip_ptype_map[ptype];
 }
 
+#define L4TUN_LEN (sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr)\
+			 + sizeof(struct ether_hdr))
 static inline void
 i40e_txd_enable_checksum(uint32_t ol_flags,
 			uint32_t *td_cmd,
 			uint32_t *td_offset,
 			uint8_t l2_len,
-			uint8_t l3_len)
+			uint8_t l3_len,
+			uint8_t inner_l3_len,
+			uint32_t *cd_tunneling)
 {
 	if (!l2_len) {
 		PMD_DRV_LOG(DEBUG, "L2 length set to 0");
@@ -428,6 +432,31 @@ i40e_txd_enable_checksum(uint32_t ol_flags,
 		return;
 	}
 
+	/* VxLAN packet TX checksum offload */
+	if (unlikely(ol_flags & PKT_TX_VXLAN_CKSUM)) {
+		uint8_t l4tun_len;
+
+		/* packet with inner VLAN */
+		if (ol_flags  & PKT_TX_IVLAN_PKT)
+			l4tun_len = L4TUN_LEN + sizeof(struct vlan_hdr);
+		else
+			l4tun_len = L4TUN_LEN;
+
+		if (ol_flags & PKT_TX_IPV4_CSUM)
+			*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
+		else if (ol_flags & PKT_TX_IPV6)
+			*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+
+		/* Now set the ctx descriptor fields */
+		*cd_tunneling |= (l3_len >> 2) <<
+				I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT |
+				I40E_TXD_CTX_UDP_TUNNELING |
+				(l4tun_len >> 1) <<
+				I40E_TXD_CTX_QW0_NATLEN_SHIFT;
+
+		l3_len = inner_l3_len;
+	}
+
 	/* Enable L3 checksum offloads */
 	if (ol_flags & PKT_TX_IPV4_CSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
@@ -1080,6 +1109,9 @@ i40e_calc_context_desc(uint64_t flags)
 {
 	uint16_t mask = 0;
 
+	if (flags & PKT_TX_VXLAN_CKSUM)
+		mask |= PKT_TX_VXLAN_CKSUM;
+
 #ifdef RTE_LIBRTE_IEEE1588
 	mask |= PKT_TX_IEEE1588_TMST;
 #endif
@@ -1099,6 +1131,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	volatile struct i40e_tx_desc *txr;
 	struct rte_mbuf *tx_pkt;
 	struct rte_mbuf *m_seg;
+	uint32_t cd_tunneling_params;
 	uint16_t tx_id;
 	uint16_t nb_tx;
 	uint32_t td_cmd;
@@ -1108,6 +1141,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint64_t ol_flags;
 	uint8_t l2_len;
 	uint8_t l3_len;
+	uint8_t inner_l3_len;
 	uint16_t nb_used;
 	uint16_t nb_ctx;
 	uint16_t tx_last;
@@ -1137,6 +1171,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		l2_len = tx_pkt->l2_len;
 		l3_len = tx_pkt->l3_len;
 
+		/**
+		 * The reserved field in the mbuf is used to store the
+		 * inner L3 header length.
+		 */
+		inner_l3_len = tx_pkt->reserved;
+
 		/* Calculate the number of context descriptors needed. */
 		nb_ctx = i40e_calc_context_desc(ol_flags);
 
@@ -1183,15 +1223,16 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		td_cmd |= I40E_TX_DESC_CMD_ICRC;
 
 		/* Enable checksum offloading */
+		cd_tunneling_params = 0;
 		i40e_txd_enable_checksum(ol_flags, &td_cmd, &td_offset,
-							l2_len, l3_len);
+						l2_len, l3_len, inner_l3_len,
+						&cd_tunneling_params);
 
 		if (unlikely(nb_ctx)) {
 			/* Setup TX context descriptor if required */
 			volatile struct i40e_tx_context_desc *ctx_txd =
 				(volatile struct i40e_tx_context_desc *)\
 							&txr[tx_id];
-			uint32_t cd_tunneling_params = 0;
 			uint16_t cd_l2tag2 = 0;
 			uint64_t cd_type_cmd_tso_mss =
 				I40E_TX_DESC_DTYPE_CONTEXT;
-- 
1.7.7.6
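
From the application side, the contract this patch establishes (and which the
testpmd patch that follows exercises) is: set PKT_TX_VXLAN_CKSUM together with
the usual L3/L4 checksum flags, put the outer header lengths in l2_len/l3_len,
and pass the inner L3 header length through the mbuf's reserved field. A hedged
sketch, with flag and field names taken from the diffs above and everything
else illustrative:

#include <rte_mbuf.h>

/* Sketch: request HW checksum offload for a VxLAN-encapsulated IPv4/UDP
 * packet on i40e, following the convention in this patch. The inner IPv4
 * header checksum field is expected to be zeroed beforehand (see the
 * testpmd patch). */
static void
request_vxlan_tx_cksum(struct rte_mbuf *m, uint8_t outer_l2_len,
		       uint8_t outer_l3_len, uint8_t inner_l3_len)
{
	m->l2_len = outer_l2_len;	/* outer Ethernet (+ VLAN) length */
	m->l3_len = outer_l3_len;	/* outer IP header length */
	m->reserved = inner_l3_len;	/* inner L3 length, read back in i40e_xmit_pkts() */
	m->ol_flags |= PKT_TX_VXLAN_CKSUM	/* treat the packet as a VxLAN tunnel */
		| PKT_TX_IP_CKSUM		/* outer and inner IPv4 checksum */
		| PKT_TX_UDP_CKSUM;		/* inner L4 (UDP) checksum */
}

Note that the context descriptor encodes the outer IP length in 4-byte words and
the tunnelling length (outer UDP + VxLAN + inner Ethernet, L4TUN_LEN above) in
2-byte words, which is why i40e_txd_enable_checksum() shifts l3_len right by 2
and l4tun_len right by 1 before applying the QW0 shifts.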

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v4 8/8]app/testpmd:test VxLAN Tx checksum offload
  2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
                   ` (6 preceding siblings ...)
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload Jijiang Liu
@ 2014-09-26  2:02 ` Jijiang Liu
  7 siblings, 0 replies; 12+ messages in thread
From: Jijiang Liu @ 2014-09-26  2:02 UTC (permalink / raw)
  To: dev

Add test cases in testpmd to test VxLAN Tx checksum offload, which cover
 - IPv4 and IPv6 tunnels
 - outer L3, inner L3 and inner L4 checksum offload on the Tx side.
 
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>

---
 app/test-pmd/config.c   |    6 +-
 app/test-pmd/csumonly.c |  200 +++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 188 insertions(+), 18 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2a1b93f..9bc08f4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1753,9 +1753,9 @@ tx_cksum_set(portid_t port_id, uint64_t ol_flags)
 	uint64_t tx_ol_flags;
 	if (port_id_is_invalid(port_id))
 		return;
-	/* Clear last 4 bits and then set L3/4 checksum mask again */
-	tx_ol_flags = ports[port_id].tx_ol_flags & (~0x0Full);
-	ports[port_id].tx_ol_flags = ((ol_flags & 0xf) | tx_ol_flags);
+	/* Clear last 8 bits and then set L3/4 checksum mask again */
+	tx_ol_flags = ports[port_id].tx_ol_flags & (~0x0FFull);
+	ports[port_id].tx_ol_flags = ((ol_flags & 0xff) | tx_ol_flags);
 }
 
 void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index fcc4876..4c53042 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -196,7 +196,6 @@ get_ipv6_udptcp_checksum(struct ipv6_hdr *ipv6_hdr, uint16_t *l4_hdr)
 	return (uint16_t)cksum;
 }
 
-
 /*
  * Forwarding of packets. Change the checksum field with HW or SW methods
  * The HW/SW method selection depends on the ol_flags on every packet
@@ -209,10 +208,16 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	struct rte_mbuf  *mb;
 	struct ether_hdr *eth_hdr;
 	struct ipv4_hdr  *ipv4_hdr;
+	struct ether_hdr *inner_eth_hdr;
+	struct ipv4_hdr  *inner_ipv4_hdr = NULL;
 	struct ipv6_hdr  *ipv6_hdr;
+	struct ipv6_hdr  *inner_ipv6_hdr = NULL;
 	struct udp_hdr   *udp_hdr;
+	struct udp_hdr   *inner_udp_hdr;
 	struct tcp_hdr   *tcp_hdr;
+	struct tcp_hdr   *inner_tcp_hdr;
 	struct sctp_hdr  *sctp_hdr;
+	struct sctp_hdr  *inner_sctp_hdr;
 
 	uint16_t nb_rx;
 	uint16_t nb_tx;
@@ -221,12 +226,18 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	uint64_t pkt_ol_flags;
 	uint64_t tx_ol_flags;
 	uint16_t l4_proto;
+	uint16_t inner_l4_proto = 0;
 	uint16_t eth_type;
 	uint8_t  l2_len;
 	uint8_t  l3_len;
+	uint8_t  inner_l2_len;
+	uint8_t  inner_l3_len = 0;
 
 	uint32_t rx_bad_ip_csum;
 	uint32_t rx_bad_l4_csum;
+	uint8_t  ipv4_tunnel;
+	uint8_t  ipv6_tunnel;
+	uint16_t ptype;
 
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
@@ -255,6 +266,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 
 	txp = &ports[fs->tx_port];
 	tx_ol_flags = txp->tx_ol_flags;
+	ptype = mb->reserved;
 
 	for (i = 0; i < nb_rx; i++) {
 
@@ -262,7 +274,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		l2_len  = sizeof(struct ether_hdr);
 		pkt_ol_flags = mb->ol_flags;
 		ol_flags = (pkt_ol_flags & (~PKT_TX_L4_MASK));
-
+		ptype = mb->reserved;
+		ipv4_tunnel = IS_ETH_IPV4_TUNNEL(ptype);
+		ipv6_tunnel = IS_ETH_IPV6_TUNNEL(ptype);
 		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
 		eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 		if (eth_type == ETHER_TYPE_VLAN) {
@@ -295,7 +309,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		 *      + ipv4 or ipv6
 		 *      + udp or tcp or sctp or others
 		 */
-		if (pkt_ol_flags & PKT_RX_IPV4_HDR) {
+		if (pkt_ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV4_HDR_EXT)) {
 
 			/* Do not support ipv4 option field */
 			l3_len = sizeof(struct ipv4_hdr) ;
@@ -325,17 +339,95 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				if (tx_ol_flags & 0x2) {
 					/* HW Offload */
 					ol_flags |= PKT_TX_UDP_CKSUM;
-					/* Pseudo header sum need be set properly */
-					udp_hdr->dgram_cksum = get_ipv4_psd_sum(ipv4_hdr);
+					if (ipv4_tunnel)
+						udp_hdr->dgram_cksum = 0;
+					else
+						/* Pseudo header sum need be set properly */
+						udp_hdr->dgram_cksum =
+							get_ipv4_psd_sum(ipv4_hdr);
 				}
 				else {
 					/* SW Implementation, clear checksum field first */
 					udp_hdr->dgram_cksum = 0;
 					udp_hdr->dgram_cksum = get_ipv4_udptcp_checksum(ipv4_hdr,
-							(uint16_t*)udp_hdr);
+									(uint16_t *)udp_hdr);
 				}
-			}
-			else if (l4_proto == IPPROTO_TCP){
+
+				if (ipv4_tunnel) {
+
+					uint16_t len;
+
+					/* Check if inner L3/L4 checksum flag is set */
+					if (tx_ol_flags & 0xF0)
+						ol_flags |= PKT_TX_VXLAN_CKSUM;
+
+					inner_l2_len  = sizeof(struct ether_hdr);
+					inner_eth_hdr = (struct ether_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + l2_len + l3_len
+								 + ETHER_VXLAN_HLEN);
+
+					eth_type = rte_be_to_cpu_16(inner_eth_hdr->ether_type);
+					if (eth_type == ETHER_TYPE_VLAN) {
+						ol_flags |= PKT_TX_IVLAN_PKT;
+						inner_l2_len += sizeof(struct vlan_hdr);
+						eth_type = rte_be_to_cpu_16(*(uint16_t *)
+							((uintptr_t)&inner_eth_hdr->ether_type +
+								sizeof(struct vlan_hdr)));
+					}
+
+					len = l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len;
+					if (eth_type == ETHER_TYPE_IPv4) {
+						inner_l3_len = sizeof(struct ipv4_hdr);
+						inner_ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv4_hdr->next_proto_id;
+
+						if (tx_ol_flags & 0x10) {
+
+							/* Do not delete, this is required by HW */
+							inner_ipv4_hdr->hdr_checksum = 0;
+							ol_flags |= PKT_TX_IPV4_CSUM;
+						}
+
+					} else if (eth_type == ETHER_TYPE_IPv6) {
+						inner_l3_len = sizeof(struct ipv6_hdr);
+						inner_ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv6_hdr->proto;
+					}
+					if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+
+						/* HW Offload */
+						ol_flags |= PKT_TX_UDP_CKSUM;
+						inner_udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_udp_hdr->dgram_cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+
+					} else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+
+						/* HW Offload */
+						ol_flags |= PKT_TX_TCP_CKSUM;
+						inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_tcp_hdr->cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+					} else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+						/* HW Offload */
+						ol_flags |= PKT_TX_SCTP_CKSUM;
+						inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						inner_sctp_hdr->cksum = 0;
+					}
+
+					mb->reserved = inner_l3_len;
+				}
+
+			} else if (l4_proto == IPPROTO_TCP) {
 				tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
 						unsigned char *) + l2_len + l3_len);
 				if (tx_ol_flags & 0x4) {
@@ -347,8 +439,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					tcp_hdr->cksum = get_ipv4_udptcp_checksum(ipv4_hdr,
 							(uint16_t*)tcp_hdr);
 				}
-			}
-			else if (l4_proto == IPPROTO_SCTP) {
+			} else if (l4_proto == IPPROTO_SCTP) {
 				sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
 						unsigned char *) + l2_len + l3_len);
 
@@ -367,9 +458,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				}
 			}
 			/* End of L4 Handling*/
-		}
-		else if (pkt_ol_flags & PKT_RX_IPV6_HDR) {
-
+		} else if (pkt_ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
 			ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
 					unsigned char *) + l2_len);
 			l3_len = sizeof(struct ipv6_hdr) ;
@@ -382,15 +471,96 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				if (tx_ol_flags & 0x2) {
 					/* HW Offload */
 					ol_flags |= PKT_TX_UDP_CKSUM;
-					udp_hdr->dgram_cksum = get_ipv6_psd_sum(ipv6_hdr);
+					if (ipv6_tunnel)
+						udp_hdr->dgram_cksum = 0;
+					else
+						udp_hdr->dgram_cksum =
+							get_ipv6_psd_sum(ipv6_hdr);
 				}
 				else {
 					/* SW Implementation */
 					/* checksum field need be clear first */
 					udp_hdr->dgram_cksum = 0;
 					udp_hdr->dgram_cksum = get_ipv6_udptcp_checksum(ipv6_hdr,
-							(uint16_t*)udp_hdr);
+								(uint16_t *)udp_hdr);
 				}
+
+				if (ipv6_tunnel) {
+
+					uint16_t len;
+
+					/* Check if inner L3/L4 checksum flag is set */
+					if (tx_ol_flags & 0xF0)
+						ol_flags |= PKT_TX_VXLAN_CKSUM;
+
+					inner_l2_len  = sizeof(struct ether_hdr);
+					inner_eth_hdr = (struct ether_hdr *) (rte_pktmbuf_mtod(mb,
+						unsigned char *) + l2_len + l3_len + ETHER_VXLAN_HLEN);
+					eth_type = rte_be_to_cpu_16(inner_eth_hdr->ether_type);
+
+					if (eth_type == ETHER_TYPE_VLAN) {
+						ol_flags |= PKT_TX_IVLAN_PKT;
+						inner_l2_len += sizeof(struct vlan_hdr);
+						eth_type = rte_be_to_cpu_16(*(uint16_t *)
+							((uintptr_t)&inner_eth_hdr->ether_type +
+							sizeof(struct vlan_hdr)));
+					}
+
+					len = l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len;
+
+					if (eth_type == ETHER_TYPE_IPv4) {
+						inner_l3_len = sizeof(struct ipv4_hdr);
+						inner_ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len);
+						inner_l4_proto = inner_ipv4_hdr->next_proto_id;
+
+						/* HW offload */
+						if (tx_ol_flags & 0x10) {
+
+							/* Do not delete, this is required by HW */
+							inner_ipv4_hdr->hdr_checksum = 0;
+							ol_flags |= PKT_TX_IPV4_CSUM;
+						}
+					} else if (eth_type == ETHER_TYPE_IPv6) {
+						inner_l3_len = sizeof(struct ipv6_hdr);
+						inner_ipv6_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb,
+							unsigned char *) + len);
+						inner_l4_proto = inner_ipv6_hdr->proto;
+					}
+
+					if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+						inner_udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
+							unsigned char *) + len + inner_l3_len);
+						/* HW offload */
+						ol_flags |= PKT_TX_UDP_CKSUM;
+						inner_udp_hdr->dgram_cksum = 0;
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_udp_hdr->dgram_cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+					} else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+						/* HW offload */
+						ol_flags |= PKT_TX_TCP_CKSUM;
+						inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+
+						if (eth_type == ETHER_TYPE_IPv4)
+							inner_tcp_hdr->cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
+						else if (eth_type == ETHER_TYPE_IPv6)
+							inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
+
+					} else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+						/* HW offload */
+						ol_flags |= PKT_TX_SCTP_CKSUM;
+						inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
+								unsigned char *) + len + inner_l3_len);
+						inner_sctp_hdr->cksum = 0;
+					}
+
+					/* pass the inner l3 length to driver */
+					mb->reserved = inner_l3_len;
+				}
+
 			}
 			else if (l4_proto == IPPROTO_TCP) {
 				tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
-- 
1.7.7.6
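
The pointer arithmetic in the two tunnel branches above follows directly from the
encapsulated frame layout: outer Ethernet (l2_len), outer IP (l3_len), outer UDP
plus VxLAN header (ETHER_VXLAN_HLEN), inner Ethernet (inner_l2_len), then the
inner IP header. A small sketch of the same computation factored into a helper;
the helper name is illustrative, the rest comes from the diff:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>

/* Sketch: locate the inner IPv4 header of a VxLAN-encapsulated frame,
 * using the same offset arithmetic as csumonly.c above. ETHER_VXLAN_HLEN
 * (outer UDP header + VxLAN header) comes from the rte_ether.h changes
 * in this series. */
static struct ipv4_hdr *
vxlan_inner_ipv4(struct rte_mbuf *mb, uint8_t l2_len, uint8_t l3_len,
		 uint8_t inner_l2_len)
{
	uint16_t off = l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len;

	return (struct ipv4_hdr *)(rte_pktmbuf_mtod(mb, unsigned char *) + off);
}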

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e Jijiang Liu
@ 2014-09-29 10:48   ` Bruce Richardson
  2014-10-08  3:44     ` Liu, Jijiang
  0 siblings, 1 reply; 12+ messages in thread
From: Bruce Richardson @ 2014-09-29 10:48 UTC (permalink / raw)
  To: Jijiang Liu; +Cc: dev

On Fri, Sep 26, 2014 at 10:02:03AM +0800, Jijiang Liu wrote:
> Support tunneling UDP port configuration on i40e in librte_pmd_i40e.
> Currently, only VxLAN is implemented, which includes
>  -  VxLAN UDP port initialization
>  -  Implement the APIs to configure VxLAN UDP port in librte_pmd_i40e.
>  
> Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> Acked-by: Helin Zhang <helin.zhang@intel.com>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> Acked-by: Jing Chen <jing.d.chen@intel.com>
> 
> ---
>  config/common_linuxapp            |    5 +
>  lib/librte_mbuf/rte_mbuf.h        |    2 +
>  lib/librte_pmd_i40e/i40e_ethdev.c |  200 ++++++++++++++++++++++++++++++++++++-
>  lib/librte_pmd_i40e/i40e_ethdev.h |    5 +
>  lib/librte_pmd_i40e/i40e_rxtx.c   |   10 ++
>  5 files changed, 221 insertions(+), 1 deletions(-)
> 
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 5bee910..75a4cd7 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -212,6 +212,11 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
>  CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
>  
>  #
> +# Compile tunneling UDP port support
> +#
> +CONFIG_RTE_LIBRTE_TUNNEL_UDP_PORT=4789
> +
> +#
>  # Compile burst-oriented VIRTIO PMD driver
>  #
>  CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 1c6e115..4955684 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -538,6 +538,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
>  	m->port = 0xff;
>  
>  	m->ol_flags = 0;
> +	m->reserved = 0;
>  	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>  			RTE_PKTMBUF_HEADROOM : m->buf_len;
>  
> @@ -607,6 +608,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
>  	mi->pkt_len = mi->data_len;
>  	mi->nb_segs = 1;
>  	mi->ol_flags = md->ol_flags;
> +	mi->reserved = md->reserved;
>  
>  	__rte_mbuf_sanity_check(mi, 1);
>  	__rte_mbuf_sanity_check(md, 0);

If the "reserved" field in the mbuf is now being used, it should be renamed 
to what it's actually being used for. If it is still not being used, why this
change?

/Bruce

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload
  2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload Jijiang Liu
@ 2014-09-29 10:50   ` Bruce Richardson
  0 siblings, 0 replies; 12+ messages in thread
From: Bruce Richardson @ 2014-09-29 10:50 UTC (permalink / raw)
  To: Jijiang Liu; +Cc: dev

On Fri, Sep 26, 2014 at 10:02:08AM +0800, Jijiang Liu wrote:
> Support VxLAN Tx checksum offload, which includes
>   - outer L3(IP) checksum offload
>   - inner L3(IP) checksum offload
>   - inner L4(UDP, TCP and SCTP) checksum offload
>  
> Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> Acked-by: Helin Zhang <helin.zhang@intel.com>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> Acked-by: Jing Chen <jing.d.chen@intel.com>
> 
> ---
>  lib/librte_mbuf/rte_mbuf.h        |    2 +
>  lib/librte_pmd_i40e/i40e_ethdev.c |    4 +-
>  lib/librte_pmd_i40e/i40e_rxtx.c   |   47 ++++++++++++++++++++++++++++++++++--
>  3 files changed, 48 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 4955684..1f3f4eb 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -86,6 +86,8 @@ extern "C" {
>  #define PKT_RX_IEEE1588_PTP  0x0200 /**< RX IEEE1588 L2 Ethernet PT Packet. */
>  #define PKT_RX_IEEE1588_TMST 0x0400 /**< RX IEEE1588 L2/L4 timestamped packet.*/
>  
> +#define PKT_TX_VXLAN_CKSUM   0x0001 /**< Checksum of TX VxLAN pkt. computed by NIC. */
> +#define PKT_TX_IVLAN_PKT     0x0002 /**< TX packet is VxLAN packet with an inner VLAN. */
>  #define PKT_TX_VLAN_PKT      0x0800 /**< TX packet is a 802.1q VLAN packet. */
>  #define PKT_TX_IP_CKSUM      0x1000 /**< IP cksum of TX pkt. computed by NIC. */
>  #define PKT_TX_IPV4_CSUM     0x1000 /**< Alias of PKT_TX_IP_CKSUM. */

These flag values overlap with ones already defined for RX. We have an
additional 48 flags (47 after you subtract the one I reused for the control
mbuf flag) following the mbuf rework, so the overlap should not be needed, I think.

/Bruce
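
To make the collision concrete: ol_flags is a single field shared by RX and TX
flags, and the proposed 0x0001/0x0002 values sit in the range the existing RX
flags already occupy. A small self-contained check along the following lines
shows the overlap; ASSUMED_PKT_RX_MASK is an assumption about the pre-rework
layout (RX flags filling the low bits up to PKT_RX_IEEE1588_TMST at 0x0400),
not something defined in the patch:

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Assumed mask of bits already taken by RX flags in this mbuf layout. */
#define ASSUMED_PKT_RX_MASK  0x07ffULL

int main(void)
{
	uint64_t new_tx = PKT_TX_VXLAN_CKSUM | PKT_TX_IVLAN_PKT;

	/* With the proposed values (0x0001 | 0x0002) this prints non-zero
	 * overlap bits, which is the collision being pointed out. */
	printf("overlap bits: 0x%llx\n",
	       (unsigned long long)(new_tx & ASSUMED_PKT_RX_MASK));
	return 0;
}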

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e
  2014-09-29 10:48   ` Bruce Richardson
@ 2014-10-08  3:44     ` Liu, Jijiang
  0 siblings, 0 replies; 12+ messages in thread
From: Liu, Jijiang @ 2014-10-08  3:44 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev

Hi Bruce,

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 29, 2014 6:48 PM
> To: Liu, Jijiang
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet
> identification in librte_pmd_i40e
> 
> On Fri, Sep 26, 2014 at 10:02:03AM +0800, Jijiang Liu wrote:
> > Support tunneling UDP port configuration on i40e in librte_pmd_i40e.
> > Currently, only VxLAN is implemented, which includes
> >  -  VxLAN UDP port initialization
> >  -  Implement the APIs to configure VxLAN UDP port in librte_pmd_i40e.
> >
> > Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> > Acked-by: Helin Zhang <helin.zhang@intel.com>
> > Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> > Acked-by: Jing Chen <jing.d.chen@intel.com>
> >
> > ---
> >  config/common_linuxapp            |    5 +
> >  lib/librte_mbuf/rte_mbuf.h        |    2 +
> >  lib/librte_pmd_i40e/i40e_ethdev.c |  200
> ++++++++++++++++++++++++++++++++++++-
> >  lib/librte_pmd_i40e/i40e_ethdev.h |    5 +
> >  lib/librte_pmd_i40e/i40e_rxtx.c   |   10 ++
> >  5 files changed, 221 insertions(+), 1 deletions(-)
> >
> > diff --git a/config/common_linuxapp b/config/common_linuxapp index
> > 5bee910..75a4cd7 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -212,6 +212,11 @@
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
> >  CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
> >
> >  #
> > +# Compile tunneling UDP port support
> > +#
> > +CONFIG_RTE_LIBRTE_TUNNEL_UDP_PORT=4789
> > +
> > +#
> >  # Compile burst-oriented VIRTIO PMD driver  #
> > CONFIG_RTE_LIBRTE_VIRTIO_PMD=y diff --git a/lib/librte_mbuf/rte_mbuf.h
> > b/lib/librte_mbuf/rte_mbuf.h index 1c6e115..4955684 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -538,6 +538,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf
> *m)
> >  	m->port = 0xff;
> >
> >  	m->ol_flags = 0;
> > +	m->reserved = 0;
> >  	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> >  			RTE_PKTMBUF_HEADROOM : m->buf_len;
> >
> > @@ -607,6 +608,7 @@ static inline void rte_pktmbuf_attach(struct
> rte_mbuf *mi, struct rte_mbuf *md)
> >  	mi->pkt_len = mi->data_len;
> >  	mi->nb_segs = 1;
> >  	mi->ol_flags = md->ol_flags;
> > +	mi->reserved = md->reserved;
> >
> >  	__rte_mbuf_sanity_check(mi, 1);
> >  	__rte_mbuf_sanity_check(md, 0);
> 
> If the "reserved" field in the mbuf is now being used, it should be renamed to
> what it's actually being used for. If it is still not being used, why this change?
> 
> /Bruce

As I said in the cover letter, there will be minor changes once your Mbuf Structure Rework (part 3) is applied.
I will not keep the current changes in the mbuf part; instead, I will rename the "reserved2" field to "packet_type" in the rte_mbuf structure.
Tunneling applications need to know the packet type of an incoming packet.

/Jijiang Liu
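
For context, the consumer side referred to here is already visible in the testpmd
changes in this series: the PMD stores the packet type in the mbuf (today the
reserved field; a packet_type field once the rework lands), and the application
branches on it. A hedged sketch using the macro names from the csumonly.c patch;
where those macros live in the testpmd headers is assumed, not shown here:

#include <rte_mbuf.h>

/* Sketch: classify a received mbuf as a VxLAN tunnel packet.
 * IS_ETH_IPV4_TUNNEL()/IS_ETH_IPV6_TUNNEL() are the macros used in the
 * testpmd patch; mb->reserved is the field planned to be renamed to a
 * packet_type field after the mbuf rework. */
static int
is_vxlan_tunnel_pkt(const struct rte_mbuf *mb)
{
	uint16_t ptype = mb->reserved;

	return IS_ETH_IPV4_TUNNEL(ptype) || IS_ETH_IPV6_TUNNEL(ptype);
}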

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2014-10-08  3:36 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-26  2:02 [dpdk-dev] [PATCH v4 0/8]Support VxLAN on Fortville Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 1/8]i40e:support VxLAN packet identification in librte_ether Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 2/8]i40e:support VxLAN packet identification in librte_pmd_i40e Jijiang Liu
2014-09-29 10:48   ` Bruce Richardson
2014-10-08  3:44     ` Liu, Jijiang
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 3/8]app/test-pmd:test VxLAN packet identification Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 4/8]librte_ether:add a common filter API Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 5/8]i40e:implement API of VxLAN packet filter in librte_pmd_i40e Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 6/8]app/testpmd:test VxLAN packet filter API Jijiang Liu
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 7/8]i40e:support VxLAN Tx checksum offload Jijiang Liu
2014-09-29 10:50   ` Bruce Richardson
2014-09-26  2:02 ` [dpdk-dev] [PATCH v4 8/8]app/testpmd:test " Jijiang Liu
