From: Wenzhuo Lu <wenzhuo.lu@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v6 3/5] ixgbe: support l2 tunnel operations
Date: Tue,  8 Mar 2016 14:53:35 +0800	[thread overview]
Message-ID: <1457420017-15345-4-git-send-email-wenzhuo.lu@intel.com> (raw)
In-Reply-To: <1457420017-15345-1-git-send-email-wenzhuo.lu@intel.com>

Add support for l2 tunnel configuration and operations.
1. Support modifying the ether type of an l2 tunnel type.
2. Support enabling and disabling an l2 tunnel type.
3. Support enabling/disabling l2 tunnel tag insertion/stripping.
4. Support enabling/disabling l2 tunnel packet forwarding.
5. Support adding/deleting forwarding rules for l2 tunnel packets.
Only E-tag is supported now.
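
As a usage sketch (not part of this patch): an application drives these
operations through the library-level entry points added in patch 2/5 of this
series. The wrapper names and signatures below are assumptions made for
illustration only; the struct fields and mask names match what this patch
consumes in the PMD callbacks.

/*
 * Usage sketch only -- the rte_eth_dev_l2_tunnel_* wrappers and their
 * signatures are assumed from patch 2/5 of this series; check that patch
 * for the real prototypes.
 */
#include <rte_ethdev.h>

static int
configure_e_tag(uint8_t port_id, uint16_t vf_id, uint16_t pool,
		uint32_t grp_e_cid_base)
{
	struct rte_eth_l2_tunnel l2_tunnel = {
		.l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG,
		.ether_type = 0x893f,          /* default E-tag ether type */
		.tunnel_id = grp_e_cid_base,   /* GRP + E-CID_base */
		.vf_id = vf_id,
		.pool = pool,
	};
	int ret;

	/* Program the E-tag ether type (maps to l2_tunnel_eth_type_conf). */
	ret = rte_eth_dev_l2_tunnel_eth_type_conf(port_id, &l2_tunnel);
	if (ret != 0)
		return ret;

	/* Enable the tunnel plus forwarding, insertion and stripping
	 * offloads (maps to l2_tunnel_offload_set).
	 */
	ret = rte_eth_dev_l2_tunnel_offload_set(port_id, &l2_tunnel,
			ETH_L2_TUNNEL_ENABLE_MASK |
			ETH_L2_TUNNEL_FORWARDING_MASK |
			ETH_L2_TUNNEL_INSERTION_MASK |
			ETH_L2_TUNNEL_STRIPPING_MASK,
			1);
	if (ret != 0)
		return ret;

	/* Forward E-tag packets matching tunnel_id to the given pool
	 * (maps to l2_tunnel_filter_add).
	 */
	return rte_eth_dev_l2_tunnel_filter_add(port_id, &l2_tunnel);
}

Note that, per the release note below, all of these calls are made on the PF
port; the insertion and stripping offloads take effect on the VF traffic.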

Also update the release note.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 doc/guides/rel_notes/release_16_04.rst |  21 ++
 drivers/net/ixgbe/ixgbe_ethdev.c       | 538 +++++++++++++++++++++++++++++++++
 2 files changed, 559 insertions(+)

diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
index eb1b3b2..994da33 100644
--- a/doc/guides/rel_notes/release_16_04.rst
+++ b/doc/guides/rel_notes/release_16_04.rst
@@ -44,6 +44,27 @@ This section should contain new features added in this release. Sample format:
   Add the offload and negotiation of checksum and TSO between vhost-user and
   vanilla Linux virtio guest.
 
+* **Added support for E-tag on X550.**
+
+  E-tag is defined in IEEE 802.1BR. Please refer to
+  http://www.ieee802.org/1/pages/802.1br.html.
+
+  This feature is intended for VFs, but please be aware that all configuration
+  is done on the PF. That is, the CLIs are used on the PF, but some of their
+  effects are visible on the VFs. Forwarding of E-tag packets based on GRP and
+  E-CID_base takes effect on the PF. Theoretically E-tag packets can be
+  forwarded to any pool/queue, but normally the packets are forwarded to the
+  pools/queues belonging to the VFs. E-tag insertion and stripping take effect
+  on the VFs: when a VF receives E-tag packets it should strip the E-tag, and
+  when it transmits packets it should insert the E-tag. Both can be offloaded.
+
+  To use this feature, forwarding should be enabled so that packets received
+  by the PF are forwarded to the indicated VFs, and insertion and stripping
+  should be enabled for the VFs to offload this work to the hardware.
+
+  * Support E-tag offloading of insertion and stripping.
+  * Support forwarding E-tag packets to pools based on
+    GRP and E-CID_base.
 
 Resolved Issues
 ---------------
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b99e48e..b3299e6 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -139,6 +139,17 @@
 #define IXGBE_CYCLECOUNTER_MASK   0xffffffffffffffffULL
 
 #define IXGBE_VT_CTL_POOLING_MODE_MASK         0x00030000
+#define IXGBE_VT_CTL_POOLING_MODE_ETAG         0x00010000
+#define DEFAULT_ETAG_ETYPE                     0x893f
+#define IXGBE_ETAG_ETYPE                       0x00005084
+#define IXGBE_ETAG_ETYPE_MASK                  0x0000ffff
+#define IXGBE_ETAG_ETYPE_VALID                 0x80000000
+#define IXGBE_RAH_ADTYPE                       0x40000000
+#define IXGBE_RAL_ETAG_FILTER_MASK             0x00003fff
+#define IXGBE_VMVIR_TAGA_MASK                  0x18000000
+#define IXGBE_VMVIR_TAGA_ETAG_INSERT           0x08000000
+#define IXGBE_VMTIR(_i) (0x00017000 + ((_i) * 4)) /* 64 of these (0-63) */
+#define IXGBE_QDE_STRIP_TAG                    0x00000004
 
 static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -339,6 +350,19 @@ static int ixgbe_timesync_read_time(struct rte_eth_dev *dev,
 				   struct timespec *timestamp);
 static int ixgbe_timesync_write_time(struct rte_eth_dev *dev,
 				   const struct timespec *timestamp);
+static int ixgbe_dev_l2_tunnel_eth_type_conf
+	(struct rte_eth_dev *dev, struct rte_eth_l2_tunnel *l2_tunnel);
+static int ixgbe_dev_l2_tunnel_offload_set
+	(struct rte_eth_dev *dev,
+	 struct rte_eth_l2_tunnel *l2_tunnel,
+	 uint32_t mask,
+	 uint8_t en);
+static int ixgbe_dev_l2_tunnel_filter_add
+	(struct rte_eth_dev *dev,
+	 struct rte_eth_l2_tunnel *l2_tunnel);
+static int ixgbe_dev_l2_tunnel_filter_del
+	(struct rte_eth_dev *dev,
+	 struct rte_eth_l2_tunnel *l2_tunnel);
 
 /*
  * Define VF Stats MACRO for Non "cleared on read" register
@@ -497,6 +521,10 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.timesync_adjust_time = ixgbe_timesync_adjust_time,
 	.timesync_read_time   = ixgbe_timesync_read_time,
 	.timesync_write_time  = ixgbe_timesync_write_time,
+	.l2_tunnel_eth_type_conf = ixgbe_dev_l2_tunnel_eth_type_conf,
+	.l2_tunnel_offload_set   = ixgbe_dev_l2_tunnel_offload_set,
+	.l2_tunnel_filter_add    = ixgbe_dev_l2_tunnel_filter_add,
+	.l2_tunnel_filter_del    = ixgbe_dev_l2_tunnel_filter_del,
 };
 
 /*
@@ -6201,6 +6229,516 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Update e-tag ether type */
+static int
+ixgbe_update_e_tag_eth_type(struct ixgbe_hw *hw,
+			    uint16_t ether_type)
+{
+	uint32_t etag_etype;
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	etag_etype = IXGBE_READ_REG(hw, IXGBE_ETAG_ETYPE);
+	etag_etype &= ~IXGBE_ETAG_ETYPE_MASK;
+	etag_etype |= ether_type;
+	IXGBE_WRITE_REG(hw, IXGBE_ETAG_ETYPE, etag_etype);
+	IXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+/* Config l2 tunnel ether type */
+static int
+ixgbe_dev_l2_tunnel_eth_type_conf(struct rte_eth_dev *dev,
+				  struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tunnel == NULL)
+		return -EINVAL;
+
+	switch (l2_tunnel->l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_update_e_tag_eth_type(hw, l2_tunnel->ether_type);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Enable e-tag tunnel */
+static int
+ixgbe_e_tag_enable(struct ixgbe_hw *hw)
+{
+	uint32_t etag_etype;
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	etag_etype = IXGBE_READ_REG(hw, IXGBE_ETAG_ETYPE);
+	etag_etype |= IXGBE_ETAG_ETYPE_VALID;
+	IXGBE_WRITE_REG(hw, IXGBE_ETAG_ETYPE, etag_etype);
+	IXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+/* Enable l2 tunnel */
+static int
+ixgbe_dev_l2_tunnel_enable(struct rte_eth_dev *dev,
+			   enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_enable(hw);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Disable e-tag tunnel */
+static int
+ixgbe_e_tag_disable(struct ixgbe_hw *hw)
+{
+	uint32_t etag_etype;
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	etag_etype = IXGBE_READ_REG(hw, IXGBE_ETAG_ETYPE);
+	etag_etype &= ~IXGBE_ETAG_ETYPE_VALID;
+	IXGBE_WRITE_REG(hw, IXGBE_ETAG_ETYPE, etag_etype);
+	IXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+/* Disable l2 tunnel */
+static int
+ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
+			    enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_disable(hw);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
+		       struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t i, rar_entries;
+	uint32_t rar_low, rar_high;
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	rar_entries = ixgbe_get_num_rx_addrs(hw);
+
+	for (i = 1; i < rar_entries; i++) {
+		rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(i));
+		rar_low  = IXGBE_READ_REG(hw, IXGBE_RAL(i));
+		if ((rar_high & IXGBE_RAH_AV) &&
+		    (rar_high & IXGBE_RAH_ADTYPE) &&
+		    ((rar_low & IXGBE_RAL_ETAG_FILTER_MASK) ==
+		     l2_tunnel->tunnel_id)) {
+			IXGBE_WRITE_REG(hw, IXGBE_RAL(i), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_RAH(i), 0);
+
+			ixgbe_clear_vmdq(hw, i, IXGBE_CLEAR_VMDQ_ALL);
+
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
+static int
+ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
+		       struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t i, rar_entries;
+	uint32_t rar_low, rar_high;
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	/* One entry for one tunnel. Try to remove potential existing entry. */
+	ixgbe_e_tag_filter_del(dev, l2_tunnel);
+
+	rar_entries = ixgbe_get_num_rx_addrs(hw);
+
+	for (i = 1; i < rar_entries; i++) {
+		rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(i));
+		if (rar_high & IXGBE_RAH_AV) {
+			continue;
+		} else {
+			ixgbe_set_vmdq(hw, i, l2_tunnel->pool);
+			rar_high = IXGBE_RAH_AV | IXGBE_RAH_ADTYPE;
+			rar_low = l2_tunnel->tunnel_id;
+
+			IXGBE_WRITE_REG(hw, IXGBE_RAL(i), rar_low);
+			IXGBE_WRITE_REG(hw, IXGBE_RAH(i), rar_high);
+
+			return ret;
+		}
+	}
+
+	PMD_INIT_LOG(NOTICE, "The table of E-tag forwarding rule is full."
+		     " Please remove a rule before adding a new one.");
+	return -EINVAL;
+}
+
+/* Add l2 tunnel filter */
+static int
+ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+
+	switch (l2_tunnel->l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Delete l2 tunnel filter */
+static int
+ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+			       struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+
+	switch (l2_tunnel->l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+ixgbe_e_tag_forwarding_en_dis(struct rte_eth_dev *dev, bool en)
+{
+	int ret = 0;
+	uint32_t ctrl;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	ctrl = IXGBE_READ_REG(hw, IXGBE_VT_CTL);
+	ctrl &= ~IXGBE_VT_CTL_POOLING_MODE_MASK;
+	if (en)
+		ctrl |= IXGBE_VT_CTL_POOLING_MODE_ETAG;
+	IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, ctrl);
+
+	return ret;
+}
+
+/* Enable l2 tunnel forwarding */
+static int
+ixgbe_dev_l2_tunnel_forwarding_enable
+	(struct rte_eth_dev *dev,
+	 enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Disable l2 tunnel forwarding */
+static int
+ixgbe_dev_l2_tunnel_forwarding_disable
+	(struct rte_eth_dev *dev,
+	 enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+ixgbe_e_tag_insertion_en_dis(struct rte_eth_dev *dev,
+			     struct rte_eth_l2_tunnel *l2_tunnel,
+			     bool en)
+{
+	int ret = 0;
+	uint32_t vmtir, vmvir;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (l2_tunnel->vf_id >= dev->pci_dev->max_vfs) {
+		PMD_DRV_LOG(ERR,
+			    "VF id %u should be less than %u",
+			    l2_tunnel->vf_id,
+			    dev->pci_dev->max_vfs);
+		return -EINVAL;
+	}
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	if (en)
+		vmtir = l2_tunnel->tunnel_id;
+	else
+		vmtir = 0;
+
+	IXGBE_WRITE_REG(hw, IXGBE_VMTIR(l2_tunnel->vf_id), vmtir);
+
+	vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(l2_tunnel->vf_id));
+	vmvir &= ~IXGBE_VMVIR_TAGA_MASK;
+	if (en)
+		vmvir |= IXGBE_VMVIR_TAGA_ETAG_INSERT;
+	IXGBE_WRITE_REG(hw, IXGBE_VMVIR(l2_tunnel->vf_id), vmvir);
+
+	return ret;
+}
+
+/* Enable l2 tunnel tag insertion */
+static int
+ixgbe_dev_l2_tunnel_insertion_enable(struct rte_eth_dev *dev,
+				     struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+
+	switch (l2_tunnel->l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_insertion_en_dis(dev, l2_tunnel, 1);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Disable l2 tunnel tag insertion */
+static int
+ixgbe_dev_l2_tunnel_insertion_disable
+	(struct rte_eth_dev *dev,
+	 struct rte_eth_l2_tunnel *l2_tunnel)
+{
+	int ret = 0;
+
+	switch (l2_tunnel->l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_insertion_en_dis(dev, l2_tunnel, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+ixgbe_e_tag_stripping_en_dis(struct rte_eth_dev *dev,
+			     bool en)
+{
+	int ret = 0;
+	uint32_t qde;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (hw->mac.type != ixgbe_mac_X550 &&
+	    hw->mac.type != ixgbe_mac_X550EM_x) {
+		return -ENOTSUP;
+	}
+
+	qde = IXGBE_READ_REG(hw, IXGBE_QDE);
+	if (en)
+		qde |= IXGBE_QDE_STRIP_TAG;
+	else
+		qde &= ~IXGBE_QDE_STRIP_TAG;
+	qde &= ~IXGBE_QDE_READ;
+	qde |= IXGBE_QDE_WRITE;
+	IXGBE_WRITE_REG(hw, IXGBE_QDE, qde);
+
+	return ret;
+}
+
+/* Enable l2 tunnel tag stripping */
+static int
+ixgbe_dev_l2_tunnel_stripping_enable
+	(struct rte_eth_dev *dev,
+	 enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_stripping_en_dis(dev, 1);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Disable l2 tunnel tag stripping */
+static int
+ixgbe_dev_l2_tunnel_stripping_disable
+	(struct rte_eth_dev *dev,
+	 enum rte_eth_l2_tunnel_type l2_tunnel_type)
+{
+	int ret = 0;
+
+	switch (l2_tunnel_type) {
+	case RTE_L2_TUNNEL_TYPE_E_TAG:
+		ret = ixgbe_e_tag_stripping_en_dis(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Invalid tunnel type");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Enable/disable l2 tunnel offload functions */
+static int
+ixgbe_dev_l2_tunnel_offload_set
+	(struct rte_eth_dev *dev,
+	 struct rte_eth_l2_tunnel *l2_tunnel,
+	 uint32_t mask,
+	 uint8_t en)
+{
+	int ret = 0;
+
+	if (l2_tunnel == NULL)
+		return -EINVAL;
+
+	ret = -EINVAL;
+	if (mask & ETH_L2_TUNNEL_ENABLE_MASK) {
+		if (en)
+			ret = ixgbe_dev_l2_tunnel_enable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+		else
+			ret = ixgbe_dev_l2_tunnel_disable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+	}
+
+	if (mask & ETH_L2_TUNNEL_INSERTION_MASK) {
+		if (en)
+			ret = ixgbe_dev_l2_tunnel_insertion_enable(
+				dev,
+				l2_tunnel);
+		else
+			ret = ixgbe_dev_l2_tunnel_insertion_disable(
+				dev,
+				l2_tunnel);
+	}
+
+	if (mask & ETH_L2_TUNNEL_STRIPPING_MASK) {
+		if (en)
+			ret = ixgbe_dev_l2_tunnel_stripping_enable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+		else
+			ret = ixgbe_dev_l2_tunnel_stripping_disable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+	}
+
+	if (mask & ETH_L2_TUNNEL_FORWARDING_MASK) {
+		if (en)
+			ret = ixgbe_dev_l2_tunnel_forwarding_enable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+		else
+			ret = ixgbe_dev_l2_tunnel_forwarding_disable(
+				dev,
+				l2_tunnel->l2_tunnel_type);
+	}
+
+	return ret;
+}
+
 static struct rte_driver rte_ixgbe_driver = {
 	.type = PMD_PDEV,
 	.init = rte_ixgbe_pmd_init,
-- 
1.9.3

Thread overview: 95+ messages
2016-01-29  7:03 [dpdk-dev] [PATCH 0/8] support E-tag offloading and forwarding on Intel X550 NIC Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 1/8] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 2/8] lib/librte_ether: support l2 tunnel config Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 3/8] ixgbe: " Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 4/8] app/testpmd: add CLIs for " Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 5/8] lib/librte_ether: support new l2 tunnel operation Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 6/8] ixgbe: support " Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 7/8] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-01-29  7:03 ` [dpdk-dev] [PATCH 8/8] doc: add release note for E-tag Wenzhuo Lu
2016-02-01 16:15   ` Mcnamara, John
2016-01-29  7:16 ` [dpdk-dev] [PATCH 0/8] support E-tag offloading and forwarding on Intel X550 NIC Qiu, Michael
2016-02-01  1:04   ` Lu, Wenzhuo
2016-02-01  1:39     ` Yuanhan Liu
2016-02-01  1:56       ` Lu, Wenzhuo
2016-02-01  2:06         ` Yuanhan Liu
2016-02-01  3:00           ` Lu, Wenzhuo
2016-02-01  8:31       ` Qiu, Michael
2016-02-02  1:24         ` Lu, Wenzhuo
2016-02-02  6:56 ` [dpdk-dev] [PATCH v2 0/7] " Wenzhuo Lu
2016-02-02  6:56   ` [dpdk-dev] [PATCH v2 1/7] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 2/7] lib/librte_ether: support l2 tunnel config Wenzhuo Lu
2016-02-02 12:03     ` Bruce Richardson
2016-02-03  1:05       ` Lu, Wenzhuo
2016-02-03  3:36     ` Stephen Hemminger
2016-02-03  8:08       ` Lu, Wenzhuo
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 3/7] ixgbe: " Wenzhuo Lu
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 4/7] app/testpmd: add CLIs for " Wenzhuo Lu
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 5/7] lib/librte_ether: support new l2 tunnel operation Wenzhuo Lu
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 6/7] ixgbe: support " Wenzhuo Lu
2016-02-02  6:57   ` [dpdk-dev] [PATCH v2 7/7] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-02-12 13:50   ` [dpdk-dev] [PATCH v2 0/7] support E-tag offloading and forwarding on Intel X550 NIC De Lara Guarch, Pablo
2016-02-15  1:21     ` Lu, Wenzhuo
2016-02-15  9:39       ` De Lara Guarch, Pablo
2016-02-16  8:20 ` [dpdk-dev] [PATCH v3 " Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 1/7] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 2/7] lib/librte_ether: support l2 tunnel config Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 3/7] ixgbe: " Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 4/7] app/testpmd: add CLIs for " Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 5/7] lib/librte_ether: support new l2 tunnel operation Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 6/7] ixgbe: support " Wenzhuo Lu
2016-02-16  8:20   ` [dpdk-dev] [PATCH v3 7/7] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-02-18  2:46 ` [dpdk-dev] [PATCH v4 0/7] support E-tag offloading and forwarding on Intel X550 NIC Wenzhuo Lu
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 1/7] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 2/7] lib/librte_ether: support l2 tunnel config Wenzhuo Lu
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 3/7] ixgbe: " Wenzhuo Lu
2016-03-04  1:47     ` He, Shaopeng
2016-03-04  3:17       ` Lu, Wenzhuo
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 4/7] app/testpmd: add CLIs for " Wenzhuo Lu
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 5/7] lib/librte_ether: support new l2 tunnel operation Wenzhuo Lu
2016-03-04  1:47     ` He, Shaopeng
2016-03-04  3:31       ` Lu, Wenzhuo
2016-03-07  2:04         ` He, Shaopeng
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 6/7] ixgbe: support " Wenzhuo Lu
2016-03-04  1:46     ` He, Shaopeng
2016-03-04  3:15       ` Lu, Wenzhuo
2016-02-18  2:46   ` [dpdk-dev] [PATCH v4 7/7] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-03-04  1:46     ` He, Shaopeng
2016-03-04  3:11       ` Lu, Wenzhuo
2016-03-04  9:23   ` [dpdk-dev] [PATCH v4 0/7] support E-tag offloading and forwarding on Intel X550 NIC Liu, Yong
2016-03-07  2:42 ` [dpdk-dev] [PATCH v5 0/7] support E-tag offloading and forwarding on X550 Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 1/7] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 2/7] lib/librte_ether: support l2 tunnel config Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 3/7] ixgbe: " Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 4/7] app/testpmd: add CLIs for " Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 5/7] lib/librte_ether: support new l2 tunnel operation Wenzhuo Lu
2016-03-07  3:29     ` Wu, Jingjing
2016-03-07  5:29       ` Lu, Wenzhuo
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 6/7] ixgbe: support " Wenzhuo Lu
2016-03-07  2:42   ` [dpdk-dev] [PATCH v5 7/7] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-03-08  6:53 ` [dpdk-dev] [PATCH v6 0/5] support E-tag offloading and forwarding on X550 Wenzhuo Lu
2016-03-08  6:53   ` [dpdk-dev] [PATCH v6 1/5] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-03-08  6:53   ` [dpdk-dev] [PATCH v6 2/5] lib/librte_ether: support l2 tunnel operations Wenzhuo Lu
2016-03-09  0:14     ` Thomas Monjalon
2016-03-09  1:15       ` Lu, Wenzhuo
2016-03-09  9:27         ` Thomas Monjalon
2016-03-10  0:54           ` Lu, Wenzhuo
2016-03-08  6:53   ` Wenzhuo Lu [this message]
2016-03-08  6:53   ` [dpdk-dev] [PATCH v6 4/5] app/testpmd: add CLIs for l2 tunnel config Wenzhuo Lu
2016-03-08  6:53   ` [dpdk-dev] [PATCH v6 5/5] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-03-08  8:08   ` [dpdk-dev] [PATCH v6 0/5] support E-tag offloading and forwarding on X550 Wu, Jingjing
2016-03-09  7:44 ` [dpdk-dev] [PATCH v7 " Wenzhuo Lu
2016-03-09  7:44   ` [dpdk-dev] [PATCH v7 1/5] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-03-09  7:44   ` [dpdk-dev] [PATCH v7 2/5] lib/librte_ether: support l2 tunnel operations Wenzhuo Lu
2016-03-09  7:44   ` [dpdk-dev] [PATCH v7 3/5] ixgbe: " Wenzhuo Lu
2016-03-09  7:44   ` [dpdk-dev] [PATCH v7 4/5] app/testpmd: add CLIs for l2 tunnel config Wenzhuo Lu
2016-03-09  7:44   ` [dpdk-dev] [PATCH v7 5/5] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-03-09 10:07   ` [dpdk-dev] [PATCH v7 0/5] support E-tag offloading and forwarding on X550 Thomas Monjalon
2016-03-10  0:44     ` Lu, Wenzhuo
2016-03-11  1:10 ` [dpdk-dev] [PATCH v8 " Wenzhuo Lu
2016-03-11  1:10   ` [dpdk-dev] [PATCH v8 1/5] ixgbe: select pool by MAC when using double VLAN Wenzhuo Lu
2016-03-11  1:10   ` [dpdk-dev] [PATCH v8 2/5] lib/librte_ether: support l2 tunnel operations Wenzhuo Lu
2016-03-11  1:10   ` [dpdk-dev] [PATCH v8 3/5] ixgbe: " Wenzhuo Lu
2016-03-11  1:10   ` [dpdk-dev] [PATCH v8 4/5] app/testpmd: add CLIs for l2 tunnel config Wenzhuo Lu
2016-03-11  1:10   ` [dpdk-dev] [PATCH v8 5/5] app/testpmd: add CLIs for E-tag operation Wenzhuo Lu
2016-03-11 22:27   ` [dpdk-dev] [PATCH v8 0/5] support E-tag offloading and forwarding on X550 Thomas Monjalon
