From mboxrd@z Thu Jan 1 00:00:00 1970
From: "He, Shaopeng"
To: "Lu, Wenzhuo" , "dev@dpdk.org"
Thread-Topic: [dpdk-dev] [PATCH v4 6/7] ixgbe: support l2 tunnel operation
Date: Fri, 4 Mar 2016 01:46:42 +0000
Message-ID: 
References: <1454051035-25757-1-git-send-email-wenzhuo.lu@intel.com>
 <1455763573-2867-1-git-send-email-wenzhuo.lu@intel.com>
 <1455763573-2867-7-git-send-email-wenzhuo.lu@intel.com>
In-Reply-To: <1455763573-2867-7-git-send-email-wenzhuo.lu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 6/7] ixgbe: support l2 tunnel operation
List-Id: patches and discussions about DPDK 
Hi Wenzhuo,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Thursday, February 18, 2016 10:46 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 6/7] ixgbe: support l2 tunnel operation
>
> Add support of l2 tunnel operation.
> Support enabling/disabling l2 tunnel tag insertion/stripping.
> Support enabling/disabling l2 tunnel packets forwarding.
> Support adding/deleting forwarding rules for l2 tunnel packets.
> Only support E-tag now.
>
> Also update the release note.
>
> Signed-off-by: Wenzhuo Lu
> ---
>  doc/guides/rel_notes/release_16_04.rst |  21 ++
>  drivers/net/ixgbe/ixgbe_ethdev.c       | 371 +++++++++++++++++++++++++++++++++
>  2 files changed, 392 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
> index eb1b3b2..994da33 100644
> --- a/doc/guides/rel_notes/release_16_04.rst
> +++ b/doc/guides/rel_notes/release_16_04.rst
> @@ -44,6 +44,27 @@ This section should contain new features added in this release. Sample format:
>    Add the offload and negotiation of checksum and TSO between vhost-user and
>    vanilla Linux virtio guest.
>
> +* **Added support for E-tag on X550.**
> +
> +  E-tag is defined in IEEE 802.1BR. Please refer to
> +  http://www.ieee802.org/1/pages/802.1br.html.
> +
> +  This feature is for VFs, but please be aware that all the settings are on
> +  the PF. This means the CLIs should be used on the PF, but some of their
> +  effects will be shown on the VF. The forwarding of E-tag packets based on
> +  GRP and E-CID_base takes effect on the PF. Theoretically, E-tag packets
> +  can be forwarded to any pool/queue, but normally we'd like to forward the
> +  packets to the pools/queues belonging to the VFs. E-tag insertion and
> +  stripping take effect on the VFs: when a VF receives E-tag packets it
> +  should strip the E-tag, and when a VF transmits packets it should insert
> +  the E-tag. Both can be offloaded.
> +
> +  To use this E-tag support, forwarding should be enabled so that packets
> +  received by the PF are forwarded to the indicated VFs, and insertion and
> +  stripping should be enabled for the VFs to offload the effort to HW.
> +
> +  * Support E-tag offloading of insertion and stripping.
> +  * Support forwarding E-tag packets to pools based on GRP and E-CID_base.
>
>  Resolved Issues
>  ---------------
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index b15a4b6..aa00842 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -139,10 +139,17 @@
>  #define IXGBE_CYCLECOUNTER_MASK 0xffffffffffffffffULL
>
>  #define IXGBE_VT_CTL_POOLING_MODE_MASK 0x00030000
> +#define IXGBE_VT_CTL_POOLING_MODE_ETAG 0x00010000
>  #define DEFAULT_ETAG_ETYPE             0x893f
>  #define IXGBE_ETAG_ETYPE               0x00005084
>  #define IXGBE_ETAG_ETYPE_MASK          0x0000ffff
>  #define IXGBE_ETAG_ETYPE_VALID         0x80000000
> +#define IXGBE_RAH_ADTYPE               0x40000000
> +#define IXGBE_RAL_ETAG_FILTER_MASK     0x00003fff
> +#define IXGBE_VMVIR_TAGA_MASK          0x18000000
> +#define IXGBE_VMVIR_TAGA_ETAG_INSERT   0x08000000
> +#define IXGBE_VMTIR(_i) (0x00017000 + ((_i) * 4)) /* 64 of these (0-63) */
> +#define IXGBE_QDE_STRIP_TAG            0x00000004
>
>  static int eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev);
>  static int eth_ixgbe_dev_uninit(struct rte_eth_dev *eth_dev);
> @@ -351,6 +358,33 @@ static int ixgbe_dev_l2_tunnel_enable
>  static int ixgbe_dev_l2_tunnel_disable
>  			(struct rte_eth_dev *dev,
>  			 enum rte_eth_l2_tunnel_type l2_tunnel_type);
> +static int ixgbe_dev_l2_tunnel_insertion_enable
> +			(struct rte_eth_dev *dev,
> +			 struct rte_eth_l2_tunnel *l2_tunnel,
> +			 uint16_t vf_id);
> +static int ixgbe_dev_l2_tunnel_insertion_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type,
> +			 uint16_t vf_id);
> +static int ixgbe_dev_l2_tunnel_stripping_enable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type);
> +static int ixgbe_dev_l2_tunnel_stripping_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type);
> +static int ixgbe_dev_l2_tunnel_forwarding_enable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type);
> +static int ixgbe_dev_l2_tunnel_forwarding_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type);
> +static int ixgbe_dev_l2_tunnel_filter_add
> +			(struct rte_eth_dev *dev,
> +			 struct rte_eth_l2_tunnel *l2_tunnel,
> +			 uint32_t pool);
> +static int ixgbe_dev_l2_tunnel_filter_del
> +			(struct rte_eth_dev *dev,
> +			 struct rte_eth_l2_tunnel *l2_tunnel);
>
>  /*
>   * Define VF Stats MACRO for Non "cleared on read" register
> @@ -512,6 +546,14 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.l2_tunnel_eth_type_conf = ixgbe_dev_l2_tunnel_eth_type_conf,
>  	.l2_tunnel_enable        = ixgbe_dev_l2_tunnel_enable,
>  	.l2_tunnel_disable       = ixgbe_dev_l2_tunnel_disable,
> +	.l2_tunnel_insertion_enable   = ixgbe_dev_l2_tunnel_insertion_enable,
> +	.l2_tunnel_insertion_disable  = ixgbe_dev_l2_tunnel_insertion_disable,
> +	.l2_tunnel_stripping_enable   = ixgbe_dev_l2_tunnel_stripping_enable,
> +	.l2_tunnel_stripping_disable  = ixgbe_dev_l2_tunnel_stripping_disable,
> +	.l2_tunnel_forwarding_enable  = ixgbe_dev_l2_tunnel_forwarding_enable,
> +	.l2_tunnel_forwarding_disable = ixgbe_dev_l2_tunnel_forwarding_disable,
> +	.l2_tunnel_filter_add         = ixgbe_dev_l2_tunnel_filter_add,
> +	.l2_tunnel_filter_del         = ixgbe_dev_l2_tunnel_filter_del,
>  };
>
>  /*
> @@ -6341,6 +6383,335 @@ ixgbe_dev_l2_tunnel_disable(struct rte_eth_dev *dev,
>  	return ret;
>  }
>
> +static int
> +ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
> +		       struct rte_eth_l2_tunnel *l2_tunnel)
> +{
> +	int ret = 0;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	u32 i, rar_entries;
> +	u32 rar_low, rar_high;
> +
> +	if (hw->mac.type != ixgbe_mac_X550 &&
> +	    hw->mac.type != ixgbe_mac_X550EM_x) {
> +		return -ENOTSUP;
> +	}
> +
> +	rar_entries = ixgbe_get_num_rx_addrs(hw);
> +
> +	for (i = 1; i < rar_entries; i++) {
> +		rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(i));
> +		rar_low  = IXGBE_READ_REG(hw, IXGBE_RAL(i));
> +		if ((rar_high & IXGBE_RAH_AV) &&
> +		    (rar_high & IXGBE_RAH_ADTYPE) &&
> +		    ((rar_low & IXGBE_RAL_ETAG_FILTER_MASK) ==
> +		     l2_tunnel->tunnel_id)) {
> +			IXGBE_WRITE_REG(hw, IXGBE_RAL(i), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_RAH(i), 0);
> +
> +			ixgbe_clear_vmdq(hw, i, IXGBE_CLEAR_VMDQ_ALL);
> +
> +			return ret;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
> +		       struct rte_eth_l2_tunnel *l2_tunnel,
> +		       uint32_t pool)
> +{
> +	int ret = 0;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	u32 i, rar_entries;
> +	u32 rar_low, rar_high;
> +
> +	if (hw->mac.type != ixgbe_mac_X550 &&
> +	    hw->mac.type != ixgbe_mac_X550EM_x) {
> +		return -ENOTSUP;
> +	}
> +
> +	/* One entry for one tunnel. Try to remove potential existing entry. */
> +	ixgbe_e_tag_filter_del(dev, l2_tunnel);
> +
> +	rar_entries = ixgbe_get_num_rx_addrs(hw);
> +
> +	for (i = 1; i < rar_entries; i++) {
> +		rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(i));
> +		if (rar_high & IXGBE_RAH_AV) {
> +			continue;
> +		} else {
> +			ixgbe_set_vmdq(hw, i, pool);

Do we need to check the return result here?

> +			rar_high = IXGBE_RAH_AV | IXGBE_RAH_ADTYPE;
> +			rar_low  = l2_tunnel->tunnel_id;
> +
> +			IXGBE_WRITE_REG(hw, IXGBE_RAL(i), rar_low);
> +			IXGBE_WRITE_REG(hw, IXGBE_RAH(i), rar_high);
> +
> +			return ret;
> +		}
> +	}
> +
> +	PMD_INIT_LOG(NOTICE, "The table of E-tag forwarding rule is full."
> +		     " Please remove a rule before adding a new one.");
> +	return -1;
> +}
> +
> +/* Add l2 tunnel filter */
> +static int
> +ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
> +			       struct rte_eth_l2_tunnel *l2_tunnel,
> +			       uint32_t pool)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel->l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel, pool);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Delete l2 tunnel filter */
> +static int
> +ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
> +			       struct rte_eth_l2_tunnel *l2_tunnel)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel->l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +ixgbe_e_tag_forwarding_en_dis(struct rte_eth_dev *dev, bool en)
> +{
> +	int ret = 0;
> +	uint32_t ctrl;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	if (hw->mac.type != ixgbe_mac_X550 &&
> +	    hw->mac.type != ixgbe_mac_X550EM_x) {
> +		return -ENOTSUP;
> +	}
> +
> +	ctrl = IXGBE_READ_REG(hw, IXGBE_VT_CTL);
> +	ctrl &= ~IXGBE_VT_CTL_POOLING_MODE_MASK;
> +	if (en)
> +		ctrl |= IXGBE_VT_CTL_POOLING_MODE_ETAG;
> +	IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, ctrl);
> +
> +	return ret;
> +}
> +
> +/* Enable l2 tunnel forwarding */
> +static int
> +ixgbe_dev_l2_tunnel_forwarding_enable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_forwarding_en_dis(dev, 1);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Disable l2 tunnel forwarding */
> +static int
> +ixgbe_dev_l2_tunnel_forwarding_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_forwarding_en_dis(dev, 0);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +ixgbe_e_tag_insertion_en_dis(struct rte_eth_dev *dev,
> +			     struct rte_eth_l2_tunnel *l2_tunnel,
> +			     uint16_t vf_id,
> +			     bool en)
> +{
> +	int ret = 0;
> +	uint32_t vmtir, vmvir;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	if (vf_id >= dev->pci_dev->max_vfs) {
> +		PMD_DRV_LOG(ERR,
> +			    "VF id %u should be less than %u",
> +			    vf_id,
> +			    dev->pci_dev->max_vfs);
> +		return -EINVAL;
> +	}
> +
> +	if (hw->mac.type != ixgbe_mac_X550 &&
> +	    hw->mac.type != ixgbe_mac_X550EM_x) {
> +		return -ENOTSUP;
> +	}
> +
> +	if (en)
> +		vmtir = l2_tunnel->tunnel_id;
> +	else
> +		vmtir = 0;
> +
> +	IXGBE_WRITE_REG(hw, IXGBE_VMTIR(vf_id), vmtir);
> +
> +	vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf_id));
> +	vmvir &= ~IXGBE_VMVIR_TAGA_MASK;
> +	if (en)
> +		vmvir |= IXGBE_VMVIR_TAGA_ETAG_INSERT;
> +	IXGBE_WRITE_REG(hw, IXGBE_VMVIR(vf_id), vmvir);
> +
> +	return ret;
> +}
> +
> +/* Enable l2 tunnel tag insertion */
> +static int
> +ixgbe_dev_l2_tunnel_insertion_enable(struct rte_eth_dev *dev,
> +				     struct rte_eth_l2_tunnel *l2_tunnel,
> +				     uint16_t vf_id)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel->l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_insertion_en_dis(dev, l2_tunnel, vf_id, 1);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Disable l2 tunnel tag insertion */
> +static int
> +ixgbe_dev_l2_tunnel_insertion_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type,
> +			 uint16_t vf_id)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_insertion_en_dis(dev, NULL, vf_id, 0);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +ixgbe_e_tag_stripping_en_dis(struct rte_eth_dev *dev,
> +			     bool en)
> +{
> +	int ret = 0;
> +	uint32_t qde;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	if (hw->mac.type != ixgbe_mac_X550 &&
> +	    hw->mac.type != ixgbe_mac_X550EM_x) {
> +		return -ENOTSUP;
> +	}
> +
> +	qde = IXGBE_READ_REG(hw, IXGBE_QDE);
> +	if (en)
> +		qde |= IXGBE_QDE_STRIP_TAG;
> +	else
> +		qde &= ~IXGBE_QDE_STRIP_TAG;
> +	qde &= ~IXGBE_QDE_READ;
> +	qde |= IXGBE_QDE_WRITE;
> +	IXGBE_WRITE_REG(hw, IXGBE_QDE, qde);
> +
> +	return ret;
> +}
> +
> +/* Enable l2 tunnel tag stripping */
> +static int
> +ixgbe_dev_l2_tunnel_stripping_enable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_stripping_en_dis(dev, 1);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Disable l2 tunnel tag stripping */
> +static int
> +ixgbe_dev_l2_tunnel_stripping_disable
> +			(struct rte_eth_dev *dev,
> +			 enum rte_eth_l2_tunnel_type l2_tunnel_type)
> +{
> +	int ret = 0;
> +
> +	switch (l2_tunnel_type) {
> +	case RTE_L2_TUNNEL_TYPE_E_TAG:
> +		ret = ixgbe_e_tag_stripping_en_dis(dev, 0);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "Invalid tunnel type");
> +		ret = -1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
>  static struct rte_driver rte_ixgbe_driver = {
>  	.type = PMD_PDEV,
>  	.init = rte_ixgbe_pmd_init,
> --
> 1.9.3