From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rasesh Mody
To: ferruh.yigit@intel.com
Cc: Shahed Shaikh, dev@dpdk.org, Dept-EngDPDKDev@cavium.com
Date: Wed, 13 Dec 2017 22:36:03 -0800
Message-Id: <1513233363-4241-4-git-send-email-rasesh.mody@cavium.com>
X-Mailer: git-send-email 1.7.10.3
In-Reply-To: <1511555745-13793-1-git-send-email-rasesh.mody@cavium.com>
References: <1511555745-13793-1-git-send-email-rasesh.mody@cavium.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 3/3] net/qede: add support for GENEVE tunneling offload
List-Id: DPDK patches and discussions

From: Shahed Shaikh

This patch refactors the existing VXLAN tunneling offload code and enables
the following features for GENEVE:
- destination UDP port configuration
- checksum offloads
- filter configuration

Signed-off-by: Shahed Shaikh
---
 drivers/net/qede/qede_ethdev.c | 518 ++++++++++++++++++++++++++--------------
 drivers/net/qede/qede_ethdev.h |  10 +-
 drivers/net/qede/qede_rxtx.c   |   4 +-
 drivers/net/qede/qede_rxtx.h   |   4 +-
 4 files changed, 350 insertions(+), 186 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 0128cec..68e99c5 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -15,7 +15,7 @@ static int64_t timer_period = 1; /* VXLAN tunnel classification mapping */ -const struct _qede_vxlan_tunn_types { +const struct _qede_udp_tunn_types { uint16_t rte_filter_type; enum ecore_filter_ucast_type qede_type; enum ecore_tunn_clss qede_tunn_clss; @@ -612,48 +612,118 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast) } static int +qede_tunnel_update(struct qede_dev *qdev, + struct ecore_tunnel_info *tunn_info) +{ + struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); + enum _ecore_status_t rc = ECORE_INVAL; + struct ecore_hwfn *p_hwfn; + struct ecore_ptt *p_ptt; + int i; + + for_each_hwfn(edev, i) { + p_hwfn = &edev->hwfns[i]; + p_ptt = IS_PF(edev) ?
ecore_ptt_acquire(p_hwfn) : NULL; + rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, + tunn_info, ECORE_SPQ_MODE_CB, NULL); + if (IS_PF(edev)) + ecore_ptt_release(p_hwfn, p_ptt); + + if (rc != ECORE_SUCCESS) + break; + } + + return rc; +} + +static int qede_vxlan_enable(struct rte_eth_dev *eth_dev, uint8_t clss, - bool enable, bool mask) + bool enable) { struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); enum _ecore_status_t rc = ECORE_INVAL; - struct ecore_ptt *p_ptt; struct ecore_tunnel_info tunn; - struct ecore_hwfn *p_hwfn; - int i; + + if (qdev->vxlan.enable == enable) + return ECORE_SUCCESS; memset(&tunn, 0, sizeof(struct ecore_tunnel_info)); - tunn.vxlan.b_update_mode = enable; - tunn.vxlan.b_mode_enabled = mask; + tunn.vxlan.b_update_mode = true; + tunn.vxlan.b_mode_enabled = enable; tunn.b_update_rx_cls = true; tunn.b_update_tx_cls = true; tunn.vxlan.tun_cls = clss; - for_each_hwfn(edev, i) { - p_hwfn = &edev->hwfns[i]; - if (IS_PF(edev)) { - p_ptt = ecore_ptt_acquire(p_hwfn); - if (!p_ptt) - return -EAGAIN; - } else { - p_ptt = NULL; - } - rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, - &tunn, ECORE_SPQ_MODE_CB, NULL); - if (rc != ECORE_SUCCESS) { - DP_ERR(edev, "Failed to update tunn_clss %u\n", - tunn.vxlan.tun_cls); - if (IS_PF(edev)) - ecore_ptt_release(p_hwfn, p_ptt); - break; - } - } + tunn.vxlan_port.b_update_port = true; + tunn.vxlan_port.port = enable ? QEDE_VXLAN_DEF_PORT : 0; + rc = qede_tunnel_update(qdev, &tunn); if (rc == ECORE_SUCCESS) { qdev->vxlan.enable = enable; qdev->vxlan.udp_port = (enable) ? QEDE_VXLAN_DEF_PORT : 0; - DP_INFO(edev, "vxlan is %s\n", enable ? "enabled" : "disabled"); + DP_INFO(edev, "vxlan is %s, UDP port = %d\n", + enable ? "enabled" : "disabled", qdev->vxlan.udp_port); + } else { + DP_ERR(edev, "Failed to update tunn_clss %u\n", + tunn.vxlan.tun_cls); + } + + return rc; +} + +static int +qede_geneve_enable(struct rte_eth_dev *eth_dev, uint8_t clss, + bool enable) +{ + struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); + struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); + enum _ecore_status_t rc = ECORE_INVAL; + struct ecore_tunnel_info tunn; + + memset(&tunn, 0, sizeof(struct ecore_tunnel_info)); + tunn.l2_geneve.b_update_mode = true; + tunn.l2_geneve.b_mode_enabled = enable; + tunn.ip_geneve.b_update_mode = true; + tunn.ip_geneve.b_mode_enabled = enable; + tunn.l2_geneve.tun_cls = clss; + tunn.ip_geneve.tun_cls = clss; + tunn.b_update_rx_cls = true; + tunn.b_update_tx_cls = true; + + tunn.geneve_port.b_update_port = true; + tunn.geneve_port.port = enable ? QEDE_GENEVE_DEF_PORT : 0; + + rc = qede_tunnel_update(qdev, &tunn); + if (rc == ECORE_SUCCESS) { + qdev->geneve.enable = enable; + qdev->geneve.udp_port = (enable) ? QEDE_GENEVE_DEF_PORT : 0; + DP_INFO(edev, "GENEVE is %s, UDP port = %d\n", + enable ? 
"enabled" : "disabled", qdev->geneve.udp_port); + } else { + DP_ERR(edev, "Failed to update tunn_clss %u\n", + clss); + } + + return rc; +} + +static int +qede_tunn_enable(struct rte_eth_dev *eth_dev, uint8_t clss, + enum rte_eth_tunnel_type tunn_type, bool enable) +{ + int rc = -EINVAL; + + switch (tunn_type) { + case RTE_TUNNEL_TYPE_VXLAN: + rc = qede_vxlan_enable(eth_dev, clss, enable); + break; + case RTE_TUNNEL_TYPE_GENEVE: + rc = qede_geneve_enable(eth_dev, clss, enable); + break; + default: + rc = -EINVAL; + break; } return rc; @@ -1367,7 +1437,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev) DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_TCP_TSO | - DEV_TX_OFFLOAD_VXLAN_TNL_TSO); + DEV_TX_OFFLOAD_VXLAN_TNL_TSO | + DEV_TX_OFFLOAD_GENEVE_TNL_TSO); memset(&link, 0, sizeof(struct qed_link_output)); qdev->ops->common->get_link(edev, &link); @@ -1873,6 +1944,7 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev, RTE_PTYPE_L4_UDP, RTE_PTYPE_TUNNEL_VXLAN, RTE_PTYPE_L4_FRAG, + RTE_PTYPE_TUNNEL_GENEVE, /* Inner */ RTE_PTYPE_INNER_L2_ETHER, RTE_PTYPE_INNER_L2_ETHER_VLAN, @@ -2221,74 +2293,36 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } static int -qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev, - struct rte_eth_udp_tunnel *tunnel_udp, - bool add) +qede_udp_dst_port_del(struct rte_eth_dev *eth_dev, + struct rte_eth_udp_tunnel *tunnel_udp) { struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); struct ecore_tunnel_info tunn; /* @DPDK */ - struct ecore_hwfn *p_hwfn; - struct ecore_ptt *p_ptt; uint16_t udp_port; - int rc, i; + int rc; PMD_INIT_FUNC_TRACE(edev); memset(&tunn, 0, sizeof(tunn)); - if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) { - /* Enable VxLAN tunnel if needed before UDP port update using - * default MAC/VLAN classification. - */ - if (add) { - if (qdev->vxlan.udp_port == tunnel_udp->udp_port) { - DP_INFO(edev, - "UDP port %u was already configured\n", - tunnel_udp->udp_port); - return ECORE_SUCCESS; - } - /* Enable VXLAN if it was not enabled while adding - * VXLAN filter. 
- */ - if (!qdev->vxlan.enable) { - rc = qede_vxlan_enable(eth_dev, - ECORE_TUNN_CLSS_MAC_VLAN, true, true); - if (rc != ECORE_SUCCESS) { - DP_ERR(edev, "Failed to enable VXLAN " - "prior to updating UDP port\n"); - return rc; - } - } - udp_port = tunnel_udp->udp_port; - } else { - if (qdev->vxlan.udp_port != tunnel_udp->udp_port) { - DP_ERR(edev, "UDP port %u doesn't exist\n", - tunnel_udp->udp_port); - return ECORE_INVAL; - } - udp_port = 0; + + switch (tunnel_udp->prot_type) { + case RTE_TUNNEL_TYPE_VXLAN: + if (qdev->vxlan.udp_port != tunnel_udp->udp_port) { + DP_ERR(edev, "UDP port %u doesn't exist\n", + tunnel_udp->udp_port); + return ECORE_INVAL; } + udp_port = 0; tunn.vxlan_port.b_update_port = true; tunn.vxlan_port.port = udp_port; - for_each_hwfn(edev, i) { - p_hwfn = &edev->hwfns[i]; - if (IS_PF(edev)) { - p_ptt = ecore_ptt_acquire(p_hwfn); - if (!p_ptt) - return -EAGAIN; - } else { - p_ptt = NULL; - } - rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn, - ECORE_SPQ_MODE_CB, NULL); - if (rc != ECORE_SUCCESS) { - DP_ERR(edev, "Unable to config UDP port %u\n", - tunn.vxlan_port.port); - if (IS_PF(edev)) - ecore_ptt_release(p_hwfn, p_ptt); - return rc; - } + + rc = qede_tunnel_update(qdev, &tunn); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Unable to config UDP port %u\n", + tunn.vxlan_port.port); + return rc; } qdev->vxlan.udp_port = udp_port; @@ -2296,26 +2330,145 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) * VXLAN filters have reached 0 then VxLAN offload can be be * disabled. */ - if (!add && qdev->vxlan.enable && qdev->vxlan.num_filters == 0) + if (qdev->vxlan.enable && qdev->vxlan.num_filters == 0) return qede_vxlan_enable(eth_dev, - ECORE_TUNN_CLSS_MAC_VLAN, false, true); + ECORE_TUNN_CLSS_MAC_VLAN, false); + + break; + + case RTE_TUNNEL_TYPE_GENEVE: + if (qdev->geneve.udp_port != tunnel_udp->udp_port) { + DP_ERR(edev, "UDP port %u doesn't exist\n", + tunnel_udp->udp_port); + return ECORE_INVAL; + } + + udp_port = 0; + + tunn.geneve_port.b_update_port = true; + tunn.geneve_port.port = udp_port; + + rc = qede_tunnel_update(qdev, &tunn); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Unable to config UDP port %u\n", + tunn.vxlan_port.port); + return rc; + } + + qdev->vxlan.udp_port = udp_port; + /* If the request is to delete UDP port and if the number of + * GENEVE filters have reached 0 then GENEVE offload can be be + * disabled. 
+ */ + if (qdev->geneve.enable && qdev->geneve.num_filters == 0) + return qede_geneve_enable(eth_dev, + ECORE_TUNN_CLSS_MAC_VLAN, false); + + break; + + default: + return ECORE_INVAL; } return 0; -} -static int -qede_udp_dst_port_del(struct rte_eth_dev *eth_dev, - struct rte_eth_udp_tunnel *tunnel_udp) -{ - return qede_conf_udp_dst_port(eth_dev, tunnel_udp, false); } - static int qede_udp_dst_port_add(struct rte_eth_dev *eth_dev, struct rte_eth_udp_tunnel *tunnel_udp) { - return qede_conf_udp_dst_port(eth_dev, tunnel_udp, true); + struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); + struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); + struct ecore_tunnel_info tunn; /* @DPDK */ + uint16_t udp_port; + int rc; + + PMD_INIT_FUNC_TRACE(edev); + + memset(&tunn, 0, sizeof(tunn)); + + switch (tunnel_udp->prot_type) { + case RTE_TUNNEL_TYPE_VXLAN: + if (qdev->vxlan.udp_port == tunnel_udp->udp_port) { + DP_INFO(edev, + "UDP port %u for VXLAN was already configured\n", + tunnel_udp->udp_port); + return ECORE_SUCCESS; + } + + /* Enable VxLAN tunnel with default MAC/VLAN classification if + * it was not enabled while adding VXLAN filter before UDP port + * update. + */ + if (!qdev->vxlan.enable) { + rc = qede_vxlan_enable(eth_dev, + ECORE_TUNN_CLSS_MAC_VLAN, true); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Failed to enable VXLAN " + "prior to updating UDP port\n"); + return rc; + } + } + udp_port = tunnel_udp->udp_port; + + tunn.vxlan_port.b_update_port = true; + tunn.vxlan_port.port = udp_port; + + rc = qede_tunnel_update(qdev, &tunn); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Unable to config UDP port %u for VXLAN\n", + udp_port); + return rc; + } + + DP_INFO(edev, "Updated UDP port %u for VXLAN\n", udp_port); + + qdev->vxlan.udp_port = udp_port; + break; + + case RTE_TUNNEL_TYPE_GENEVE: + if (qdev->geneve.udp_port == tunnel_udp->udp_port) { + DP_INFO(edev, + "UDP port %u for GENEVE was already configured\n", + tunnel_udp->udp_port); + return ECORE_SUCCESS; + } + + /* Enable GENEVE tunnel with default MAC/VLAN classification if + * it was not enabled while adding GENEVE filter before UDP port + * update. 
+ */ + if (!qdev->geneve.enable) { + rc = qede_geneve_enable(eth_dev, + ECORE_TUNN_CLSS_MAC_VLAN, true); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Failed to enable GENEVE " + "prior to updating UDP port\n"); + return rc; + } + } + udp_port = tunnel_udp->udp_port; + + tunn.geneve_port.b_update_port = true; + tunn.geneve_port.port = udp_port; + + rc = qede_tunnel_update(qdev, &tunn); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Unable to config UDP port %u for GENEVE\n", + udp_port); + return rc; + } + + DP_INFO(edev, "Updated UDP port %u for GENEVE\n", udp_port); + + qdev->geneve.udp_port = udp_port; + break; + + default: + return ECORE_INVAL; + } + + return 0; } static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type, @@ -2382,113 +2535,116 @@ static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type, return ECORE_SUCCESS; } -static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev, - enum rte_filter_op filter_op, - const struct rte_eth_tunnel_filter_conf *conf) +static int +_qede_tunn_filter_config(struct rte_eth_dev *eth_dev, + const struct rte_eth_tunnel_filter_conf *conf, + __attribute__((unused)) enum rte_filter_op filter_op, + enum ecore_tunn_clss *clss, + bool add) { struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); - enum ecore_filter_ucast_type type; - enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS; struct ecore_filter_ucast ucast = {0}; - char str[80]; + enum ecore_filter_ucast_type type; uint16_t filter_type = 0; + char str[80]; int rc; - PMD_INIT_FUNC_TRACE(edev); + filter_type = conf->filter_type; + /* Determine if the given filter classification is supported */ + qede_get_ecore_tunn_params(filter_type, &type, clss, str); + if (*clss == MAX_ECORE_TUNN_CLSS) { + DP_ERR(edev, "Unsupported filter type\n"); + return -EINVAL; + } + /* Init tunnel ucast params */ + rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type); + if (rc != ECORE_SUCCESS) { + DP_ERR(edev, "Unsupported Tunnel filter type 0x%x\n", + conf->filter_type); + return rc; + } + DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n", + str, filter_op, ucast.type); - switch (filter_op) { - case RTE_ETH_FILTER_ADD: - if (IS_VF(edev)) - return qede_vxlan_enable(eth_dev, - ECORE_TUNN_CLSS_MAC_VLAN, true, true); + ucast.opcode = add ? 
ECORE_FILTER_ADD : ECORE_FILTER_REMOVE; - filter_type = conf->filter_type; - /* Determine if the given filter classification is supported */ - qede_get_ecore_tunn_params(filter_type, &type, &clss, str); - if (clss == MAX_ECORE_TUNN_CLSS) { - DP_ERR(edev, "Unsupported filter type\n"); - return -EINVAL; - } - /* Init tunnel ucast params */ - rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type); - if (rc != ECORE_SUCCESS) { - DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n", - conf->filter_type); - return rc; + /* Skip MAC/VLAN if filter is based on VNI */ + if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) { + rc = qede_mac_int_ops(eth_dev, &ucast, add); + if ((rc == 0) && add) { + /* Enable accept anyvlan */ + qede_config_accept_any_vlan(qdev, true); } - DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n", - str, filter_op, ucast.type); - - ucast.opcode = ECORE_FILTER_ADD; + } else { + rc = qede_ucast_filter(eth_dev, &ucast, add); + if (rc == 0) + rc = ecore_filter_ucast_cmd(edev, &ucast, + ECORE_SPQ_MODE_CB, NULL); + } - /* Skip MAC/VLAN if filter is based on VNI */ - if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) { - rc = qede_mac_int_ops(eth_dev, &ucast, 1); - if (rc == 0) { - /* Enable accept anyvlan */ - qede_config_accept_any_vlan(qdev, true); - } - } else { - rc = qede_ucast_filter(eth_dev, &ucast, 1); - if (rc == 0) - rc = ecore_filter_ucast_cmd(edev, &ucast, - ECORE_SPQ_MODE_CB, NULL); - } + return rc; +} - if (rc != ECORE_SUCCESS) - return rc; +static int +qede_tunn_filter_config(struct rte_eth_dev *eth_dev, + enum rte_filter_op filter_op, + const struct rte_eth_tunnel_filter_conf *conf) +{ + struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev); + struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); + enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS; + bool add; + int rc; - qdev->vxlan.num_filters++; - qdev->vxlan.filter_type = filter_type; - if (!qdev->vxlan.enable) - return qede_vxlan_enable(eth_dev, clss, true, true); + PMD_INIT_FUNC_TRACE(edev); - break; + switch (filter_op) { + case RTE_ETH_FILTER_ADD: + add = true; + break; case RTE_ETH_FILTER_DELETE: - if (IS_VF(edev)) - return qede_vxlan_enable(eth_dev, - ECORE_TUNN_CLSS_MAC_VLAN, false, true); + add = false; + break; + default: + DP_ERR(edev, "Unsupported operation %d\n", filter_op); + return -EINVAL; + } - filter_type = conf->filter_type; - /* Determine if the given filter classification is supported */ - qede_get_ecore_tunn_params(filter_type, &type, &clss, str); - if (clss == MAX_ECORE_TUNN_CLSS) { - DP_ERR(edev, "Unsupported filter type\n"); - return -EINVAL; - } - /* Init tunnel ucast params */ - rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type); - if (rc != ECORE_SUCCESS) { - DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n", - conf->filter_type); - return rc; - } - DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n", - str, filter_op, ucast.type); + if (IS_VF(edev)) + return qede_tunn_enable(eth_dev, + ECORE_TUNN_CLSS_MAC_VLAN, + conf->tunnel_type, add); - ucast.opcode = ECORE_FILTER_REMOVE; + rc = _qede_tunn_filter_config(eth_dev, conf, filter_op, &clss, add); + if (rc != ECORE_SUCCESS) + return rc; - if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) { - rc = qede_mac_int_ops(eth_dev, &ucast, 0); - } else { - rc = qede_ucast_filter(eth_dev, &ucast, 0); - if (rc == 0) - rc = ecore_filter_ucast_cmd(edev, &ucast, - ECORE_SPQ_MODE_CB, NULL); + if (add) { + if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN) { + qdev->vxlan.num_filters++; + qdev->vxlan.filter_type = conf->filter_type; + } else { /* GENEVE */ + qdev->geneve.num_filters++; 
+ qdev->geneve.filter_type = conf->filter_type; } - if (rc != ECORE_SUCCESS) - return rc; - qdev->vxlan.num_filters--; + if (!qdev->vxlan.enable || !qdev->geneve.enable) + return qede_tunn_enable(eth_dev, clss, + conf->tunnel_type, + true); + } else { + if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN) + qdev->vxlan.num_filters--; + else /*GENEVE*/ + qdev->geneve.num_filters--; /* Disable VXLAN if VXLAN filters become 0 */ - if (qdev->vxlan.num_filters == 0) - return qede_vxlan_enable(eth_dev, clss, false, true); - break; - default: - DP_ERR(edev, "Unsupported operation %d\n", filter_op); - return -EINVAL; + if ((qdev->vxlan.num_filters == 0) || + (qdev->geneve.num_filters == 0)) + return qede_tunn_enable(eth_dev, clss, + conf->tunnel_type, + false); } return 0; @@ -2508,13 +2664,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev, case RTE_ETH_FILTER_TUNNEL: switch (filter_conf->tunnel_type) { case RTE_TUNNEL_TYPE_VXLAN: + case RTE_TUNNEL_TYPE_GENEVE: DP_INFO(edev, "Packet steering to the specified Rx queue" - " is not supported with VXLAN tunneling"); - return(qede_vxlan_tunn_config(eth_dev, filter_op, + " is not supported with UDP tunneling"); + return(qede_tunn_filter_config(eth_dev, filter_op, filter_conf)); /* Place holders for future tunneling support */ - case RTE_TUNNEL_TYPE_GENEVE: case RTE_TUNNEL_TYPE_TEREDO: case RTE_TUNNEL_TYPE_NVGRE: case RTE_TUNNEL_TYPE_IP_IN_GRE: diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h index 021de5c..7e55baf 100644 --- a/drivers/net/qede/qede_ethdev.h +++ b/drivers/net/qede/qede_ethdev.h @@ -166,11 +166,14 @@ struct qede_fdir_info { SLIST_HEAD(fdir_list_head, qede_fdir_entry)fdir_list_head; }; -struct qede_vxlan_tunn { +/* IANA assigned default UDP ports for encapsulation protocols */ +#define QEDE_VXLAN_DEF_PORT (4789) +#define QEDE_GENEVE_DEF_PORT (6081) + +struct qede_udp_tunn { bool enable; uint16_t num_filters; uint16_t filter_type; -#define QEDE_VXLAN_DEF_PORT (4789) uint16_t udp_port; }; @@ -202,7 +205,8 @@ struct qede_dev { SLIST_HEAD(uc_list_head, qede_ucast_entry) uc_list_head; uint16_t num_uc_addr; bool handle_hw_err; - struct qede_vxlan_tunn vxlan; + struct qede_udp_tunn vxlan; + struct qede_udp_tunn geneve; struct qede_fdir_info fdir_info; bool vlan_strip_flg; char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE]; diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c index 01a24e5..184f0e1 100644 --- a/drivers/net/qede/qede_rxtx.c +++ b/drivers/net/qede/qede_rxtx.c @@ -1792,7 +1792,9 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags) if (((tx_ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_VXLAN) || ((tx_ol_flags & PKT_TX_TUNNEL_MASK) == - PKT_TX_TUNNEL_MPLSINUDP)) { + PKT_TX_TUNNEL_MPLSINUDP) || + ((tx_ol_flags & PKT_TX_TUNNEL_MASK) == + PKT_TX_TUNNEL_GENEVE)) { /* Check against max which is Tunnel IPv6 + ext */ if (unlikely(txq->nb_tx_avail < ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT)) diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h index acf9e47..6214c97 100644 --- a/drivers/net/qede/qede_rxtx.h +++ b/drivers/net/qede/qede_rxtx.h @@ -73,7 +73,8 @@ ETH_RSS_IPV6 |\ ETH_RSS_NONFRAG_IPV6_TCP |\ ETH_RSS_NONFRAG_IPV6_UDP |\ - ETH_RSS_VXLAN) + ETH_RSS_VXLAN |\ + ETH_RSS_GENEVE) #define QEDE_TXQ_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS) @@ -151,6 +152,7 @@ PKT_TX_QINQ_PKT | \ PKT_TX_VLAN_PKT | \ PKT_TX_TUNNEL_VXLAN | \ + PKT_TX_TUNNEL_GENEVE | \ PKT_TX_TUNNEL_MPLSINUDP) #define QEDE_TX_OFFLOAD_NOTSUP_MASK \ -- 1.7.10.3
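
For reference (not part of the patch itself): the GENEVE UDP port configuration added here is reached through the generic ethdev tunnel API rather than any qede-specific call. A minimal sketch, assuming a valid port_id and omitting device setup and error handling; the helper name enable_geneve_port is illustrative only:

#include <rte_ethdev.h>

/* Sketch: advertise UDP port 6081 (the IANA GENEVE port, matching
 * QEDE_GENEVE_DEF_PORT in this patch) as a GENEVE tunnel port. The
 * ethdev layer dispatches this to the PMD's udp_tunnel_port_add
 * callback, i.e. qede_udp_dst_port_add() introduced above.
 */
static int enable_geneve_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel_udp = {
		.udp_port = 6081,
		.prot_type = RTE_TUNNEL_TYPE_GENEVE,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel_udp);
}

Removing the port with rte_eth_dev_udp_tunnel_port_del() takes the same structure; per the delete path in this patch, GENEVE classification is turned off again once the port is removed and no GENEVE filters remain.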