From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: "Dariusz Sosnowski"
Subject: [PATCH v5 18/18] net/mlx5: create control flow rules with HWS
Date: Thu, 20 Oct 2022 06:22:10 +0300
Message-ID: <20221020032210.30250-19-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221020032210.30250-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com> <20221020032210.30250-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Dariusz Sosnowski

This patch adds creation of the control flow rules required to receive
default traffic (based on the port configuration) with HWS.

Control flow rules are created on port start and destroyed on port stop.
Handling of the destruction of these rules was already implemented
before this patch.

Control flow rules are created if and only if flow isolation mode is
disabled, and the creation process is as follows:

- The port configuration is collected into a set of flags. Each flag
  corresponds to a certain Ethernet pattern type, defined by the
  mlx5_flow_ctrl_rx_eth_pattern_type enumeration. There is a separate
  flag for VLAN filtering. (A small standalone sketch of this flag
  collection is appended after the patch.)
- For each possible Ethernet pattern type:
  - For each possible RSS action configuration:
    - If the configuration flags do not match this combination, it is
      skipped.
    - A template table is created using this combination of pattern and
      actions templates (templates are fetched from the hw_ctrl_rx
      struct stored in the port's private data).
    - Flow rules are created in this table.

Signed-off-by: Dariusz Sosnowski
---
 doc/guides/rel_notes/release_22_11.rst | 1 +
 drivers/net/mlx5/mlx5.h | 4 +
 drivers/net/mlx5/mlx5_flow.h | 56 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 799 +++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxq.c | 3 +-
 drivers/net/mlx5/mlx5_trigger.c | 20 +-
 6 files changed, 881 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 1ec218a5d1..8e3412a7ff 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -245,6 +245,7 @@ New Features - Support of meter. - Support of counter. - Support of CT. + - Support of control flow and isolate mode. * **Rewritten pmdinfo script.** diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 87c90d58d7..911bb43344 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1636,6 +1636,8 @@ struct mlx5_hw_ctrl_flow { struct rte_flow *flow; }; +struct mlx5_flow_hw_ctrl_rx; + struct mlx5_priv { struct rte_eth_dev_data *dev_data; /* Pointer to device data. */ struct mlx5_dev_ctx_shared *sh; /* Shared device context. */ @@ -1767,6 +1769,8 @@ struct mlx5_priv { /* Management data for ASO connection tracking. */ struct mlx5_aso_ct_pool *hws_ctpool; /* HW steering's CT pool. */ struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. */ + struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx; + /**< HW steering templates used to create control flow rules. */ #endif }; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index edf45b814d..e9e4537700 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2113,6 +2113,62 @@ rte_col_2_mlx5_col(enum rte_color rcol) return MLX5_FLOW_COLOR_UNDEFINED; } +/* All types of Ethernet patterns used in control flow rules.
*/ +enum mlx5_flow_ctrl_rx_eth_pattern_type { + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL = 0, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN, + MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX, +}; + +/* All types of RSS actions used in control flow rules. */ +enum mlx5_flow_ctrl_rx_expanded_rss_type { + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_NON_IP = 0, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP, + MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX, +}; + +/** + * Contains pattern template, template table and its attributes for a single + * combination of Ethernet pattern and RSS action. Used to create control flow rules + * with HWS. + */ +struct mlx5_flow_hw_ctrl_rx_table { + struct rte_flow_template_table_attr attr; + struct rte_flow_pattern_template *pt; + struct rte_flow_template_table *tbl; +}; + +/* Contains all templates required to create control flow rules with HWS. */ +struct mlx5_flow_hw_ctrl_rx { + struct rte_flow_actions_template *rss[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX]; + struct mlx5_flow_hw_ctrl_rx_table tables[MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX] + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX]; +}; + +#define MLX5_CTRL_PROMISCUOUS (RTE_BIT32(0)) +#define MLX5_CTRL_ALL_MULTICAST (RTE_BIT32(1)) +#define MLX5_CTRL_BROADCAST (RTE_BIT32(2)) +#define MLX5_CTRL_IPV4_MULTICAST (RTE_BIT32(3)) +#define MLX5_CTRL_IPV6_MULTICAST (RTE_BIT32(4)) +#define MLX5_CTRL_DMAC (RTE_BIT32(5)) +#define MLX5_CTRL_VLAN_FILTER (RTE_BIT32(6)) + +int mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags); +void mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev); + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 49186c4339..84c891cab6 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -47,6 +47,11 @@ /* Lowest priority for HW non-root table. */ #define MLX5_HW_LOWEST_PRIO_NON_ROOT (UINT32_MAX) +/* Priorities for Rx control flow rules. */ +#define MLX5_HW_CTRL_RX_PRIO_L2 (MLX5_HW_LOWEST_PRIO_ROOT) +#define MLX5_HW_CTRL_RX_PRIO_L3 (MLX5_HW_LOWEST_PRIO_ROOT - 1) +#define MLX5_HW_CTRL_RX_PRIO_L4 (MLX5_HW_LOWEST_PRIO_ROOT - 2) + #define MLX5_HW_VLAN_PUSH_TYPE_IDX 0 #define MLX5_HW_VLAN_PUSH_VID_IDX 1 #define MLX5_HW_VLAN_PUSH_PCP_IDX 2 @@ -84,6 +89,72 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX] }, }; +/* Ethernet item spec for promiscuous mode. */ +static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = { + .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; +/* Ethernet item mask for promiscuous mode. */ +static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = { + .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + +/* Ethernet item spec for all multicast mode. 
*/ +static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = { + .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; +/* Ethernet item mask for all multicast mode. */ +static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = { + .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + +/* Ethernet item spec for IPv4 multicast traffic. */ +static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = { + .dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; +/* Ethernet item mask for IPv4 multicast traffic. */ +static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = { + .dst.addr_bytes = "\xff\xff\xff\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + +/* Ethernet item spec for IPv6 multicast traffic. */ +static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = { + .dst.addr_bytes = "\x33\x33\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; +/* Ethernet item mask for IPv6 multicast traffic. */ +static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = { + .dst.addr_bytes = "\xff\xff\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + +/* Ethernet item mask for unicast traffic. */ +static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = { + .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + +/* Ethernet item spec for broadcast. */ +static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = { + .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, +}; + /** * Set rxq flag. 
* @@ -6346,6 +6417,365 @@ flow_hw_create_vlan(struct rte_eth_dev *dev) return 0; } +static void +flow_hw_cleanup_ctrl_rx_tables(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + unsigned int j; + + if (!priv->hw_ctrl_rx) + return; + for (i = 0; i < MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX; ++i) { + for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) { + struct rte_flow_template_table *tbl = priv->hw_ctrl_rx->tables[i][j].tbl; + struct rte_flow_pattern_template *pt = priv->hw_ctrl_rx->tables[i][j].pt; + + if (tbl) + claim_zero(flow_hw_table_destroy(dev, tbl, NULL)); + if (pt) + claim_zero(flow_hw_pattern_template_destroy(dev, pt, NULL)); + } + } + for (i = 0; i < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++i) { + struct rte_flow_actions_template *at = priv->hw_ctrl_rx->rss[i]; + + if (at) + claim_zero(flow_hw_actions_template_destroy(dev, at, NULL)); + } + mlx5_free(priv->hw_ctrl_rx); + priv->hw_ctrl_rx = NULL; +} + +static uint64_t +flow_hw_ctrl_rx_rss_type_hash_types(const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + switch (rss_type) { + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_NON_IP: + return 0; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4: + return RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP: + return RTE_ETH_RSS_NONFRAG_IPV4_UDP; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP: + return RTE_ETH_RSS_NONFRAG_IPV4_TCP; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6: + return RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP: + return RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP: + return RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX; + default: + /* Should not reach here. */ + MLX5_ASSERT(false); + return 0; + } +} + +static struct rte_flow_actions_template * +flow_hw_create_ctrl_rx_rss_template(struct rte_eth_dev *dev, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_actions_template_attr attr = { + .ingress = 1, + }; + uint16_t queue[RTE_MAX_QUEUES_PER_PORT]; + struct rte_flow_action_rss rss_conf = { + .func = RTE_ETH_HASH_FUNCTION_DEFAULT, + .level = 0, + .types = 0, + .key_len = priv->rss_conf.rss_key_len, + .key = priv->rss_conf.rss_key, + .queue_num = priv->reta_idx_n, + .queue = queue, + }; + struct rte_flow_action actions[] = { + { + .type = RTE_FLOW_ACTION_TYPE_RSS, + .conf = &rss_conf, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + } + }; + struct rte_flow_action masks[] = { + { + .type = RTE_FLOW_ACTION_TYPE_RSS, + .conf = &rss_conf, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + } + }; + struct rte_flow_actions_template *at; + struct rte_flow_error error; + unsigned int i; + + MLX5_ASSERT(priv->reta_idx_n > 0 && priv->reta_idx); + /* Select proper RSS hash types and based on that configure the actions template. */ + rss_conf.types = flow_hw_ctrl_rx_rss_type_hash_types(rss_type); + if (rss_conf.types) { + for (i = 0; i < priv->reta_idx_n; ++i) + queue[i] = (*priv->reta_idx)[i]; + } else { + rss_conf.queue_num = 1; + queue[0] = (*priv->reta_idx)[0]; + } + at = flow_hw_actions_template_create(dev, &attr, actions, masks, &error); + if (!at) + DRV_LOG(ERR, + "Failed to create ctrl flow actions template: rte_errno(%d), type(%d): %s", + rte_errno, error.type, + error.message ? 
error.message : "(no stated reason)"); + return at; +} + +static uint32_t ctrl_rx_rss_priority_map[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX] = { + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_NON_IP] = MLX5_HW_CTRL_RX_PRIO_L2, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4] = MLX5_HW_CTRL_RX_PRIO_L3, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP] = MLX5_HW_CTRL_RX_PRIO_L4, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP] = MLX5_HW_CTRL_RX_PRIO_L4, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6] = MLX5_HW_CTRL_RX_PRIO_L3, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP] = MLX5_HW_CTRL_RX_PRIO_L4, + [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP] = MLX5_HW_CTRL_RX_PRIO_L4, +}; + +static uint32_t ctrl_rx_nb_flows_map[MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX] = { + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL] = 1, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST] = 1, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST] = 1, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN] = MLX5_MAX_VLAN_IDS, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST] = 1, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN] = MLX5_MAX_VLAN_IDS, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST] = 1, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN] = MLX5_MAX_VLAN_IDS, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC] = MLX5_MAX_UC_MAC_ADDRESSES, + [MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN] = + MLX5_MAX_UC_MAC_ADDRESSES * MLX5_MAX_VLAN_IDS, +}; + +static struct rte_flow_template_table_attr +flow_hw_get_ctrl_rx_table_attr(enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + return (struct rte_flow_template_table_attr){ + .flow_attr = { + .group = 0, + .priority = ctrl_rx_rss_priority_map[rss_type], + .ingress = 1, + }, + .nb_flows = ctrl_rx_nb_flows_map[eth_pattern_type], + }; +} + +static struct rte_flow_item +flow_hw_get_ctrl_rx_eth_item(const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type) +{ + struct rte_flow_item item = { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .mask = NULL, + }; + + switch (eth_pattern_type) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL: + item.mask = &ctrl_rx_eth_promisc_mask; + break; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST: + item.mask = &ctrl_rx_eth_mcast_mask; + break; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN: + item.mask = &ctrl_rx_eth_dmac_mask; + break; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + item.mask = &ctrl_rx_eth_ipv4_mcast_mask; + break; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + item.mask = &ctrl_rx_eth_ipv6_mcast_mask; + break; + default: + /* Should not reach here - ETH mask must be present. */ + item.type = RTE_FLOW_ITEM_TYPE_END; + MLX5_ASSERT(false); + break; + } + return item; +} + +static struct rte_flow_item +flow_hw_get_ctrl_rx_vlan_item(const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type) +{ + struct rte_flow_item item = { + .type = RTE_FLOW_ITEM_TYPE_VOID, + .mask = NULL, + }; + + switch (eth_pattern_type) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN: + item.type = RTE_FLOW_ITEM_TYPE_VLAN; + item.mask = &rte_flow_item_vlan_mask; + break; + default: + /* Nothing to update. 
*/ + break; + } + return item; +} + +static struct rte_flow_item +flow_hw_get_ctrl_rx_l3_item(const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct rte_flow_item item = { + .type = RTE_FLOW_ITEM_TYPE_VOID, + .mask = NULL, + }; + + switch (rss_type) { + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP: + item.type = RTE_FLOW_ITEM_TYPE_IPV4; + break; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP: + item.type = RTE_FLOW_ITEM_TYPE_IPV6; + break; + default: + /* Nothing to update. */ + break; + } + return item; +} + +static struct rte_flow_item +flow_hw_get_ctrl_rx_l4_item(const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct rte_flow_item item = { + .type = RTE_FLOW_ITEM_TYPE_VOID, + .mask = NULL, + }; + + switch (rss_type) { + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP: + item.type = RTE_FLOW_ITEM_TYPE_UDP; + break; + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP: + case MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP: + item.type = RTE_FLOW_ITEM_TYPE_TCP; + break; + default: + /* Nothing to update. */ + break; + } + return item; +} + +static struct rte_flow_pattern_template * +flow_hw_create_ctrl_rx_pattern_template + (struct rte_eth_dev *dev, + const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + const struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = 0, + .ingress = 1, + }; + struct rte_flow_item items[] = { + /* Matching patterns */ + flow_hw_get_ctrl_rx_eth_item(eth_pattern_type), + flow_hw_get_ctrl_rx_vlan_item(eth_pattern_type), + flow_hw_get_ctrl_rx_l3_item(rss_type), + flow_hw_get_ctrl_rx_l4_item(rss_type), + /* Terminate pattern */ + { .type = RTE_FLOW_ITEM_TYPE_END } + }; + + return flow_hw_pattern_template_create(dev, &attr, items, NULL); +} + +static int +flow_hw_create_ctrl_rx_tables(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + unsigned int j; + int ret; + + MLX5_ASSERT(!priv->hw_ctrl_rx); + priv->hw_ctrl_rx = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*priv->hw_ctrl_rx), + RTE_CACHE_LINE_SIZE, rte_socket_id()); + if (!priv->hw_ctrl_rx) { + DRV_LOG(ERR, "Failed to allocate memory for Rx control flow tables"); + rte_errno = ENOMEM; + return -rte_errno; + } + /* Create all pattern template variants. 
*/ + for (i = 0; i < MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX; ++i) { + enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type = i; + + for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) { + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type = j; + struct rte_flow_template_table_attr attr; + struct rte_flow_pattern_template *pt; + + attr = flow_hw_get_ctrl_rx_table_attr(eth_pattern_type, rss_type); + pt = flow_hw_create_ctrl_rx_pattern_template(dev, eth_pattern_type, + rss_type); + if (!pt) + goto err; + priv->hw_ctrl_rx->tables[i][j].attr = attr; + priv->hw_ctrl_rx->tables[i][j].pt = pt; + } + } + return 0; +err: + ret = rte_errno; + flow_hw_cleanup_ctrl_rx_tables(dev); + rte_errno = ret; + return -ret; +} + +void +mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx; + unsigned int i; + unsigned int j; + + if (!priv->dr_ctx) + return; + if (!priv->hw_ctrl_rx) + return; + hw_ctrl_rx = priv->hw_ctrl_rx; + for (i = 0; i < MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX; ++i) { + for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) { + struct mlx5_flow_hw_ctrl_rx_table *tmpls = &hw_ctrl_rx->tables[i][j]; + + if (tmpls->tbl) { + claim_zero(flow_hw_table_destroy(dev, tmpls->tbl, NULL)); + tmpls->tbl = NULL; + } + } + } + for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) { + if (hw_ctrl_rx->rss[j]) { + claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_rx->rss[j], NULL)); + hw_ctrl_rx->rss[j] = NULL; + } + } +} + /** * Configure port HWS resources. * @@ -6512,6 +6942,12 @@ flow_hw_configure(struct rte_eth_dev *dev, priv->nb_queue = nb_q_updated; rte_spinlock_init(&priv->hw_ctrl_lock); LIST_INIT(&priv->hw_ctrl_flows); + ret = flow_hw_create_ctrl_rx_tables(dev); + if (ret) { + rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to set up Rx control flow templates"); + goto err; + } /* Initialize meter library*/ if (port_attr->nb_meters) if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated)) @@ -6665,6 +7101,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) flow_hw_rxq_flag_set(dev, false); flow_hw_flush_all_ctrl_flows(dev); flow_hw_cleanup_tx_repr_tagging(dev); + flow_hw_cleanup_ctrl_rx_tables(dev); while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) { tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo); flow_hw_table_destroy(dev, tbl, NULL); @@ -8460,6 +8897,368 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn) items, 0, actions, 0); } +static uint32_t +__calc_pattern_flags(const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type) +{ + switch (eth_pattern_type) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL: + return MLX5_CTRL_PROMISCUOUS; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST: + return MLX5_CTRL_ALL_MULTICAST; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + return MLX5_CTRL_BROADCAST; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + return MLX5_CTRL_IPV4_MULTICAST; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + return MLX5_CTRL_IPV6_MULTICAST; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN: + return MLX5_CTRL_DMAC; + default: + /* Should not reach here. 
*/ + MLX5_ASSERT(false); + return 0; + } +} + +static uint32_t +__calc_vlan_flags(const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type) +{ + switch (eth_pattern_type) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN: + return MLX5_CTRL_VLAN_FILTER; + default: + return 0; + } +} + +static bool +eth_pattern_type_is_requested(const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type, + uint32_t flags) +{ + uint32_t pattern_flags = __calc_pattern_flags(eth_pattern_type); + uint32_t vlan_flags = __calc_vlan_flags(eth_pattern_type); + bool pattern_requested = !!(pattern_flags & flags); + bool consider_vlan = vlan_flags || (MLX5_CTRL_VLAN_FILTER & flags); + bool vlan_requested = !!(vlan_flags & flags); + + if (consider_vlan) + return pattern_requested && vlan_requested; + else + return pattern_requested; +} + +static bool +rss_type_is_requested(struct mlx5_priv *priv, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct rte_flow_actions_template *at = priv->hw_ctrl_rx->rss[rss_type]; + unsigned int i; + + for (i = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { + if (at->actions[i].type == RTE_FLOW_ACTION_TYPE_RSS) { + const struct rte_flow_action_rss *rss = at->actions[i].conf; + uint64_t rss_types = rss->types; + + if ((rss_types & priv->rss_conf.rss_hf) != rss_types) + return false; + } + } + return true; +} + +static const struct rte_flow_item_eth * +__get_eth_spec(const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern) +{ + switch (pattern) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL: + return &ctrl_rx_eth_promisc_spec; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST: + return &ctrl_rx_eth_mcast_spec; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + return &ctrl_rx_eth_bcast_spec; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + return &ctrl_rx_eth_ipv4_mcast_spec; + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + return &ctrl_rx_eth_ipv6_mcast_spec; + default: + /* This case should not be reached. */ + MLX5_ASSERT(false); + return NULL; + } +} + +static int +__flow_hw_ctrl_flows_single(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + const struct rte_flow_item_eth *eth_spec = __get_eth_spec(pattern_type); + struct rte_flow_item items[5]; + struct rte_flow_action actions[] = { + { .type = RTE_FLOW_ACTION_TYPE_RSS }, + { .type = RTE_FLOW_ACTION_TYPE_END }, + }; + + if (!eth_spec) + return -EINVAL; + memset(items, 0, sizeof(items)); + items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = eth_spec, + }; + items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VOID }; + items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type); + items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type); + items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END }; + /* Without VLAN filtering, only a single flow rule must be created. 
*/ + return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0); +} + +static int +__flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_item_eth *eth_spec = __get_eth_spec(pattern_type); + struct rte_flow_item items[5]; + struct rte_flow_action actions[] = { + { .type = RTE_FLOW_ACTION_TYPE_RSS }, + { .type = RTE_FLOW_ACTION_TYPE_END }, + }; + unsigned int i; + + if (!eth_spec) + return -EINVAL; + memset(items, 0, sizeof(items)); + items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = eth_spec, + }; + /* Optional VLAN for now will be VOID - will be filled later. */ + items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VLAN }; + items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type); + items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type); + items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END }; + /* Since VLAN filtering is done, create a single flow rule for each registered vid. */ + for (i = 0; i < priv->vlan_filter_n; ++i) { + uint16_t vlan = priv->vlan_filter[i]; + struct rte_flow_item_vlan vlan_spec = { + .tci = rte_cpu_to_be_16(vlan), + }; + + items[1].spec = &vlan_spec; + if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0)) + return -rte_errno; + } + return 0; +} + +static int +__flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct rte_flow_item_eth eth_spec; + struct rte_flow_item items[5]; + struct rte_flow_action actions[] = { + { .type = RTE_FLOW_ACTION_TYPE_RSS }, + { .type = RTE_FLOW_ACTION_TYPE_END }, + }; + const struct rte_ether_addr cmp = { + .addr_bytes = "\x00\x00\x00\x00\x00\x00", + }; + unsigned int i; + + RTE_SET_USED(pattern_type); + + memset(ð_spec, 0, sizeof(eth_spec)); + memset(items, 0, sizeof(items)); + items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = ð_spec, + }; + items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VOID }; + items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type); + items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type); + items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END }; + for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) { + struct rte_ether_addr *mac = &dev->data->mac_addrs[i]; + + if (!memcmp(mac, &cmp, sizeof(*mac))) + continue; + memcpy(ð_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); + if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0)) + return -rte_errno; + } + return 0; +} + +static int +__flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_item_eth eth_spec; + struct rte_flow_item items[5]; + struct rte_flow_action actions[] = { + { .type = RTE_FLOW_ACTION_TYPE_RSS }, + { .type = RTE_FLOW_ACTION_TYPE_END }, + }; + const struct rte_ether_addr cmp = { + .addr_bytes = "\x00\x00\x00\x00\x00\x00", + }; + unsigned int i; + unsigned int j; + + RTE_SET_USED(pattern_type); + + memset(ð_spec, 0, sizeof(eth_spec)); + memset(items, 0, sizeof(items)); + items[0] = (struct 
rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = ð_spec, + }; + items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VLAN }; + items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type); + items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type); + items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END }; + for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) { + struct rte_ether_addr *mac = &dev->data->mac_addrs[i]; + + if (!memcmp(mac, &cmp, sizeof(*mac))) + continue; + memcpy(ð_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); + for (j = 0; j < priv->vlan_filter_n; ++j) { + uint16_t vlan = priv->vlan_filter[j]; + struct rte_flow_item_vlan vlan_spec = { + .tci = rte_cpu_to_be_16(vlan), + }; + + items[1].spec = &vlan_spec; + if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0)) + return -rte_errno; + } + } + return 0; +} + +static int +__flow_hw_ctrl_flows(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type, + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type) +{ + switch (pattern_type) { + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_ALL_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST: + return __flow_hw_ctrl_flows_single(dev, tbl, pattern_type, rss_type); + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_BCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV4_MCAST_VLAN: + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN: + return __flow_hw_ctrl_flows_single_vlan(dev, tbl, pattern_type, rss_type); + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC: + return __flow_hw_ctrl_flows_unicast(dev, tbl, pattern_type, rss_type); + case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN: + return __flow_hw_ctrl_flows_unicast_vlan(dev, tbl, pattern_type, rss_type); + default: + /* Should not reach here. */ + MLX5_ASSERT(false); + rte_errno = EINVAL; + return -EINVAL; + } +} + + +int +mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx; + unsigned int i; + unsigned int j; + int ret = 0; + + RTE_SET_USED(priv); + RTE_SET_USED(flags); + if (!priv->dr_ctx) { + DRV_LOG(DEBUG, "port %u Control flow rules will not be created. 
" + "HWS needs to be configured beforehand.", + dev->data->port_id); + return 0; + } + if (!priv->hw_ctrl_rx) { + DRV_LOG(ERR, "port %u Control flow rules templates were not created.", + dev->data->port_id); + rte_errno = EINVAL; + return -rte_errno; + } + hw_ctrl_rx = priv->hw_ctrl_rx; + for (i = 0; i < MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_MAX; ++i) { + const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type = i; + + if (!eth_pattern_type_is_requested(eth_pattern_type, flags)) + continue; + for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) { + const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type = j; + struct rte_flow_actions_template *at; + struct mlx5_flow_hw_ctrl_rx_table *tmpls = &hw_ctrl_rx->tables[i][j]; + const struct mlx5_flow_template_table_cfg cfg = { + .attr = tmpls->attr, + .external = 0, + }; + + if (!hw_ctrl_rx->rss[rss_type]) { + at = flow_hw_create_ctrl_rx_rss_template(dev, rss_type); + if (!at) + return -rte_errno; + hw_ctrl_rx->rss[rss_type] = at; + } else { + at = hw_ctrl_rx->rss[rss_type]; + } + if (!rss_type_is_requested(priv, rss_type)) + continue; + if (!tmpls->tbl) { + tmpls->tbl = flow_hw_table_create(dev, &cfg, + &tmpls->pt, 1, &at, 1, NULL); + if (!tmpls->tbl) { + DRV_LOG(ERR, "port %u Failed to create template table " + "for control flow rules. Unable to create " + "control flow rules.", + dev->data->port_id); + return -rte_errno; + } + } + + ret = __flow_hw_ctrl_flows(dev, tmpls->tbl, eth_pattern_type, rss_type); + if (ret) { + DRV_LOG(ERR, "port %u Failed to create control flow rule.", + dev->data->port_id); + return ret; + } + } + } + return 0; +} + void mlx5_flow_meter_uninit(struct rte_eth_dev *dev) { diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index b1543b480e..b7818f9598 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2568,13 +2568,14 @@ mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues, struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_ind_table_obj *ind_tbl; int ret; + uint32_t max_queues_n = priv->rxqs_n > queues_n ? priv->rxqs_n : queues_n; /* * Allocate maximum queues for shared action as queue number * maybe modified later. */ ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) + - (standalone ? priv->rxqs_n : queues_n) * + (standalone ? 
max_queues_n : queues_n) * sizeof(uint16_t), 0, SOCKET_ID_ANY); if (!ind_tbl) { rte_errno = ENOMEM; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 8c9d5c1b13..4b821a1076 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1415,6 +1415,9 @@ mlx5_dev_stop(struct rte_eth_dev *dev) mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_GEN, true); mlx5_flow_meter_rxq_flush(dev); mlx5_action_handle_detach(dev); +#ifdef HAVE_MLX5_HWS_SUPPORT + mlx5_flow_hw_cleanup_ctrl_rx_templates(dev); +#endif mlx5_rx_intr_vec_disable(dev); priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS; priv->sh->port[priv->dev_port - 1].devx_ih_port_id = RTE_MAX_ETHPORTS; @@ -1435,6 +1438,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_sh_config *config = &priv->sh->config; + uint64_t flags = 0; unsigned int i; int ret; @@ -1481,7 +1485,18 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev) } else { DRV_LOG(INFO, "port %u FDB default rule is disabled", dev->data->port_id); } - return 0; + if (priv->isolated) + return 0; + if (dev->data->promiscuous) + flags |= MLX5_CTRL_PROMISCUOUS; + if (dev->data->all_multicast) + flags |= MLX5_CTRL_ALL_MULTICAST; + else + flags |= MLX5_CTRL_BROADCAST | MLX5_CTRL_IPV4_MULTICAST | MLX5_CTRL_IPV6_MULTICAST; + flags |= MLX5_CTRL_DMAC; + if (priv->vlan_filter_n) + flags |= MLX5_CTRL_VLAN_FILTER; + return mlx5_flow_hw_ctrl_flows(dev, flags); error: ret = rte_errno; mlx5_flow_hw_flush_ctrl_flows(dev); @@ -1717,6 +1732,9 @@ mlx5_traffic_restart(struct rte_eth_dev *dev) { if (dev->data->dev_started) { mlx5_traffic_disable(dev); +#ifdef HAVE_MLX5_HWS_SUPPORT + mlx5_flow_hw_cleanup_ctrl_rx_templates(dev); +#endif return mlx5_traffic_enable(dev); } return 0; -- 2.25.1
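
For reference, the sketch below condenses how mlx5_traffic_enable_hws() in this
patch collects the port configuration into the MLX5_CTRL_* flags passed to
mlx5_flow_hw_ctrl_flows(). It is a standalone illustration only: the port_cfg
struct is a hypothetical stand-in for the relevant rte_eth_dev/mlx5_priv state,
and plain bit shifts replace RTE_BIT32(), so it compiles outside the driver.

#include <stdint.h>
#include <stdio.h>

/* Flag values mirror the MLX5_CTRL_* defines added to mlx5_flow.h;
 * plain shifts are used here instead of RTE_BIT32() to stay standalone. */
#define CTRL_PROMISCUOUS    (UINT32_C(1) << 0)
#define CTRL_ALL_MULTICAST  (UINT32_C(1) << 1)
#define CTRL_BROADCAST      (UINT32_C(1) << 2)
#define CTRL_IPV4_MULTICAST (UINT32_C(1) << 3)
#define CTRL_IPV6_MULTICAST (UINT32_C(1) << 4)
#define CTRL_DMAC           (UINT32_C(1) << 5)
#define CTRL_VLAN_FILTER    (UINT32_C(1) << 6)

/* Hypothetical stand-in for the port state read on port start. */
struct port_cfg {
	int isolated;               /* flow isolation mode enabled */
	int promiscuous;            /* dev->data->promiscuous */
	int all_multicast;          /* dev->data->all_multicast */
	unsigned int vlan_filter_n; /* number of configured VLAN filters */
};

/* Collect control flow flags the same way the patch does on port start. */
static uint32_t
ctrl_flow_flags(const struct port_cfg *cfg)
{
	uint32_t flags = 0;

	if (cfg->isolated)
		return 0; /* No control flow rules in isolated mode. */
	if (cfg->promiscuous)
		flags |= CTRL_PROMISCUOUS;
	if (cfg->all_multicast)
		flags |= CTRL_ALL_MULTICAST;
	else
		flags |= CTRL_BROADCAST | CTRL_IPV4_MULTICAST | CTRL_IPV6_MULTICAST;
	flags |= CTRL_DMAC;
	if (cfg->vlan_filter_n)
		flags |= CTRL_VLAN_FILTER;
	return flags;
}

int
main(void)
{
	struct port_cfg cfg = { .vlan_filter_n = 2 };

	/* Defaults request broadcast, IPv4/IPv6 multicast, DMAC and VLAN rules. */
	printf("control flow flags: 0x%02x\n", ctrl_flow_flags(&cfg));
	return 0;
}

With promiscuous and all-multicast disabled, broadcast and IPv4/IPv6 multicast
rules are requested alongside the unicast DMAC (and VLAN) rules, matching the
behavior described in the commit message.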