From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Dariusz Sosnowski
Cc: Ori Kam, dpdk stable
Subject: patch 'net/mlx5: fix template clean up of FDB control flow rule' has been queued to stable release 23.11.1
Date: Sat, 13 Apr 2024 20:49:43 +0800
Message-ID: <20240413125005.725659-103-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240413125005.725659-1-xuemingl@nvidia.com>
References: <20240305094757.439387-1-xuemingl@nvidia.com>
 <20240413125005.725659-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b1749f6ed202bc6413c51119aac17a32b41c09eb

Thanks.

Xueming Li

---
>From b1749f6ed202bc6413c51119aac17a32b41c09eb Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski
Date: Wed, 6 Mar 2024 21:21:48 +0100
Subject: [PATCH] net/mlx5: fix template clean up of FDB control flow rule
Cc: Xueming Li

[ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ]

This patch refactors the creation and clean up of templates used for
FDB control flow rules, when HWS is enabled.

All pattern and actions templates, and template tables are stored in a
separate structure, `mlx5_flow_hw_ctrl_fdb`. It is allocated if and only
if E-Switch is enabled.

During HWS clean up, all of these templates are explicitly destroyed,
instead of relying on general template clean up.

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow.h    |  19 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++--------------
 3 files changed, 166 insertions(+), 114 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e2c6fe0d00..33401d96e4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1870,11 +1870,7 @@ struct mlx5_priv {
 	rte_spinlock_t hw_ctrl_lock;
 	LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
 	LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
-	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
-	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
-	struct rte_flow_template_table *hw_esw_zero_tbl;
-	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
-	struct rte_flow_template_table *hw_lacp_rx_tbl;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
 	struct rte_flow_actions_template *hw_tx_repr_tagging_at;
 	struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index edc273c518..7a5e334a83 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2446,6 +2446,25 @@ struct mlx5_flow_hw_ctrl_rx {
 			[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX];
 };
 
+/* Contains all templates required for control flow rules in FDB with HWS. */
+struct mlx5_flow_hw_ctrl_fdb {
+	struct rte_flow_pattern_template *esw_mgr_items_tmpl;
+	struct rte_flow_actions_template *regc_jump_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
+	struct rte_flow_pattern_template *regc_sq_items_tmpl;
+	struct rte_flow_actions_template *port_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
+	struct rte_flow_pattern_template *port_items_tmpl;
+	struct rte_flow_actions_template *jump_one_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_zero_tbl;
+	struct rte_flow_pattern_template *tx_meta_items_tmpl;
+	struct rte_flow_actions_template *tx_meta_actions_tmpl;
+	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+	struct rte_flow_pattern_template *lacp_rx_items_tmpl;
+	struct rte_flow_actions_template *lacp_rx_actions_tmpl;
+	struct rte_flow_template_table *hw_lacp_rx_tbl;
+};
+
 #define MLX5_CTRL_PROMISCUOUS    (RTE_BIT32(0))
 #define MLX5_CTRL_ALL_MULTICAST  (RTE_BIT32(1))
 #define MLX5_CTRL_BROADCAST      (RTE_BIT32(2))
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9de5552bc8..938d9b5824 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8445,6 +8445,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
 	return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error);
 }
 
+/**
+ * Cleans up all template tables and pattern, and actions templates used for
+ * FDB control flow rules.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+static void
+flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
+
+	if (!priv->hw_ctrl_fdb)
+		return;
+	hw_ctrl_fdb = priv->hw_ctrl_fdb;
+	/* Clean up templates used for LACP default miss table. */
+	if (hw_ctrl_fdb->hw_lacp_rx_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL));
+	if (hw_ctrl_fdb->lacp_rx_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->lacp_rx_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default Tx metadata copy. */
+	if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL));
+	if (hw_ctrl_fdb->tx_meta_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->tx_meta_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default FDB jump rule. */
+	if (hw_ctrl_fdb->hw_esw_zero_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL));
+	if (hw_ctrl_fdb->jump_one_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - non-root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL));
+	if (hw_ctrl_fdb->regc_sq_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL));
+	if (hw_ctrl_fdb->regc_jump_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev,
+			   hw_ctrl_fdb->regc_jump_actions_tmpl, NULL));
+	if (hw_ctrl_fdb->esw_mgr_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl,
+			   NULL));
+	/* Clean up templates structure for FDB control flow rules. */
+	mlx5_free(hw_ctrl_fdb);
+	priv->hw_ctrl_fdb = NULL;
+}
+
 /*
  * Create a table on the root group to for the LACP traffic redirecting.
  *
@@ -8494,110 +8560,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
  * @return
  *   0 on success, negative values otherwise
  */
-static __rte_unused int
+static int
 flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
-	struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
-	struct rte_flow_pattern_template *port_items_tmpl = NULL;
-	struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
-	struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL;
-	struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
-	struct rte_flow_actions_template *port_actions_tmpl = NULL;
-	struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
-	struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
-	struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	uint32_t xmeta = priv->sh->config.dv_xmeta_en;
 	uint32_t repr_matching = priv->sh->config.repr_matching;
-	int ret;
 
+	MLX5_ASSERT(priv->hw_ctrl_fdb == NULL);
+	hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY);
+	if (!hw_ctrl_fdb) {
+		DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto err;
+	}
+	priv->hw_ctrl_fdb = hw_ctrl_fdb;
 	/* Create templates and table for default SQ miss flow rules - root table. */
-	esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
-	if (!esw_mgr_items_tmpl) {
+	hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->esw_mgr_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
 			" template for control flows", dev->data->port_id);
 		goto err;
 	}
-	regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error);
-	if (!regc_jump_actions_tmpl) {
+	hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template
+		(dev, error);
+	if (!hw_ctrl_fdb->regc_jump_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
-	priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
-		(dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_root_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
+		(dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl,
+		 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default SQ miss flow rules - non-root table. */
-	regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
-	if (!regc_sq_items_tmpl) {
+	hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->regc_sq_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
-	if (!port_actions_tmpl) {
+	hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
+	if (!hw_ctrl_fdb->port_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create port action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
-	priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
-								     port_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table
+		(dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl,
+		 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default FDB jump flow rules. */
-	port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
-	if (!port_items_tmpl) {
+	hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->port_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+	hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
 		(dev, MLX5_HW_LOWEST_USABLE_GROUP, error);
-	if (!jump_one_actions_tmpl) {
+	if (!hw_ctrl_fdb->jump_one_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
-	priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
-							       jump_one_actions_tmpl,
-							       error);
-	if (!priv->hw_esw_zero_tbl) {
+	hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table
+		(dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl,
+		 error);
+	if (!hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default Tx metadata copy flow rule. */
 	if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
-		tx_meta_items_tmpl =
+		hw_ctrl_fdb->tx_meta_items_tmpl =
 			flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error);
-		if (!tx_meta_items_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		tx_meta_actions_tmpl =
+		hw_ctrl_fdb->tx_meta_actions_tmpl =
 			flow_hw_create_tx_default_mreg_copy_actions_template(dev, error);
-		if (!tx_meta_actions_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
-		priv->hw_tx_meta_cpy_tbl =
-			flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl,
-								  tx_meta_actions_tmpl, error);
-		if (!priv->hw_tx_meta_cpy_tbl) {
+		hw_ctrl_fdb->hw_tx_meta_cpy_tbl =
+			flow_hw_create_tx_default_mreg_copy_table
+				(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+				 hw_ctrl_fdb->tx_meta_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) {
 			DRV_LOG(ERR, "port %u failed to create table for default"
 				" Tx metadata copy flow rule", dev->data->port_id);
 			goto err;
@@ -8605,71 +8670,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
 	}
 	/* Create LACP default miss table. */
 	if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
-		lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
-		if (!lacp_rx_items_tmpl) {
+		hw_ctrl_fdb->lacp_rx_items_tmpl =
+			flow_hw_create_lacp_rx_pattern_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create pattern template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error);
-		if (!lacp_rx_actions_tmpl) {
+		hw_ctrl_fdb->lacp_rx_actions_tmpl =
+			flow_hw_create_lacp_rx_actions_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create actions template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl,
-								    lacp_rx_actions_tmpl, error);
-		if (!priv->hw_lacp_rx_tbl) {
+		hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table
+			(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+			 hw_ctrl_fdb->lacp_rx_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_lacp_rx_tbl) {
 			DRV_LOG(ERR, "port %u failed to create template table for"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
 	}
 	return 0;
+
 err:
-	/* Do not overwrite the rte_errno. */
-	ret = -rte_errno;
-	if (ret == 0)
-		ret = rte_flow_error_set(error, EINVAL,
-					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					 "Failed to create control tables.");
-	if (priv->hw_tx_meta_cpy_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL);
-		priv->hw_tx_meta_cpy_tbl = NULL;
-	}
-	if (priv->hw_esw_zero_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
-		priv->hw_esw_zero_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
-		priv->hw_esw_sq_miss_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_root_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
-		priv->hw_esw_sq_miss_root_tbl = NULL;
-	}
-	if (lacp_rx_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL);
-	if (tx_meta_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
-	if (jump_one_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
-	if (port_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
-	if (regc_jump_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
-	if (lacp_rx_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL);
-	if (tx_meta_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
-	if (port_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
-	if (regc_sq_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
-	if (esw_mgr_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
-	return ret;
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
+	return -EINVAL;
 }
 
 static void
@@ -9640,6 +9668,7 @@ err:
 	}
 	mlx5_flow_quota_destroy(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_free_vport_actions(priv);
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
 		if (priv->hw_drop[i])
@@ -9698,6 +9727,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		return;
 	flow_hw_rxq_flag_set(dev, false);
 	flow_hw_flush_all_ctrl_flows(dev);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
 	flow_hw_cleanup_ctrl_rx_tables(dev);
 	while (!LIST_EMPTY(&priv->flow_hw_grp)) {
@@ -11983,8 +12013,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 			proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			     proxy_port_id, port_id);
@@ -12016,7 +12047,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 		actions[2] = (struct rte_flow_action) {
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		};
-		ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+		ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+					       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl,
 					       items, 0, actions, 0, &flow_info, external);
 		if (ret) {
 			DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
@@ -12047,7 +12079,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 		.type = RTE_FLOW_ACTION_TYPE_END,
 	};
 	flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
-	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+	ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+				       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl,
 				       items, 0, actions, 0, &flow_info, external);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
@@ -12093,8 +12126,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
 	proxy_priv = proxy_dev->data->dev_private;
 	if (!proxy_priv->dr_ctx)
 		return 0;
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl)
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
 		return 0;
 	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
 	while (cf != NULL) {
@@ -12161,7 +12195,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 			proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_zero_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			     proxy_port_id, port_id);
@@ -12169,7 +12203,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	return flow_hw_create_ctrl_flow(dev, proxy_dev,
-					proxy_priv->hw_esw_zero_tbl,
+					proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl,
 					items, 0, actions, 0, &flow_info, false);
 }
 
@@ -12221,10 +12255,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 	};
 
 	MLX5_ASSERT(priv->master);
-	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+	if (!priv->dr_ctx ||
+	    !priv->hw_ctrl_fdb ||
+	    !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
 		return 0;
 	return flow_hw_create_ctrl_flow(dev, dev,
-					priv->hw_tx_meta_cpy_tbl,
+					priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl,
 					eth_all, 0, copy_reg_action, 0, &flow_info, false);
 }
 
@@ -12316,10 +12352,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
 		.type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
 	};
 
-	if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
+	if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl)
 		return 0;
-	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
-					miss_action, 0, &flow_info, false);
+	return flow_hw_create_ctrl_flow(dev, dev,
+					priv->hw_ctrl_fdb->hw_lacp_rx_tbl,
+					eth_lacp, 0, miss_action, 0, &flow_info, false);
 }
 
 static uint32_t
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-04-13 20:43:08.169094189 +0800
+++ 0103-net-mlx5-fix-template-clean-up-of-FDB-control-flow-r.patch	2024-04-13 20:43:05.097753801 +0800
@@ -1 +1 @@
-From 48db3b61c3b81c6efcd343b7929a000eb998cb0b Mon Sep 17 00:00:00 2001
+From b1749f6ed202bc6413c51119aac17a32b41c09eb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 6ff8f322e0..0091a2459c 100644
+index e2c6fe0d00..33401d96e4 100644
@@ -30 +32 @@
-@@ -1894,11 +1894,7 @@ struct mlx5_priv {
+@@ -1870,11 +1870,7 @@ struct mlx5_priv {
@@ -44 +46 @@
-index ff3830a888..34b5e0f45b 100644
+index edc273c518..7a5e334a83 100644
@@ -47 +49 @@
-@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx {
+@@ -2446,6 +2446,25 @@ struct mlx5_flow_hw_ctrl_rx {
@@ -74 +76 @@
-index a96c829045..feeb071b4b 100644
+index 9de5552bc8..938d9b5824 100644
@@ -77 +79 @@
-@@ -9363,6 +9363,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
+@@ -8445,6 +8445,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
@@ -150 +152 @@
-@@ -9412,110 +9478,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
+@@ -8494,110 +8560,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
@@ -306 +308 @@
-@@ -9523,71 +9588,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
+@@ -8605,71 +8670,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
@@ -391,2 +393,2 @@
-@@ -10619,6 +10647,7 @@ err:
- 	action_template_drop_release(dev);
+@@ -9640,6 +9668,7 @@ err:
+ 	}
@@ -399,2 +401,2 @@
-@@ -10681,6 +10710,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
- 	dev->flow_fp_ops = &rte_flow_fp_default_ops;
+@@ -9698,6 +9727,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
+ 	return;
@@ -406,2 +408,2 @@
- 	action_template_drop_release(dev);
-@@ -13259,8 +13289,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+ 	while (!LIST_EMPTY(&priv->flow_hw_grp)) {
+@@ -11983,8 +12013,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -419 +421 @@
-@@ -13292,7 +13323,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+@@ -12016,7 +12047,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -429 +431 @@
-@@ -13323,7 +13355,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+@@ -12047,7 +12079,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -439 +441 @@
-@@ -13369,8 +13402,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -12093,8 +12126,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -451 +453 @@
-@@ -13437,7 +13471,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -12161,7 +12195,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -460 +462 @@
-@@ -13445,7 +13479,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -12169,7 +12203,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -469 +471 @@
-@@ -13497,10 +13531,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
+@@ -12221,10 +12255,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
@@ -484 +486 @@
-@@ -13592,10 +13628,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
+@@ -12316,10 +12352,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)