From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shani Peretz
To: Viacheslav Ovsiienko
CC: Dariusz Sosnowski, dpdk stable
Subject: patch 'net/mlx5: fix control flow leakage for external SQ' has been queued to stable release 23.11.6
Date: Thu, 25 Dec 2025 11:18:13 +0200
Message-ID: <20251225091938.345892-52-shperetz@nvidia.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20251225091938.345892-1-shperetz@nvidia.com>
References: <20251221145746.763179-93-shperetz@nvidia.com> <20251225091938.345892-1-shperetz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/30/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
	https://github.com/shanipr/dpdk-stable

This queued commit can be viewed at:
	https://github.com/shanipr/dpdk-stable/commit/db2a376e8bd7ca39e35b895e4b8427967b1fbf78

Thanks.

Shani

---
>From db2a376e8bd7ca39e35b895e4b8427967b1fbf78 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko
Date: Tue, 18 Nov 2025 18:51:58 +0200
Subject: [PATCH] net/mlx5: fix control flow leakage for external SQ

[ upstream commit 3bf9f0f9f0beb8dcd4f3b316c3216a87bc9ab49f ]

There is a private API, rte_pmd_mlx5_external_sq_enable(), that allows
an application to create a Send Queue (SQ) on its own and then enable
it for use as an "external SQ".
On this enabling call, some implicit flows are created to provide
compliant SQ behavior - copying the metadata register, forwarding
queue-originated packets to the correct VF, and so on.

These implicit flows are marked as "external" ones, and there is no
cleanup on device start and stop for this kind of flow. The PMD also
has no knowledge of whether an external SQ is still in use by the
application, so implicit cleanup cannot be performed.

As a result, over multiple device start/stop cycles an application
re-creates and re-enables many external SQs, causing the implicit
flow tables to overflow.

To resolve this issue the rte_pmd_mlx5_external_sq_disable() API is
provided, which allows the application to notify the PMD that an
external SQ is no longer in use, so the related implicit flows can
be dismissed.

Fixes: 26e1eaf2dac4 ("net/mlx5: support device control for E-Switch default rule")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |  12 ++--
 drivers/net/mlx5/mlx5_flow_hw.c | 106 +++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_trigger.c |   2 +-
 drivers/net/mlx5/mlx5_txq.c     |  54 ++++++++++++++--
 drivers/net/mlx5/rte_pmd_mlx5.h |  18 ++++++
 drivers/net/mlx5/version.map    |   1 +
 6 files changed, 181 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 219ea462c9..1ebf584078 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2917,12 +2917,16 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
					 uint32_t sqn, bool external);
 int mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev,
-					  uint32_t sqn);
+					  uint32_t sqn, bool external);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev,
-						  uint32_t sqn,
-						  bool external);
-int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external);
+						  uint32_t sqn, bool external);
+int mlx5_flow_hw_destroy_tx_default_mreg_copy_flow(struct rte_eth_dev *dev,
+						   uint32_t sqn, bool external);
+int mlx5_flow_hw_create_tx_repr_matching_flow(struct rte_eth_dev *dev,
+					      uint32_t sqn, bool external);
+int mlx5_flow_hw_destroy_tx_repr_matching_flow(struct rte_eth_dev *dev,
+					       uint32_t sqn, bool external);
 int mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev);
 int mlx5_flow_actions_validate(struct rte_eth_dev *dev,
			       const struct rte_flow_actions_template_attr *attr,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 41910d801b..b66ed53141 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12197,7 +12197,7 @@ flow_hw_is_matching_sq_miss_flow(struct mlx5_hw_ctrl_flow *cf,
 }
 
 int
-mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
 {
	uint16_t port_id = dev->data->port_id;
	uint16_t proxy_port_id = dev->data->port_id;
@@ -12224,7 +12224,8 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
		return 0;
-	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	cf = external ? LIST_FIRST(&proxy_priv->hw_ext_ctrl_flows) :
+			LIST_FIRST(&proxy_priv->hw_ctrl_flows);
	while (cf != NULL) {
		cf_next = LIST_NEXT(cf, next);
		if (flow_hw_is_matching_sq_miss_flow(cf, dev, sqn)) {
@@ -12358,8 +12359,58 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev, uint32_t
					items, 0, copy_reg_action, 0, &flow_info, external);
 }
 
+static bool
+flow_hw_is_matching_tx_mreg_copy_flow(struct mlx5_hw_ctrl_flow *cf,
+				      struct rte_eth_dev *dev,
+				      uint32_t sqn)
+{
+	if (cf->owner_dev != dev)
+		return false;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY && cf->info.tx_repr_sq == sqn)
+		return true;
+	return false;
+}
+
+int
+mlx5_flow_hw_destroy_tx_default_mreg_copy_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
+{
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = dev->data->port_id;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	struct mlx5_hw_ctrl_flow *cf;
+	struct mlx5_hw_ctrl_flow *cf_next;
+	int ret;
+
+	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
+	if (ret) {
+		DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+			     "port must be present for default SQ miss flow rules to exist.",
+			     port_id);
+		return ret;
+	}
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->dr_ctx ||
+	    !proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+		return 0;
+	cf = external ? LIST_FIRST(&proxy_priv->hw_ext_ctrl_flows) :
+			LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	while (cf != NULL) {
+		cf_next = LIST_NEXT(cf, next);
+		if (flow_hw_is_matching_tx_mreg_copy_flow(cf, dev, sqn)) {
+			claim_zero(flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow));
+			LIST_REMOVE(cf, next);
+			mlx5_free(cf);
+		}
+		cf = cf_next;
+	}
+	return 0;
+}
+
 int
-mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
+mlx5_flow_hw_create_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
 {
	struct mlx5_priv *priv = dev->data->dev_private;
	struct mlx5_rte_flow_item_sq sq_spec = {
@@ -12416,6 +12467,55 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool e
					items, 0, actions, 0, &flow_info, external);
 }
 
+static bool
+flow_hw_is_tx_matching_repr_matching_flow(struct mlx5_hw_ctrl_flow *cf,
+					  struct rte_eth_dev *dev,
+					  uint32_t sqn)
+{
+	if (cf->owner_dev != dev)
+		return false;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH && cf->info.tx_repr_sq == sqn)
+		return true;
+	return false;
+}
+
+int
+mlx5_flow_hw_destroy_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool external)
+{
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = dev->data->port_id;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	struct mlx5_hw_ctrl_flow *cf;
+	struct mlx5_hw_ctrl_flow *cf_next;
+	int ret;
+
+	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
+	if (ret) {
+		DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+			     "port must be present for default SQ miss flow rules to exist.",
+			     port_id);
+		return ret;
+	}
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->dr_ctx ||
+	    !proxy_priv->hw_tx_repr_tagging_tbl)
+		return 0;
+	cf = external ? LIST_FIRST(&proxy_priv->hw_ext_ctrl_flows) :
+			LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	while (cf != NULL) {
+		cf_next = LIST_NEXT(cf, next);
+		if (flow_hw_is_tx_matching_repr_matching_flow(cf, dev, sqn)) {
+			claim_zero(flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow));
+			LIST_REMOVE(cf, next);
+			mlx5_free(cf);
+		}
+		cf = cf_next;
+	}
+	return 0;
+}
+
 int
 mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 51d848158c..3bda84e963 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1495,7 +1495,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
			}
		}
		if (config->dv_esw_en && config->repr_matching) {
-			if (mlx5_flow_hw_tx_repr_matching_flow(dev, queue, false)) {
+			if (mlx5_flow_hw_create_tx_repr_matching_flow(dev, queue, false)) {
				mlx5_txq_release(dev, i);
				goto error;
			}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 3f8d861180..d6f5790983 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1308,7 +1308,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
	priv = dev->data->dev_private;
	if ((!priv->representor && !priv->master) ||
	    !priv->sh->config.dv_esw_en) {
-		DRV_LOG(ERR, "Port %u must be represetnor or master port in E-Switch mode.",
+		DRV_LOG(ERR, "Port %u must be representor or master port in E-Switch mode.",
			port_id);
		rte_errno = EINVAL;
		return -rte_errno;
@@ -1329,9 +1329,9 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
		}
		if (priv->sh->config.repr_matching &&
-		    mlx5_flow_hw_tx_repr_matching_flow(dev, sq_num, true)) {
+		    mlx5_flow_hw_create_tx_repr_matching_flow(dev, sq_num, true)) {
			if (sq_miss_created)
-				mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num);
+				mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num, true);
			return -rte_errno;
		}
@@ -1339,7 +1339,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
		    priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
		    mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev, sq_num, true)) {
			if (sq_miss_created)
-				mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num);
+				mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num, true);
			return -rte_errno;
		}
		return 0;
@@ -1353,6 +1353,52 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
	return -rte_errno;
 }
 
+int
+rte_pmd_mlx5_external_sq_disable(uint16_t port_id, uint32_t sq_num)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return -rte_errno;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if ((!priv->representor && !priv->master) ||
+	    !priv->sh->config.dv_esw_en) {
+		DRV_LOG(ERR, "Port %u must be representor or master port in E-Switch mode.",
+			port_id);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (sq_num == 0) {
+		DRV_LOG(ERR, "Invalid SQ number.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+#ifdef HAVE_MLX5_HWS_SUPPORT
+	if (priv->sh->config.dv_flow_en == 2) {
+		if (priv->sh->config.fdb_def_rule &&
+		    mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num, true))
+			return -rte_errno;
+		if (priv->sh->config.repr_matching &&
+		    mlx5_flow_hw_destroy_tx_repr_matching_flow(dev, sq_num, true))
+			return -rte_errno;
+		if (!priv->sh->config.repr_matching &&
+		    priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
+		    mlx5_flow_hw_destroy_tx_default_mreg_copy_flow(dev, sq_num, true))
+			return -rte_errno;
+		return 0;
+	}
+#endif
+	/* Not supported for software steering. */
+	rte_errno = ENOTSUP;
+	return -rte_errno;
+}
+
 /**
  * Set the Tx queue dynamic timestamp (mask and offset)
  *
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index cc9340f71e..ee5c4a08e9 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -232,6 +232,24 @@ enum rte_pmd_mlx5_flow_engine_mode {
 __rte_experimental
 int rte_pmd_mlx5_flow_engine_set_mode(enum rte_pmd_mlx5_flow_engine_mode mode, uint32_t flags);
 
+/**
+ * Disable traffic for external SQ. Should be invoked by application
+ * before destroying the external SQ.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] sq_num
+ *   SQ HW number.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid sq_number or port type.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_sq_disable(uint16_t port_id, uint32_t sq_num);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 99f5ab754a..3561a1db2a 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -17,4 +17,5 @@ EXPERIMENTAL {
	rte_pmd_mlx5_external_sq_enable;
	# added in 23.03
	rte_pmd_mlx5_flow_engine_set_mode;
+	rte_pmd_mlx5_external_sq_disable;
 };
--
2.43.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2025-12-25 11:16:38.912054998 +0200
+++ 0052-net-mlx5-fix-control-flow-leakage-for-external-SQ.patch	2025-12-25 11:16:35.824918000 +0200
@@ -1 +1 @@
-From 3bf9f0f9f0beb8dcd4f3b316c3216a87bc9ab49f Mon Sep 17 00:00:00 2001
+From db2a376e8bd7ca39e35b895e4b8427967b1fbf78 Mon Sep 17 00:00:00 2001
@@ -3 +3 @@
-Date: Wed, 29 Oct 2025 17:57:09 +0200
+Date: Tue, 18 Nov 2025 18:51:58 +0200
@@ -5,0 +6,2 @@
+[ upstream commit 3bf9f0f9f0beb8dcd4f3b316c3216a87bc9ab49f ]
+
@@ -37 +39 @@
- drivers/net/mlx5/mlx5_txq.c     |  55 +++++++++++++++--
+ drivers/net/mlx5/mlx5_txq.c     |  54 ++++++++++++++--
@@ -39 +41,2 @@
- 5 files changed, 181 insertions(+), 12 deletions(-)
+ drivers/net/mlx5/version.map    |   1 +
+ 6 files changed, 181 insertions(+), 12 deletions(-)
@@ -42 +45 @@
-index c5905ebfac..6da3c74eb9 100644
+index 219ea462c9..1ebf584078 100644
@@ -45 +48 @@
-@@ -3563,12 +3563,16 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
+@@ -2917,12 +2917,16 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
@@ -67 +70 @@
-index d945c88eb0..eb3dcce59d 100644
+index 41910d801b..b66ed53141 100644
@@ -70 +73 @@
-@@ -15897,7 +15897,7 @@ flow_hw_is_matching_sq_miss_flow(struct mlx5_ctrl_flow_entry *cf,
+@@ -12197,7 +12197,7 @@ flow_hw_is_matching_sq_miss_flow(struct mlx5_hw_ctrl_flow *cf,
@@ -79 +82 @@
-@@ -15924,7 +15924,8 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -12224,7 +12224,8 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -89 +92 @@
-@@ -16058,8 +16059,58 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev, uint32_t
+@@ -12358,8 +12359,58 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev, uint32_t
@@ -94 +97 @@
-+flow_hw_is_matching_tx_mreg_copy_flow(struct mlx5_ctrl_flow_entry *cf,
++flow_hw_is_matching_tx_mreg_copy_flow(struct mlx5_hw_ctrl_flow *cf,
@@ -100 +103 @@
-+	if (cf->info.type == MLX5_CTRL_FLOW_TYPE_TX_META_COPY && cf->info.tx_repr_sq == sqn)
++	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY && cf->info.tx_repr_sq == sqn)
@@ -112,2 +115,2 @@
-+	struct mlx5_ctrl_flow_entry *cf;
-+	struct mlx5_ctrl_flow_entry *cf_next;
++	struct mlx5_hw_ctrl_flow *cf;
++	struct mlx5_hw_ctrl_flow *cf_next;
@@ -149 +152 @@
-@@ -16116,6 +16167,55 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool e
+@@ -12416,6 +12467,55 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn, bool e
@@ -154 +157 @@
-+flow_hw_is_tx_matching_repr_matching_flow(struct mlx5_ctrl_flow_entry *cf,
++flow_hw_is_tx_matching_repr_matching_flow(struct mlx5_hw_ctrl_flow *cf,
@@ -160 +163 @@
-+	if (cf->info.type == MLX5_CTRL_FLOW_TYPE_TX_REPR_MATCH && cf->info.tx_repr_sq == sqn)
++	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH && cf->info.tx_repr_sq == sqn)
@@ -172,2 +175,2 @@
-+	struct mlx5_ctrl_flow_entry *cf;
-+	struct mlx5_ctrl_flow_entry *cf_next;
++	struct mlx5_hw_ctrl_flow *cf;
++	struct mlx5_hw_ctrl_flow *cf_next;
@@ -206 +209 @@
-index e6acb56d4d..6acf398ccc 100644
+index 51d848158c..3bda84e963 100644
@@ -209 +212 @@
-@@ -1622,7 +1622,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
+@@ -1495,7 +1495,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
@@ -219 +222 @@
-index 834ca541d5..1d258f979c 100644
+index 3f8d861180..d6f5790983 100644
@@ -222 +225 @@
-@@ -1433,7 +1433,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+@@ -1308,7 +1308,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
@@ -231 +234 @@
-@@ -1454,9 +1454,9 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+@@ -1329,9 +1329,9 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
@@ -243 +246 @@
-@@ -1464,7 +1464,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+@@ -1339,7 +1339,7 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
@@ -252 +255 @@
-@@ -1478,6 +1478,53 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+@@ -1353,6 +1353,52 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
@@ -256 +258,0 @@
-+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_sq_disable, 25.11)
@@ -307 +309 @@
-index 4d4821afae..31f99e7a78 100644
+index cc9340f71e..ee5c4a08e9 100644
@@ -310,3 +312,3 @@
-@@ -484,6 +484,24 @@ typedef void (*rte_pmd_mlx5_driver_event_callback_t)(uint16_t port_id,
-						     const void *opaque);
-
+@@ -232,6 +232,24 @@ enum rte_pmd_mlx5_flow_engine_mode {
+ __rte_experimental
+ int rte_pmd_mlx5_flow_engine_set_mode(enum rte_pmd_mlx5_flow_engine_mode mode, uint32_t flags);
@@ -332,3 +334,13 @@
- /**
-  * Register mlx5 driver event callback.
-  *
+ #ifdef __cplusplus
+ }
+ #endif
+diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
+index 99f5ab754a..3561a1db2a 100644
+--- a/drivers/net/mlx5/version.map
++++ b/drivers/net/mlx5/version.map
+@@ -17,4 +17,5 @@ EXPERIMENTAL {
+	rte_pmd_mlx5_external_sq_enable;
+	# added in 23.03
+	rte_pmd_mlx5_flow_engine_set_mode;
++	rte_pmd_mlx5_external_sq_disable;
+ };