From: Xueming Li
To: Dariusz Sosnowski
Cc: dpdk stable
Subject: patch 'net/mlx5: fix missing flow rules for external SQ' has been queued to stable release 22.11.4
Date: Mon, 11 Dec 2023 18:11:52 +0800
Message-ID: <20231211101226.2122-88-xuemingl@nvidia.com>
In-Reply-To: <20231211101226.2122-1-xuemingl@nvidia.com>
References: <20231022142250.10324-1-xuemingl@nvidia.com> <20231211101226.2122-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 22.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/13/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=ca79cce293c5d090f36797b7efa8b00e54790ba0

Thanks.

Xueming Li

---
>From ca79cce293c5d090f36797b7efa8b00e54790ba0 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski
Date: Thu, 9 Nov 2023 16:55:46 +0800
Subject: [PATCH] net/mlx5: fix missing flow rules for external SQ
Cc: Xueming Li

[ upstream commit 86f2907c2ab6977980131f848e79f3ca05250279 ]

The mlx5 PMD exposes a capability to register an externally created SQ
as if it were an SQ of a given representor port. Registration creates
control flow rules in the FDB domain, used to forward traffic between
the SQ and the destination represented port.
Before this patch, if representor matching was enabled (device argument
repr_matching_en equal to 1, the default configuration), then during
registration of external SQs the mlx5 PMD would not create control flow
rules in the NIC Tx domain. This caused an issue with packet metadata:
if a packet sent on an external SQ had metadata attached, the metadata
was lost when the packet crossed from the NIC Tx domain to the FDB
domain.

With representor matching disabled everything works correctly, because
in that mode there is a single global flow rule preserving packet
metadata, which matches all traffic in the NIC Tx domain. With
representor matching enabled, NIC Tx flow rules are created per SQ.

This patch fixes that behavior. If representor matching is enabled,
NIC Tx flow rules are now created for each external SQ registered
through rte_pmd_mlx5_external_sq_enable().

This patch also adds the ability to destroy SQ miss flow rules for a
given port and SQ number. This is required for the error rollback path
in rte_pmd_mlx5_external_sq_enable().
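For context, a minimal caller-side sketch of the API this patch fixes
(not part of the patch itself; the port id and SQ number are
placeholders, and error handling simply relies on the in-PMD rollback
described above):

```c
/*
 * Hypothetical usage sketch: registering an externally created SQ
 * with a representor port. rte_pmd_mlx5_external_sq_enable() is the
 * public API from rte_pmd_mlx5.h touched by this fix.
 */
#include <rte_pmd_mlx5.h>

static int
register_external_sq(uint16_t repr_port_id, uint32_t sq_num)
{
	/*
	 * With this fix, when repr_matching_en=1 the call creates both
	 * the FDB SQ miss rules and the per-SQ NIC Tx rule preserving
	 * packet metadata; on failure, partially created SQ miss rules
	 * are rolled back inside the PMD before the call returns.
	 */
	return rte_pmd_mlx5_external_sq_enable(repr_port_id, sq_num);
}
```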
Fixes: 26e1eaf2dac4 ("net/mlx5: support device control for E-Switch default rule")

Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |  40 ++++++++++++
 drivers/net/mlx5/mlx5_flow.h    |   2 +
 drivers/net/mlx5/mlx5_flow_hw.c | 107 +++++++++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_txq.c     |  12 +++-
 4 files changed, 149 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fa8931e8b5..8a46ba90b0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1639,10 +1639,50 @@ struct mlx5_obj_ops {

 #define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields)

+enum mlx5_hw_ctrl_flow_type {
+	MLX5_HW_CTRL_FLOW_TYPE_GENERAL,
+	MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT,
+	MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS,
+	MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_JUMP,
+	MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY,
+	MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH,
+	MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
+	MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+};
+
+/** Additional info about control flow rule. */
+struct mlx5_hw_ctrl_flow_info {
+	/** Determines the kind of control flow rule. */
+	enum mlx5_hw_ctrl_flow_type type;
+	union {
+		/**
+		 * If control flow is a SQ miss flow (root or not),
+		 * then fields contains matching SQ number.
+		 */
+		uint32_t esw_mgr_sq;
+		/**
+		 * If control flow is a Tx representor matching,
+		 * then fields contains matching SQ number.
+		 */
+		uint32_t tx_repr_sq;
+	};
+};
+
+/** Entry for tracking control flow rules in HWS. */
 struct mlx5_hw_ctrl_flow {
 	LIST_ENTRY(mlx5_hw_ctrl_flow) next;
+	/**
+	 * Owner device is a port on behalf of which flow rule was created.
+	 *
+	 * It's different from the port which really created the flow rule
+	 * if and only if flow rule is created on transfer proxy port
+	 * on behalf of representor port.
+	 */
 	struct rte_eth_dev *owner_dev;
+	/** Pointer to flow rule handle. */
 	struct rte_flow *flow;
+	/** Additional information about the control flow rule. */
+	struct mlx5_hw_ctrl_flow_info info;
 };

 struct mlx5_flow_hw_ctrl_rx;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 52edc4c961..f03734f991 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2582,6 +2582,8 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);

 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
					 uint32_t sqn);
+int mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev,
+					  uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 102f67a925..3f3ab4859b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8437,6 +8437,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
  *   Pointer to flow rule actions.
  * @param action_template_idx
  *   Index of an action template associated with @p table.
+ * @param info
+ *   Additional info about control flow rule.
  *
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
@@ -8448,7 +8450,8 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
			 struct rte_flow_item items[],
			 uint8_t item_template_idx,
			 struct rte_flow_action actions[],
-			 uint8_t action_template_idx)
+			 uint8_t action_template_idx,
+			 struct mlx5_hw_ctrl_flow_info *info)
 {
	struct mlx5_priv *priv = proxy_dev->data->dev_private;
	uint32_t queue = CTRL_QUEUE_ID(priv);
@@ -8495,6 +8498,10 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
	}
	entry->owner_dev = owner_dev;
	entry->flow = flow;
+	if (info)
+		entry->info = *info;
+	else
+		entry->info.type = MLX5_HW_CTRL_FLOW_TYPE_GENERAL;
	LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next);
	rte_spinlock_unlock(&priv->hw_ctrl_lock);
	return 0;
@@ -8698,6 +8705,10 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	};
	struct rte_flow_item items[3] = { { 0 } };
	struct rte_flow_action actions[3] = { { 0 } };
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT,
+		.esw_mgr_sq = sqn,
+	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
@@ -8753,7 +8764,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
-				       items, 0, actions, 0);
+				       items, 0, actions, 0, &flow_info);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -8782,8 +8793,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	actions[1] = (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
+	flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
-				       items, 0, actions, 0);
+				       items, 0, actions, 0, &flow_info);
	if (ret) {
		DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
			port_id, sqn, ret);
@@ -8792,6 +8804,58 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
	return 0;
 }

+static bool
+flow_hw_is_matching_sq_miss_flow(struct mlx5_hw_ctrl_flow *cf,
+				 struct rte_eth_dev *dev,
+				 uint32_t sqn)
+{
+	if (cf->owner_dev != dev)
+		return false;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS_ROOT && cf->info.esw_mgr_sq == sqn)
+		return true;
+	if (cf->info.type == MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS && cf->info.esw_mgr_sq == sqn)
+		return true;
+	return false;
+}
+
+int
+mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+{
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = dev->data->port_id;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	struct mlx5_hw_ctrl_flow *cf;
+	struct mlx5_hw_ctrl_flow *cf_next;
+	int ret;
+
+	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
+	if (ret) {
+		DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+			     "port must be present for default SQ miss flow rules to exist.",
+			     port_id);
+		return ret;
+	}
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->dr_ctx)
+		return 0;
+	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_esw_sq_miss_tbl)
+		return 0;
+	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	while (cf != NULL) {
+		cf_next = LIST_NEXT(cf, next);
+		if (flow_hw_is_matching_sq_miss_flow(cf, dev, sqn)) {
+			claim_zero(flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow));
+			LIST_REMOVE(cf, next);
+			mlx5_free(cf);
+		}
+		cf = cf_next;
+	}
+	return 0;
+}
+
 int
 mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 {
@@ -8820,6 +8884,9 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
			.type = RTE_FLOW_ACTION_TYPE_END,
		}
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_JUMP,
+	};
	struct rte_eth_dev *proxy_dev;
	struct mlx5_priv *proxy_priv;
	uint16_t proxy_port_id = dev->data->port_id;
@@ -8850,7 +8917,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
	}
	return flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_zero_tbl,
-					items, 0, actions, 0);
+					items, 0, actions, 0, &flow_info);
 }

 int
@@ -8896,13 +8963,16 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
			.type = RTE_FLOW_ACTION_TYPE_END,
		},
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_TX_META_COPY,
+	};

	MLX5_ASSERT(priv->master);
	if (!priv->dr_ctx ||
	    !priv->hw_tx_meta_cpy_tbl)
		return 0;
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_meta_cpy_tbl,
-					eth_all, 0, copy_reg_action, 0);
+					eth_all, 0, copy_reg_action, 0, &flow_info);
 }

 int
@@ -8931,6 +9001,10 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
		{ .type = RTE_FLOW_ACTION_TYPE_END },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_TX_REPR_MATCH,
+		.tx_repr_sq = sqn,
+	};

	/* It is assumed that caller checked for representor matching. */
	MLX5_ASSERT(priv->sh->config.repr_matching);
@@ -8956,7 +9030,7 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
		actions[2].type = RTE_FLOW_ACTION_TYPE_JUMP;
	}
	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_repr_tagging_tbl,
-					items, 0, actions, 0);
+					items, 0, actions, 0, &flow_info);
 }

 static uint32_t
@@ -9071,6 +9145,9 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};

	if (!eth_spec)
		return -EINVAL;
@@ -9084,7 +9161,7 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
	items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
	items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
	/* Without VLAN filtering, only a single flow rule must be created. */
-	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0);
+	return flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info);
 }

 static int
@@ -9100,6 +9177,9 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	unsigned int i;

	if (!eth_spec)
@@ -9122,7 +9202,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
		};

		items[1].spec = &vlan_spec;
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
			return -rte_errno;
	}
	return 0;
@@ -9140,6 +9220,9 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	const struct rte_ether_addr cmp = {
		.addr_bytes = "\x00\x00\x00\x00\x00\x00",
	};
@@ -9163,7 +9246,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
		if (!memcmp(mac, &cmp, sizeof(*mac)))
			continue;
		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info))
			return -rte_errno;
	}
	return 0;
@@ -9182,6 +9265,9 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
+	struct mlx5_hw_ctrl_flow_info flow_info = {
+		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS,
+	};
	const struct rte_ether_addr cmp = {
		.addr_bytes = "\x00\x00\x00\x00\x00\x00",
	};
@@ -9213,7 +9299,8 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
		};

		items[1].spec = &vlan_spec;
-		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
+		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
+					     &flow_info))
			return -rte_errno;
	}
 }
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 5543f2c570..8c48e7e2a8 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1310,8 +1310,16 @@ rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
		return -rte_errno;
	}
 #ifdef HAVE_MLX5_HWS_SUPPORT
-	if (priv->sh->config.dv_flow_en == 2)
-		return mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num);
+	if (priv->sh->config.dv_flow_en == 2) {
+		if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num))
+			return -rte_errno;
+		if (priv->sh->config.repr_matching &&
+		    mlx5_flow_hw_tx_repr_matching_flow(dev, sq_num)) {
+			mlx5_flow_hw_esw_destroy_sq_miss_flow(dev, sq_num);
+			return -rte_errno;
+		}
+		return 0;
+	}
 #endif
	flow = mlx5_flow_create_devx_sq_miss_flow(dev, sq_num);
	if (flow > 0)
--
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2023-12-11 17:56:25.900025600 +0800
+++ 0087-net-mlx5-fix-missing-flow-rules-for-external-SQ.patch	2023-12-11 17:56:23.177652300 +0800
@@ -1 +1 @@
-From 86f2907c2ab6977980131f848e79f3ca05250279 Mon Sep 17 00:00:00 2001
+From ca79cce293c5d090f36797b7efa8b00e54790ba0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 86f2907c2ab6977980131f848e79f3ca05250279 ]
@@ -32 +34,0 @@
-Cc: stable@dpdk.org
@@ -43 +45 @@
-index f5eacb2c67..45ad0701f1 100644
+index fa8931e8b5..8a46ba90b0 100644
@@ -46 +48 @@
-@@ -1705,10 +1705,50 @@ struct mlx5_obj_ops {
+@@ -1639,10 +1639,50 @@ struct mlx5_obj_ops {
@@ -96 +98 @@
- /*
+ struct mlx5_flow_hw_ctrl_rx;
@@ -98 +100 @@
-index 094be12715..d57b3b5465 100644
+index 52edc4c961..f03734f991 100644
@@ -101 +103 @@
-@@ -2875,6 +2875,8 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
+@@ -2582,6 +2582,8 @@ int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
@@ -111 +113 @@
-index f57126e2ff..d512889682 100644
+index 102f67a925..3f3ab4859b 100644
@@ -114 +116 @@
-@@ -11341,6 +11341,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
+@@ -8437,6 +8437,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
@@ -123 +125 @@
-@@ -11352,7 +11354,8 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
+@@ -8448,7 +8450,8 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
@@ -133 +135 @@
-@@ -11399,6 +11402,10 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
+@@ -8495,6 +8498,10 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev,
@@ -144 +146 @@
-@@ -11602,6 +11609,10 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8698,6 +8705,10 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -155 +157 @@
-@@ -11657,7 +11668,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8753,7 +8764,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -164 +166 @@
-@@ -11686,8 +11697,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8782,8 +8793,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -175 +177 @@
-@@ -11696,6 +11708,58 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8792,6 +8804,58 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -234 +236 @@
-@@ -11724,6 +11788,9 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -8820,6 +8884,9 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -244 +246 @@
-@@ -11754,7 +11821,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -8850,7 +8917,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -253 +255 @@
-@@ -11800,13 +11867,16 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
+@@ -8896,13 +8963,16 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
@@ -271 +273 @@
-@@ -11835,6 +11905,10 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8931,6 +9001,10 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -282 +284 @@
-@@ -11860,7 +11934,7 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -8956,7 +9030,7 @@ mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -291 +293 @@
-@@ -11975,6 +12049,9 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
+@@ -9071,6 +9145,9 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
@@ -301 +303 @@
-@@ -11988,7 +12065,7 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
+@@ -9084,7 +9161,7 @@ __flow_hw_ctrl_flows_single(struct rte_eth_dev *dev,
@@ -310 +312 @@
-@@ -12004,6 +12081,9 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
+@@ -9100,6 +9177,9 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
@@ -320 +322 @@
-@@ -12026,7 +12106,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
+@@ -9122,7 +9202,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
@@ -329 +331 @@
-@@ -12044,6 +12124,9 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
+@@ -9140,6 +9220,9 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
@@ -339 +341 @@
-@@ -12067,7 +12150,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
+@@ -9163,7 +9246,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
@@ -342 +344 @@
-		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
@@ -348 +350 @@
-@@ -12086,6 +12169,9 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
+@@ -9182,6 +9265,9 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
@@ -358 +360 @@
-@@ -12117,7 +12203,8 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
+@@ -9213,7 +9299,8 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
@@ -369 +371 @@
-index b584055fa8..ccdf2ffb14 100644
+index 5543f2c570..8c48e7e2a8 100644