From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad
CC:
Subject: [PATCH 03/10] net/mlx5: rework creation of unicast flow rules
Date: Thu, 17 Oct 2024 09:57:31 +0200
Message-ID: <20241017075738.190064-4-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20241017075738.190064-1-dsosnowski@nvidia.com>
References: <20241017075738.190064-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Rework the code responsible for creation of unicast control flow rules
to allow creation of:

- unicast DMAC flow rules and
- unicast DMAC with VLAN flow rules

outside of mlx5_traffic_enable(), which is called when the port is started.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/meson.build          |   1 +
 drivers/net/mlx5/mlx5_flow.h          |   9 ++
 drivers/net/mlx5/mlx5_flow_hw.c       | 215 ++++++++++++++++++++------
 drivers/net/mlx5/mlx5_flow_hw_stubs.c |  41 +++++
 4 files changed, 219 insertions(+), 47 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_flow_hw_stubs.c

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index eb5eb2cce7..0114673491 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
         'mlx5_flow_dv.c',
         'mlx5_flow_aso.c',
         'mlx5_flow_flex.c',
+        'mlx5_flow_hw_stubs.c',
         'mlx5_mac.c',
         'mlx5_rss.c',
         'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 86a1476879..2ff0b25d4d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2990,6 +2990,15 @@ struct mlx5_flow_hw_ctrl_fdb {
 #define MLX5_CTRL_VLAN_FILTER (RTE_BIT32(6))
 
 int mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags);
+
+/** Create a control flow rule for matching unicast DMAC (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev, const struct rte_ether_addr *addr);
+
+/** Create a control flow rule for matching unicast DMAC with VLAN (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
+				     const struct rte_ether_addr *addr,
+				     const uint16_t vlan);
+
 void mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev);
 
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f6918825eb..afc9778b97 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -15896,12 +15896,14 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 }
 
 static int
-__flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
-			     struct rte_flow_template_table *tbl,
-			     const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type,
-			     const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+__flow_hw_ctrl_flows_unicast_create(struct rte_eth_dev *dev,
+				    struct rte_flow_template_table *tbl,
+				    const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type,
+				    const struct rte_ether_addr *addr)
 {
-	struct rte_flow_item_eth eth_spec;
+	struct rte_flow_item_eth eth_spec = {
+		.hdr.dst_addr = *addr,
+	};
 	struct rte_flow_item items[5];
 	struct rte_flow_action actions[] = {
 		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
@@ -15909,15 +15911,11 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 	};
 	struct mlx5_hw_ctrl_flow_info flow_info = {
 		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC,
+		.uc = {
+			.dmac = *addr,
+		},
 	};
-	const struct rte_ether_addr cmp = {
-		.addr_bytes = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-	};
-	unsigned int i;
-
-	RTE_SET_USED(pattern_type);
 
-	memset(&eth_spec, 0, sizeof(eth_spec));
 	memset(items, 0, sizeof(items));
 	items[0] = (struct rte_flow_item){
 		.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -15927,28 +15925,47 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 	items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type);
 	items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
 	items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
+
+	if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false))
+		return -rte_errno;
+
+	return 0;
+}
+
+static int
+__flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
+			     struct rte_flow_template_table *tbl,
+			     const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+{
+	unsigned int i;
+	int ret;
+
 	for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) {
 		struct rte_ether_addr *mac = &dev->data->mac_addrs[i];
 
-		if (!memcmp(mac, &cmp, sizeof(*mac)))
+		if (rte_is_zero_ether_addr(mac))
 			continue;
-		eth_spec.hdr.dst_addr = *mac;
-		flow_info.uc.dmac = *mac;
-		if (flow_hw_create_ctrl_flow(dev, dev,
-					     tbl, items, 0, actions, 0, &flow_info, false))
-			return -rte_errno;
+
+		ret = __flow_hw_ctrl_flows_unicast_create(dev, tbl, rss_type, mac);
+		if (ret < 0)
+			return ret;
 	}
 	return 0;
 }
 
 static int
-__flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
-				  struct rte_flow_template_table *tbl,
-				  const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type,
-				  const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow_item_eth eth_spec;
+__flow_hw_ctrl_flows_unicast_vlan_create(struct rte_eth_dev *dev,
+					 struct rte_flow_template_table *tbl,
+					 const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type,
+					 const struct rte_ether_addr *addr,
+					 const uint16_t vid)
+{
+	struct rte_flow_item_eth eth_spec = {
+		.hdr.dst_addr = *addr,
+	};
+	struct rte_flow_item_vlan vlan_spec = {
+		.tci = rte_cpu_to_be_16(vid),
+	};
 	struct rte_flow_item items[5];
 	struct rte_flow_action actions[] = {
 		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
@@ -15956,43 +15973,54 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 	};
 	struct mlx5_hw_ctrl_flow_info flow_info = {
 		.type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC_VLAN,
+		.uc = {
+			.dmac = *addr,
+			.vlan = vid,
+		},
 	};
-	const struct rte_ether_addr cmp = {
-		.addr_bytes = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-	};
-	unsigned int i;
-	unsigned int j;
-
-	RTE_SET_USED(pattern_type);
 
-	memset(&eth_spec, 0, sizeof(eth_spec));
 	memset(items, 0, sizeof(items));
 	items[0] = (struct rte_flow_item){
 		.type = RTE_FLOW_ITEM_TYPE_ETH,
 		.spec = &eth_spec,
 	};
-	items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VLAN };
+	items[1] = (struct rte_flow_item){
+		.type = RTE_FLOW_ITEM_TYPE_VLAN,
+		.spec = &vlan_spec,
+	};
 	items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type);
 	items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
 	items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
+
+	if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false))
+		return -rte_errno;
+
+	return 0;
+}
+
+static int
+__flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
+				  struct rte_flow_template_table *tbl,
+				  const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int i;
+	unsigned int j;
+
 	for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) {
 		struct rte_ether_addr *mac = &dev->data->mac_addrs[i];
 
-		if (!memcmp(mac, &cmp, sizeof(*mac)))
+		if (rte_is_zero_ether_addr(mac))
 			continue;
-		eth_spec.hdr.dst_addr = *mac;
-		flow_info.uc.dmac = *mac;
+
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
-			struct rte_flow_item_vlan vlan_spec = {
-				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
-			};
+			int ret;
 
-			flow_info.uc.vlan = vlan;
-			items[1].spec = &vlan_spec;
-			if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
-						     &flow_info, false))
-				return -rte_errno;
+			ret = __flow_hw_ctrl_flows_unicast_vlan_create(dev, tbl, rss_type,
+								       mac, vlan);
+			if (ret < 0)
+				return ret;
 		}
 	}
 	return 0;
@@ -16016,9 +16044,9 @@ __flow_hw_ctrl_flows(struct rte_eth_dev *dev,
 	case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN:
 		return __flow_hw_ctrl_flows_single_vlan(dev, tbl, pattern_type, rss_type);
 	case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC:
-		return __flow_hw_ctrl_flows_unicast(dev, tbl, pattern_type, rss_type);
+		return __flow_hw_ctrl_flows_unicast(dev, tbl, rss_type);
 	case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN:
-		return __flow_hw_ctrl_flows_unicast_vlan(dev, tbl, pattern_type, rss_type);
+		return __flow_hw_ctrl_flows_unicast_vlan(dev, tbl, rss_type);
 	default:
 		/* Should not reach here. */
 		MLX5_ASSERT(false);
@@ -16099,6 +16127,99 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
 	return 0;
 }
 
+static int
+mlx5_flow_hw_ctrl_flow_single(struct rte_eth_dev *dev,
+			      const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type,
+			      const struct rte_ether_addr *addr,
+			      const uint16_t vlan)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
+	unsigned int j;
+	int ret = 0;
+
+	if (!priv->dr_ctx) {
+		DRV_LOG(DEBUG, "port %u Control flow rules will not be created. "
+			       "HWS needs to be configured beforehand.",
+			       dev->data->port_id);
+		return 0;
+	}
+	if (!priv->hw_ctrl_rx) {
+		DRV_LOG(ERR, "port %u Control flow rules templates were not created.",
+			dev->data->port_id);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	hw_ctrl_rx = priv->hw_ctrl_rx;
+
+	/* TODO: this part should be somehow refactored. It's common with common flow creation. */
+	for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) {
+		const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type = j;
+		const unsigned int pti = eth_pattern_type;
+		struct rte_flow_actions_template *at;
+		struct mlx5_flow_hw_ctrl_rx_table *tmpls = &hw_ctrl_rx->tables[pti][j];
+		const struct mlx5_flow_template_table_cfg cfg = {
+			.attr = tmpls->attr,
+			.external = 0,
+		};
+
+		if (!hw_ctrl_rx->rss[rss_type]) {
+			at = flow_hw_create_ctrl_rx_rss_template(dev, rss_type);
+			if (!at)
+				return -rte_errno;
+			hw_ctrl_rx->rss[rss_type] = at;
+		} else {
+			at = hw_ctrl_rx->rss[rss_type];
+		}
+		if (!rss_type_is_requested(priv, rss_type))
+			continue;
+		if (!tmpls->tbl) {
+			tmpls->tbl = flow_hw_table_create(dev, &cfg,
+							  &tmpls->pt, 1, &at, 1, NULL);
+			if (!tmpls->tbl) {
+				DRV_LOG(ERR, "port %u Failed to create template table "
+					     "for control flow rules. Unable to create "
+					     "control flow rules.",
+					     dev->data->port_id);
+				return -rte_errno;
+			}
+		}
+
+		MLX5_ASSERT(eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC ||
+			    eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN);
+
+		if (eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC)
+			ret = __flow_hw_ctrl_flows_unicast_create(dev, tmpls->tbl, rss_type, addr);
+		else
+			ret = __flow_hw_ctrl_flows_unicast_vlan_create(dev, tmpls->tbl, rss_type,
+								       addr, vlan);
+		if (ret) {
+			DRV_LOG(ERR, "port %u Failed to create unicast control flow rule.",
+				dev->data->port_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+int
+mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev,
+			    const struct rte_ether_addr *addr)
+{
+	return mlx5_flow_hw_ctrl_flow_single(dev, MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC,
+					     addr, 0);
+}
+
+int
+mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
+				 const struct rte_ether_addr *addr,
+				 const uint16_t vlan)
+{
+	return mlx5_flow_hw_ctrl_flow_single(dev, MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN,
+					     addr, vlan);
+}
+
 static __rte_always_inline uint32_t
 mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
 {
diff --git a/drivers/net/mlx5/mlx5_flow_hw_stubs.c b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
new file mode 100644
index 0000000000..985c046056
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+/**
+ * @file
+ *
+ * mlx5_flow_hw.c source file is included in the build only on Linux.
+ * Functions defined there are compiled if and only if available rdma-core supports DV.
+ *
+ * This file contains stubs (through weak linking) for any functions exported from that file.
+ */
+
+#include "mlx5_flow.h"
+
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev __rte_unused,
+			    const struct rte_ether_addr *addr __rte_unused)
+{
+	rte_errno = ENOTSUP;
+	return -rte_errno;
+}
+
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev __rte_unused,
+				 const struct rte_ether_addr *addr __rte_unused,
+				 const uint16_t vlan __rte_unused)
+{
+	rte_errno = ENOTSUP;
+	return -rte_errno;
+}
-- 
2.39.5