From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad
CC:
Subject: [PATCH v2 03/10] net/mlx5: rework creation of unicast flow rules
Date: Tue, 22 Oct 2024 14:06:11 +0200
Message-ID: <20241022120618.512091-4-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20241022120618.512091-1-dsosnowski@nvidia.com>
References: <20241017075738.190064-1-dsosnowski@nvidia.com>
 <20241022120618.512091-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Rework the code responsible for creation of unicast control flow rules,
to allow creation of:

- unicast DMAC flow rules and
- unicast DMAC with VLAN flow rules,

outside of mlx5_traffic_enable(), which is called when the port is started.

Signed-off-by: Dariusz Sosnowski
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/meson.build          |   1 +
 drivers/net/mlx5/mlx5_flow.h          |   9 ++
 drivers/net/mlx5/mlx5_flow_hw.c       | 215 ++++++++++++++++++++------
 drivers/net/mlx5/mlx5_flow_hw_stubs.c |  41 +++++
 4 files changed, 219 insertions(+), 47 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_flow_hw_stubs.c

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index eb5eb2cce7..0114673491 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
         'mlx5_flow_dv.c',
         'mlx5_flow_aso.c',
         'mlx5_flow_flex.c',
+        'mlx5_flow_hw_stubs.c',
         'mlx5_mac.c',
         'mlx5_rss.c',
         'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 86a1476879..2ff0b25d4d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2990,6 +2990,15 @@ struct mlx5_flow_hw_ctrl_fdb {
 #define MLX5_CTRL_VLAN_FILTER (RTE_BIT32(6))

 int mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags);
+
+/** Create a control flow rule for matching unicast DMAC (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev, const struct rte_ether_addr *addr);
+
+/** Create a control flow rule for matching unicast DMAC with VLAN (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
+                                     const struct rte_ether_addr *addr,
+                                     const uint16_t vlan);
+
 void mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev);

 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index fbc56497ae..d573cb5640 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -15894,12 +15894,14 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 }

 static int
-__flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
-                             struct rte_flow_template_table *tbl,
-                             const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type,
-                             const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+__flow_hw_ctrl_flows_unicast_create(struct rte_eth_dev *dev,
+                                    struct rte_flow_template_table *tbl,
+                                    const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type,
+                                    const struct rte_ether_addr *addr)
 {
-        struct rte_flow_item_eth eth_spec;
+        struct rte_flow_item_eth eth_spec = {
+                .hdr.dst_addr = *addr,
+        };
         struct rte_flow_item items[5];
         struct rte_flow_action actions[] = {
                 { .type = RTE_FLOW_ACTION_TYPE_RSS },
@@ -15907,15 +15909,11 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
         };
         struct mlx5_hw_ctrl_flow_info flow_info = {
                 .type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC,
+                .uc = {
+                        .dmac = *addr,
+                },
         };
-        const struct rte_ether_addr cmp = {
-                .addr_bytes = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-        };
-        unsigned int i;
-
-        RTE_SET_USED(pattern_type);
-        memset(&eth_spec, 0, sizeof(eth_spec));
         memset(items, 0, sizeof(items));
         items[0] = (struct rte_flow_item){
                 .type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -15925,28 +15923,47 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
         items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type);
         items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
         items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
+
+        if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
+                                     &flow_info, false))
+                return -rte_errno;
+
+        return 0;
+}
+
+static int
+__flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
+                             struct rte_flow_template_table *tbl,
+                             const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+{
+        unsigned int i;
+        int ret;
+
         for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) {
                 struct rte_ether_addr *mac = &dev->data->mac_addrs[i];

-                if (!memcmp(mac, &cmp, sizeof(*mac)))
+                if (rte_is_zero_ether_addr(mac))
                         continue;
-                eth_spec.hdr.dst_addr = *mac;
-                flow_info.uc.dmac = *mac;
-                if (flow_hw_create_ctrl_flow(dev, dev,
-                                             tbl, items, 0, actions, 0, &flow_info, false))
-                        return -rte_errno;
+
+                ret = __flow_hw_ctrl_flows_unicast_create(dev, tbl, rss_type, mac);
+                if (ret < 0)
+                        return ret;
         }
         return 0;
 }

 static int
-__flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
-                                  struct rte_flow_template_table *tbl,
-                                  const enum mlx5_flow_ctrl_rx_eth_pattern_type pattern_type,
-                                  const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
-{
-        struct mlx5_priv *priv = dev->data->dev_private;
-        struct rte_flow_item_eth eth_spec;
+__flow_hw_ctrl_flows_unicast_vlan_create(struct rte_eth_dev *dev,
+                                         struct rte_flow_template_table *tbl,
+                                         const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type,
+                                         const struct rte_ether_addr *addr,
+                                         const uint16_t vid)
+{
+        struct rte_flow_item_eth eth_spec = {
+                .hdr.dst_addr = *addr,
+        };
+        struct rte_flow_item_vlan vlan_spec = {
+                .tci = rte_cpu_to_be_16(vid),
+        };
         struct rte_flow_item items[5];
         struct rte_flow_action actions[] = {
                 { .type = RTE_FLOW_ACTION_TYPE_RSS },
@@ -15954,43 +15971,54 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
         };
         struct mlx5_hw_ctrl_flow_info flow_info = {
                 .type = MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC_VLAN,
+                .uc = {
+                        .dmac = *addr,
+                        .vlan = vid,
+                },
         };
-        const struct rte_ether_addr cmp = {
-                .addr_bytes = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
-        };
-        unsigned int i;
-        unsigned int j;
-
-        RTE_SET_USED(pattern_type);
-        memset(&eth_spec, 0, sizeof(eth_spec));
         memset(items, 0, sizeof(items));
         items[0] = (struct rte_flow_item){
                 .type = RTE_FLOW_ITEM_TYPE_ETH,
                 .spec = &eth_spec,
         };
-        items[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_VLAN };
+        items[1] = (struct rte_flow_item){
+                .type = RTE_FLOW_ITEM_TYPE_VLAN,
+                .spec = &vlan_spec,
+        };
         items[2] = flow_hw_get_ctrl_rx_l3_item(rss_type);
         items[3] = flow_hw_get_ctrl_rx_l4_item(rss_type);
         items[4] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
+
+        if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0, &flow_info, false))
+                return -rte_errno;
+
+        return 0;
+}
+
+static int
+__flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
+                                  struct rte_flow_template_table *tbl,
+                                  const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+        unsigned int i;
+        unsigned int j;
+
         for (i = 0; i < MLX5_MAX_MAC_ADDRESSES; ++i) {
                 struct rte_ether_addr *mac = &dev->data->mac_addrs[i];

-                if (!memcmp(mac, &cmp, sizeof(*mac)))
+                if (rte_is_zero_ether_addr(mac))
                         continue;
-                eth_spec.hdr.dst_addr = *mac;
-                flow_info.uc.dmac = *mac;
+
                 for (j = 0; j < priv->vlan_filter_n; ++j) {
                         uint16_t vlan = priv->vlan_filter[j];
-                        struct rte_flow_item_vlan vlan_spec = {
-                                .hdr.vlan_tci = rte_cpu_to_be_16(vlan),
-                        };
+                        int ret;

-                        flow_info.uc.vlan = vlan;
-                        items[1].spec = &vlan_spec;
-                        if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0,
-                                                     &flow_info, false))
-                                return -rte_errno;
+                        ret = __flow_hw_ctrl_flows_unicast_vlan_create(dev, tbl, rss_type,
+                                                                       mac, vlan);
+                        if (ret < 0)
+                                return ret;
                 }
         }
         return 0;
@@ -16014,9 +16042,9 @@ __flow_hw_ctrl_flows(struct rte_eth_dev *dev,
         case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_IPV6_MCAST_VLAN:
                 return __flow_hw_ctrl_flows_single_vlan(dev, tbl, pattern_type, rss_type);
         case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC:
-                return __flow_hw_ctrl_flows_unicast(dev, tbl, pattern_type, rss_type);
+                return __flow_hw_ctrl_flows_unicast(dev, tbl, rss_type);
         case MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN:
-                return __flow_hw_ctrl_flows_unicast_vlan(dev, tbl, pattern_type, rss_type);
+                return __flow_hw_ctrl_flows_unicast_vlan(dev, tbl, rss_type);
         default:
                 /* Should not reach here. */
                 MLX5_ASSERT(false);
@@ -16097,6 +16125,99 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
         return 0;
 }

+static int
+mlx5_flow_hw_ctrl_flow_single(struct rte_eth_dev *dev,
+                              const enum mlx5_flow_ctrl_rx_eth_pattern_type eth_pattern_type,
+                              const struct rte_ether_addr *addr,
+                              const uint16_t vlan)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+        struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
+        unsigned int j;
+        int ret = 0;
+
+        if (!priv->dr_ctx) {
+                DRV_LOG(DEBUG, "port %u Control flow rules will not be created. "
+                               "HWS needs to be configured beforehand.",
+                        dev->data->port_id);
+                return 0;
+        }
+        if (!priv->hw_ctrl_rx) {
+                DRV_LOG(ERR, "port %u Control flow rules templates were not created.",
+                        dev->data->port_id);
+                rte_errno = EINVAL;
+                return -rte_errno;
+        }
+        hw_ctrl_rx = priv->hw_ctrl_rx;
+
+        /* TODO: this part should be somehow refactored. It's common with common flow creation. */
+        for (j = 0; j < MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX; ++j) {
+                const enum mlx5_flow_ctrl_rx_expanded_rss_type rss_type = j;
+                const unsigned int pti = eth_pattern_type;
+                struct rte_flow_actions_template *at;
+                struct mlx5_flow_hw_ctrl_rx_table *tmpls = &hw_ctrl_rx->tables[pti][j];
+                const struct mlx5_flow_template_table_cfg cfg = {
+                        .attr = tmpls->attr,
+                        .external = 0,
+                };
+
+                if (!hw_ctrl_rx->rss[rss_type]) {
+                        at = flow_hw_create_ctrl_rx_rss_template(dev, rss_type);
+                        if (!at)
+                                return -rte_errno;
+                        hw_ctrl_rx->rss[rss_type] = at;
+                } else {
+                        at = hw_ctrl_rx->rss[rss_type];
+                }
+                if (!rss_type_is_requested(priv, rss_type))
+                        continue;
+                if (!tmpls->tbl) {
+                        tmpls->tbl = flow_hw_table_create(dev, &cfg,
+                                                          &tmpls->pt, 1, &at, 1, NULL);
+                        if (!tmpls->tbl) {
+                                DRV_LOG(ERR, "port %u Failed to create template table "
+                                             "for control flow rules. Unable to create "
+                                             "control flow rules.",
+                                        dev->data->port_id);
+                                return -rte_errno;
+                        }
+                }
+
+                MLX5_ASSERT(eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC ||
+                            eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN);
+
+                if (eth_pattern_type == MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC)
+                        ret = __flow_hw_ctrl_flows_unicast_create(dev, tmpls->tbl, rss_type, addr);
+                else
+                        ret = __flow_hw_ctrl_flows_unicast_vlan_create(dev, tmpls->tbl, rss_type,
+                                                                       addr, vlan);
+                if (ret) {
+                        DRV_LOG(ERR, "port %u Failed to create unicast control flow rule.",
+                                dev->data->port_id);
+                        return ret;
+                }
+        }
+
+        return 0;
+}
+
+int
+mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev,
+                            const struct rte_ether_addr *addr)
+{
+        return mlx5_flow_hw_ctrl_flow_single(dev, MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC,
+                                             addr, 0);
+}
+
+int
+mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
+                                 const struct rte_ether_addr *addr,
+                                 const uint16_t vlan)
+{
+        return mlx5_flow_hw_ctrl_flow_single(dev, MLX5_FLOW_HW_CTRL_RX_ETH_PATTERN_DMAC_VLAN,
+                                             addr, vlan);
+}
+
 static __rte_always_inline uint32_t
 mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
 {
diff --git a/drivers/net/mlx5/mlx5_flow_hw_stubs.c b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
new file mode 100644
index 0000000000..985c046056
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+/**
+ * @file
+ *
+ * mlx5_flow_hw.c source file is included in the build only on Linux.
+ * Functions defined there are compiled if and only if available rdma-core supports DV.
+ *
+ * This file contains stubs (through weak linking) for any functions exported from that file.
+ */
+
+#include "mlx5_flow.h"
+
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev __rte_unused,
+                            const struct rte_ether_addr *addr __rte_unused)
+{
+        rte_errno = ENOTSUP;
+        return -rte_errno;
+}
+
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev __rte_unused,
+                                 const struct rte_ether_addr *addr __rte_unused,
+                                 const uint16_t vlan __rte_unused)
+{
+        rte_errno = ENOTSUP;
+        return -rte_errno;
+}
-- 
2.39.5