From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani
To: dev@dpdk.org
Cc: Gregory Etelson, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH 1/6] net/mlx5: update NTA rule pattern and actions flags
Date: Sun, 2 Jun 2024 13:28:40 +0300
Message-ID: <20240602102841.196990-1-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

From: Gregory Etelson

Move the pattern flags bitmap into flow_hw_list_create().
Create the actions flags bitmap in flow_hw_list_create() as well.
The PMD now uses the pattern and actions bitmaps for direct queries
instead of iterating over the items and actions arrays.

Signed-off-by: Gregory Etelson
---
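Below the cut, a background sketch of the bitmap-query technique this patch
moves to (not part of the commit; the demo_* names are illustrative
stand-ins, not driver symbols): the actions array is translated once into a
bit mask, and every later question is answered with a single bitwise test
instead of another walk over the array.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-ins for the real MLX5_FLOW_ACTION_* bits. */
    #define DEMO_ACTION_QUEUE (1ULL << 0)
    #define DEMO_ACTION_RSS   (1ULL << 1)
    #define DEMO_ACTION_COUNT (1ULL << 2)

    enum demo_action_type {
            DEMO_TYPE_QUEUE,
            DEMO_TYPE_RSS,
            DEMO_TYPE_COUNT,
            DEMO_TYPE_END,
    };

    /* One pass over the actions list builds the bitmap. */
    static uint64_t
    demo_action_flags_get(const enum demo_action_type *actions)
    {
            uint64_t flags = 0;

            for (; *actions != DEMO_TYPE_END; actions++) {
                    switch (*actions) {
                    case DEMO_TYPE_QUEUE:
                            flags |= DEMO_ACTION_QUEUE;
                            break;
                    case DEMO_TYPE_RSS:
                            flags |= DEMO_ACTION_RSS;
                            break;
                    case DEMO_TYPE_COUNT:
                            flags |= DEMO_ACTION_COUNT;
                            break;
                    default:
                            break;
                    }
            }
            return flags;
    }

    /* Every later query is O(1) instead of an array walk. */
    static bool
    demo_has_fate_action(uint64_t flags)
    {
            return (flags & (DEMO_ACTION_QUEUE | DEMO_ACTION_RSS)) != 0;
    }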
 drivers/net/mlx5/mlx5_flow_hw.c | 147 +++++++++++++++++++++++++++-----
 1 file changed, 126 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d938b5976a..696f675f63 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -568,6 +568,111 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 	return item_flags;
 }
 
+static uint64_t
+flow_hw_action_flags_get(const struct rte_flow_action actions[],
+			 struct rte_flow_error *error)
+{
+	uint64_t action_flags = 0;
+	const struct rte_flow_action *action;
+
+	for (action = actions; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+		int type = (int)action->type;
+		switch (type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			switch (MLX5_INDIRECT_ACTION_TYPE_GET(action->conf)) {
+			case MLX5_INDIRECT_ACTION_TYPE_RSS:
+				goto rss;
+			case MLX5_INDIRECT_ACTION_TYPE_AGE:
+				goto age;
+			case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+				goto count;
+			case MLX5_INDIRECT_ACTION_TYPE_CT:
+				goto ct;
+			case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+				goto meter;
+			default:
+				goto error;
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
+			action_flags |= MLX5_FLOW_ACTION_DROP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			action_flags |= MLX5_FLOW_ACTION_MARK;
+			break;
+		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
+			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
+			break;
+		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
+			action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN;
+			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			action_flags |= MLX5_FLOW_ACTION_JUMP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			action_flags |= MLX5_FLOW_ACTION_QUEUE;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+rss:
+			action_flags |= MLX5_FLOW_ACTION_RSS;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+			action_flags |= MLX5_FLOW_ACTION_ENCAP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			action_flags |= MLX5_FLOW_ACTION_ENCAP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+		case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
+			action_flags |= MLX5_FLOW_ACTION_SEND_TO_KERNEL;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+			break;
+		case RTE_FLOW_ACTION_TYPE_PORT_ID:
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			action_flags |= MLX5_FLOW_ACTION_PORT_ID;
+			break;
+		case RTE_FLOW_ACTION_TYPE_AGE:
+age:
+			action_flags |= MLX5_FLOW_ACTION_AGE;
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+count:
+			action_flags |= MLX5_FLOW_ACTION_COUNT;
+			break;
+		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ct:
+			action_flags |= MLX5_FLOW_ACTION_CT;
+			break;
+		case RTE_FLOW_ACTION_TYPE_METER_MARK:
+meter:
+			action_flags |= MLX5_FLOW_ACTION_METER;
+			break;
+		case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS:
+			action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VOID:
+		case RTE_FLOW_ACTION_TYPE_END:
+			break;
+		default:
+			goto error;
+		}
+	}
+	return action_flags;
+error:
+	rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+			   action, "invalid flow action");
+	return 0;
+}
+
 /**
  * Register destination table DR jump action.
  *
@@ -12339,21 +12444,20 @@ flow_hw_encap_decap_resource_register
 static int
 flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_action actions[],
-			  struct rte_flow_hw *flow,
-			  struct mlx5_flow_hw_action_params *ap,
-			  struct mlx5_hw_actions *hw_acts,
-			  uint64_t item_flags,
-			  bool external,
-			  struct rte_flow_error *error)
+			       const struct rte_flow_attr *attr,
+			       const struct rte_flow_action actions[],
+			       struct rte_flow_hw *flow,
+			       struct mlx5_flow_hw_action_params *ap,
+			       struct mlx5_hw_actions *hw_acts,
+			       uint64_t item_flags, uint64_t action_flags,
+			       bool external,
+			       struct rte_flow_error *error)
 {
 	int ret = 0;
 	uint32_t src_group = 0;
 	enum mlx5dr_table_type table_type;
 	struct rte_flow_template_table *table = NULL;
 	struct mlx5_flow_group grp;
-	uint64_t action_flags = 0;
 	struct rte_flow_actions_template *at = NULL;
 	struct rte_flow_actions_template_attr template_attr = {
 		.egress = attr->egress,
@@ -12631,14 +12735,13 @@ static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
  */
-static int flow_hw_create_flow(struct rte_eth_dev *dev,
-			       enum mlx5_flow_type type,
-			       const struct rte_flow_attr *attr,
-			       const struct rte_flow_item items[],
-			       const struct rte_flow_action actions[],
-			       bool external,
-			       struct rte_flow_hw **flow,
-			       struct rte_flow_error *error)
+static int
+flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    uint64_t item_flags, uint64_t action_flags, bool external,
+		    struct rte_flow_hw **flow, struct rte_flow_error *error)
 {
 	int ret;
 	struct mlx5_hw_actions hw_act;
@@ -12668,8 +12771,6 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev,
 		.tbl_type = 0,
 	};
 
-	uint64_t item_flags = 0;
-
 	memset(&hw_act, 0, sizeof(hw_act));
 	if (attr->transfer)
 		tbl_type = MLX5DR_TABLE_TYPE_FDB;
@@ -12710,7 +12811,7 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev,
 	/* Note: the actions should be saved in the sub-flow rule itself for reference. */
 	ret = flow_hw_translate_flow_actions(dev, attr, actions, *flow, &ap, &hw_act,
-					     item_flags, external, error);
+					     item_flags, action_flags, external, error);
 	if (ret)
 		goto error;
 
@@ -12848,11 +12949,15 @@ static uintptr_t flow_hw_list_create(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct rte_flow_hw *flow = NULL;
+	uint64_t item_flags = flow_hw_matching_item_flags_get(items);
+	uint64_t action_flags = flow_hw_action_flags_get(actions, error);
 
 	/*TODO: Handle split/expand to num_flows. */
 	/* Create single flow. */
-	ret = flow_hw_create_flow(dev, type, attr, items, actions, external, &flow, error);
+	ret = flow_hw_create_flow(dev, type, attr, items, actions,
+				  item_flags, action_flags,
+				  external, &flow, error);
 	if (ret)
 		goto free;
 	if (flow)
-- 
2.25.1

>From 3b5691b2cc6b498d2cf92ab1f71b054fea477492 Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Tue, 12 Dec 2023 20:02:54 +0200
Subject: [PATCH 2/6] net/mlx5: support RSS expansion in non-template HWS setup

The mlx5 PMD expands flow rules that carry the RSS action in the
non-template environment. This patch adds the same RSS flow rule
expansion for legacy flow rules in the template (HWS) setup.

Signed-off-by: Gregory Etelson
---
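Below the cut, a compact sketch of the expansion shape this patch implements
(not part of the commit; demo_* names are illustrative stand-ins): the
original rule jumps to a dedicated group, where one rule per packet type
hashes only the fields that packet type can carry, and a lowest-priority
miss rule catches everything else.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for RTE_PTYPE_* and RTE_ETH_RSS_* bits. */
    #define DEMO_PTYPE_IPV4 0x1u
    #define DEMO_PTYPE_IPV6 0x2u
    #define DEMO_RSS_IPV4 (1ULL << 0)
    #define DEMO_RSS_IPV6 (1ULL << 1)

    /* Expand an RSS request with no L3 item into per-ptype rules:
     * each expanded rule matches one L3 ptype and hashes only the
     * RSS types that ptype can satisfy. */
    static void
    demo_expand_l3(uint64_t rss_types)
    {
            if (rss_types & DEMO_RSS_IPV4)
                    printf("rule: match ptype %#x hash %#llx\n",
                           DEMO_PTYPE_IPV4,
                           (unsigned long long)(rss_types & ~DEMO_RSS_IPV6));
            if (rss_types & DEMO_RSS_IPV6)
                    printf("rule: match ptype %#x hash %#llx\n",
                           DEMO_PTYPE_IPV6,
                           (unsigned long long)(rss_types & ~DEMO_RSS_IPV4));
    }

    int
    main(void)
    {
            /* Two expanded rules, one per L3 ptype. */
            demo_expand_l3(DEMO_RSS_IPV4 | DEMO_RSS_IPV6);
            return 0;
    }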
 drivers/net/mlx5/hws/mlx5dr_definer.c |   2 +
 drivers/net/mlx5/meson.build          |   1 +
 drivers/net/mlx5/mlx5.c               |   4 +
 drivers/net/mlx5/mlx5.h               |   6 +-
 drivers/net/mlx5/mlx5_flow.h          |  36 +-
 drivers/net/mlx5/mlx5_flow_hw.c       |  65 +--
 drivers/net/mlx5/mlx5_nta_rss.c       | 564 ++++++++++++++++++++++++++
 7 files changed, 646 insertions(+), 32 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_nta_rss.c

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 4d297352a6..29046ee875 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -381,6 +381,8 @@ mlx5dr_definer_ptype_l4_set(struct mlx5dr_definer_fc *fc,
 		l4_type = STE_UDP;
 	else if (packet_type == (inner ? RTE_PTYPE_INNER_L4_ICMP : RTE_PTYPE_L4_ICMP))
 		l4_type = STE_ICMP;
+	else if (packet_type == RTE_PTYPE_TUNNEL_ESP)
+		l4_type = STE_ESP;
 
 	DR_SET(tag, l4_type, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index d705fe21bb..b279ddf47c 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -42,6 +42,7 @@ sources = files(
         'mlx5_vlan.c',
         'mlx5_utils.c',
         'mlx5_devx.c',
+        'mlx5_nta_rss.c',
 )
 
 if is_linux
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index d15302d00d..5bde450a6d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2365,6 +2365,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		claim_zero(mlx5_geneve_tlv_options_destroy(priv->tlv_options, sh->phdev));
 		priv->tlv_options = NULL;
 	}
+	if (priv->ptype_rss_groups) {
+		mlx5_ipool_destroy(priv->ptype_rss_groups);
+		priv->ptype_rss_groups = NULL;
+	}
 #endif
 	if (priv->rxq_privs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e635907c52..1b55229c52 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1184,6 +1184,10 @@ struct mlx5_flow_tbl_resource {
 #define MLX5_MAX_TABLES_EXTERNAL MLX5_FLOW_TABLE_LEVEL_POLICY
 #define MLX5_FLOW_TABLE_HWS_POLICY (MLX5_MAX_TABLES - 10)
 #define MLX5_MAX_TABLES_FDB UINT16_MAX
+#define MLX5_FLOW_TABLE_PTYPE_RSS_NUM 1024
+#define MLX5_FLOW_TABLE_PTYPE_RSS_LAST (MLX5_MAX_TABLES - 11)
+#define MLX5_FLOW_TABLE_PTYPE_RSS_BASE \
+(1 + MLX5_FLOW_TABLE_PTYPE_RSS_LAST - MLX5_FLOW_TABLE_PTYPE_RSS_NUM)
 #define MLX5_FLOW_TABLE_FACTOR 10
 
 /* ID generation structure. */
@@ -2019,7 +2023,7 @@ struct mlx5_priv {
 	 * Todo: consider to add *_MAX macro.
 	 */
 	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
-
+	struct mlx5_indexed_pool *ptype_rss_groups;
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	RTE_ATOMIC(uint16_t) shared_refcnt; /* HW steering host reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7ccc3cb7cd..7e0f005741 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -484,6 +484,9 @@ enum mlx5_feature_name {
 	RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
 	RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
+/* Valid L4 RSS types */
+#define MLX5_L4_RSS_TYPES (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
+
 /* IBV hash source bits for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
@@ -1313,6 +1316,8 @@ enum {
 
 #define MLX5_DR_RULE_SIZE 72
 
+SLIST_HEAD(mlx5_nta_rss_flow_head, rte_flow_hw);
+
 /** HWS non template flow data. */
 struct rte_flow_nt2hws {
 	/** BWC rule pointer. */
@@ -1325,7 +1330,10 @@ struct rte_flow_nt2hws {
 	struct mlx5_flow_dv_modify_hdr_resource *modify_hdr;
 	/** Encap/decap index. */
 	uint32_t rix_encap_decap;
-};
+	uint8_t chaned_flow;
+	/** Chain NTA flows. */
+	SLIST_ENTRY(rte_flow_hw) next;
+} __rte_packed;
 
 /** HWS flow struct. */
 struct rte_flow_hw {
@@ -3415,7 +3423,6 @@ flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
 #endif
 	return 0;
 }
-
 void
 mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
 #ifdef HAVE_MLX5_HWS_SUPPORT
@@ -3428,5 +3435,30 @@ mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 void
 mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
 			    struct mlx5_indirect_list *reformat);
+int
+flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    uint64_t item_flags, uint64_t action_flags, bool external,
+		    struct rte_flow_hw **flow, struct rte_flow_error *error);
+void
+flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow);
+void
+flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		     uintptr_t flow_idx);
+const struct rte_flow_action_rss *
+flow_nta_locate_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error);
+struct rte_flow_hw *
+flow_nta_handle_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    const struct rte_flow_action_rss *rss_conf,
+		    uint64_t item_flags, uint64_t action_flags,
+		    bool external, enum mlx5_flow_type flow_type,
+		    struct rte_flow_error *error);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 696f675f63..7984bf2f73 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -492,7 +492,7 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
 		fields |= IBV_RX_HASH_IPSEC_SPI;
 	if (rss_inner)
 		fields |= IBV_RX_HASH_INNER;
-	*hash_fields = fields;
+	*hash_fields |= fields;
 }
 
 /**
@@ -755,9 +755,7 @@ flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
 static inline struct mlx5_hrxq*
 flow_hw_tir_action_register(struct rte_eth_dev *dev,
 			    uint32_t hws_flags,
-			    const struct rte_flow_action *action,
-			    uint64_t item_flags,
-			    bool is_template)
+			    const struct rte_flow_action *action)
 {
 	struct mlx5_flow_rss_desc rss_desc = {
 		.hws_flags = hws_flags,
@@ -780,10 +778,7 @@ flow_hw_tir_action_register(struct rte_eth_dev *dev,
 		rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN;
 		rss_desc.types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 		rss_desc.symmetric_hash_function = MLX5_RSS_IS_SYMM(rss->func);
-		if (is_template)
-			flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
-		else
-			flow_dv_hashfields_set(item_flags, &rss_desc, &rss_desc.hash_fields);
+		flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
 		flow_dv_action_rss_l34_hash_adjust(rss->types, &rss_desc.hash_fields);
 		if (rss->level > 1) {
@@ -2508,9 +2503,8 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			    ((const struct rte_flow_action_queue *)
 			     masks->conf)->index) {
 				acts->tir = flow_hw_tir_action_register
-				(dev,
-				 mlx5_hw_act_flag[!!attr->group][type],
-				 actions, 0, true);
+				(dev, mlx5_hw_act_flag[!!attr->group][type],
+				 actions);
 				if (!acts->tir)
 					goto err;
 				acts->rule_acts[dr_pos].action =
@@ -2524,9 +2518,8 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_RSS:
 			if (actions->conf && masks->conf) {
 				acts->tir = flow_hw_tir_action_register
-				(dev,
-				 mlx5_hw_act_flag[!!attr->group][type],
-				 actions, 0, true);
+				(dev, mlx5_hw_act_flag[!!attr->group][type],
+				 actions);
 				if (!acts->tir)
 					goto err;
 				acts->rule_acts[dr_pos].action =
@@ -3413,11 +3406,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			hrxq = flow_hw_tir_action_register(dev,
-					ft_flag,
-					action,
-					item_flags,
-					!flow->nt_rule);
+			hrxq = flow_hw_tir_action_register(dev, ft_flag, action);
 			if (!hrxq)
 				goto error;
 			rule_acts[act_data->action_dst].action = hrxq->action;
@@ -12735,7 +12724,7 @@ static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
  */
-static int
+int
 flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		    const struct rte_flow_attr *attr,
 		    const struct rte_flow_item items[],
@@ -12848,7 +12837,7 @@ flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 }
 #endif
 
-static void
+void
 flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
 {
 	int ret;
@@ -12903,18 +12892,23 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
  * @param[in] flow_addr
  *   Address of flow to destroy.
  */
-static void flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-				 uintptr_t flow_addr)
+void
+flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		     uintptr_t flow_addr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	/* Get flow via idx */
 	struct rte_flow_hw *flow = (struct rte_flow_hw *)flow_addr;
+	struct mlx5_nta_rss_flow_head head = { .slh_first = flow };
 
-	if (!flow)
+	if (flow->nt2hws->chaned_flow)
 		return;
-	flow_hw_destroy(dev, flow);
-	/* Release flow memory by idx */
-	mlx5_ipool_free(priv->flows[type], flow->idx);
+	while (!SLIST_EMPTY(&head)) {
+		flow = SLIST_FIRST(&head);
+		SLIST_REMOVE_HEAD(&head, nt2hws->next);
+		flow_hw_destroy(dev, flow);
+		/* Release flow memory by idx */
+		mlx5_ipool_free(priv->flows[type], flow->idx);
+	}
 }
 #endif
 
@@ -12952,6 +12946,19 @@ static uintptr_t flow_hw_list_create(struct rte_eth_dev *dev,
 	uint64_t item_flags = flow_hw_matching_item_flags_get(items);
 	uint64_t action_flags = flow_hw_action_flags_get(actions, error);
 
+
+	if (action_flags & MLX5_FLOW_ACTION_RSS) {
+		const struct rte_flow_action_rss
+			*rss_conf = flow_nta_locate_rss(dev, actions, error);
+		flow = flow_nta_handle_rss(dev, attr, items, actions, rss_conf,
+					   item_flags, action_flags, external,
+					   type, error);
+		if (flow)
+			return (uintptr_t)flow;
+		if (error->type != RTE_FLOW_ERROR_TYPE_NONE)
+			return 0;
+		/* Fall Through to non-expanded RSS flow */
+	}
 	/*TODO: Handle split/expand to num_flows. */
 	/* Create single flow. */
@@ -13111,7 +13118,7 @@ mirror_format_tir(struct rte_eth_dev *dev,
 	table_type = get_mlx5dr_table_type(&table_cfg->attr.flow_attr);
 	hws_flags = mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_NONE_ROOT][table_type];
-	tir_ctx = flow_hw_tir_action_register(dev, hws_flags, action, 0, true);
+	tir_ctx = flow_hw_tir_action_register(dev, hws_flags, action);
 	if (!tir_ctx)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
diff --git a/drivers/net/mlx5/mlx5_nta_rss.c b/drivers/net/mlx5/mlx5_nta_rss.c
new file mode 100644
index 0000000000..1f0085ff06
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_nta_rss.c
@@ -0,0 +1,564 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+#include
+
+#include
+#include "mlx5.h"
+#include "mlx5_defs.h"
+#include "mlx5_flow.h"
+#include "mlx5_rx.h"
+#include "rte_common.h"
+
+#ifdef HAVE_MLX5_HWS_SUPPORT
+
+struct mlx5_nta_rss_ctx {
+	struct rte_eth_dev *dev;
+	struct rte_flow_attr *attr;
+	struct rte_flow_item *pattern;
+	struct rte_flow_action *actions;
+	const struct rte_flow_action_rss *rss_conf;
+	struct rte_flow_error *error;
+	struct mlx5_nta_rss_flow_head *head;
+	uint64_t pattern_flags;
+	enum mlx5_flow_type flow_type;
+	bool external;
+};
+
+#define MLX5_RSS_PTYPE_ITEM_INDEX 0
+#ifdef MLX5_RSS_PTYPE_DEBUG
+#define MLX5_RSS_PTYPE_ACTION_INDEX 1
+#else
+#define MLX5_RSS_PTYPE_ACTION_INDEX 0
+#endif
+
+#define MLX5_RSS_PTYPE_ITEMS_NUM (MLX5_RSS_PTYPE_ITEM_INDEX + 2)
+#define MLX5_RSS_PTYPE_ACTIONS_NUM (MLX5_RSS_PTYPE_ACTION_INDEX + 2)
+
+static int
+mlx5_nta_ptype_rss_flow_create(struct mlx5_nta_rss_ctx *ctx,
+			       uint32_t ptype, uint64_t rss_type)
+{
+	int ret;
+	struct rte_flow_hw *flow;
+	struct rte_flow_item_ptype *ptype_spec = (void *)(uintptr_t)
+		ctx->pattern[MLX5_RSS_PTYPE_ITEM_INDEX].spec;
+	struct rte_flow_action_rss *rss_conf = (void *)(uintptr_t)
+		ctx->actions[MLX5_RSS_PTYPE_ACTION_INDEX].conf;
+	bool dbg_log = rte_log_can_log(mlx5_logtype, RTE_LOG_DEBUG);
+	uint32_t mark_id = 0;
+#ifdef MLX5_RSS_PTYPE_DEBUG
+	struct rte_flow_action_mark *mark = (void *)(uintptr_t)
+		ctx->actions[MLX5_RSS_PTYPE_ACTION_INDEX - 1].conf;
+
+	/*
+	 * Inner L3 and L4 ptype values are too large for 24bit mark
+	 */
+	mark->id =
+		((ptype & (RTE_PTYPE_INNER_L3_MASK | RTE_PTYPE_INNER_L4_MASK)) == ptype) ?
+		ptype >> 20 : ptype;
+	mark_id = mark->id;
+	dbg_log = true;
+#endif
+	ptype_spec->packet_type = ptype;
+	rss_conf->types = rss_type;
+	ret = flow_hw_create_flow(ctx->dev, MLX5_FLOW_TYPE_GEN, ctx->attr,
+				  ctx->pattern, ctx->actions,
+				  MLX5_FLOW_ITEM_PTYPE, MLX5_FLOW_ACTION_RSS,
+				  ctx->external, &flow, ctx->error);
+	if (flow) {
+		SLIST_INSERT_HEAD(ctx->head, flow, nt2hws->next);
+		if (dbg_log) {
+			DRV_LOG(NOTICE,
+				"PTYPE RSS: group %u ptype spec %#x rss types %#lx mark %#x\n",
+				ctx->attr->group, ptype_spec->packet_type,
+				(unsigned long)rss_conf->types, mark_id);
+		}
+	}
+	return ret;
+}
+
+/*
+ * Call conditions:
+ * * Flow pattern did not include outer L3 and L4 items.
+ * * RSS configuration had L3 hash types.
+ */
+static struct rte_flow_hw *
+mlx5_hw_rss_expand_l3(struct mlx5_nta_rss_ctx *rss_ctx)
+{
+	int ret;
+	int ptype_ip4, ptype_ip6;
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_ctx->rss_conf->types);
+
+	if (rss_ctx->rss_conf->level < 2) {
+		ptype_ip4 = RTE_PTYPE_L3_IPV4;
+		ptype_ip6 = RTE_PTYPE_L3_IPV6;
+	} else {
+		ptype_ip4 = RTE_PTYPE_INNER_L3_IPV4;
+		ptype_ip6 = RTE_PTYPE_INNER_L3_IPV6;
+	}
+	if (rss_types & MLX5_IPV4_LAYER_TYPES) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_ip4, (rss_types & ~MLX5_IPV6_LAYER_TYPES));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & MLX5_IPV6_LAYER_TYPES) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_ip6, rss_types & ~MLX5_IPV4_LAYER_TYPES);
+		if (ret)
+			goto error;
+	}
+	return SLIST_FIRST(rss_ctx->head);
+
+error:
+	flow_hw_list_destroy(rss_ctx->dev, rss_ctx->flow_type,
+			     (uintptr_t)SLIST_FIRST(rss_ctx->head));
+	return NULL;
+}
+
+static void
+mlx5_nta_rss_expand_l3_l4(struct mlx5_nta_rss_ctx *rss_ctx,
+			  uint64_t rss_types, uint64_t rss_l3_types)
+{
+	int ret;
+	int ptype_l3, ptype_l4_udp, ptype_l4_tcp, ptype_l4_esp = 0;
+	uint64_t rss = rss_types &
+		 ~(rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+		  MLX5_IPV6_LAYER_TYPES : MLX5_IPV4_LAYER_TYPES);
+
+
+	if (rss_ctx->rss_conf->level < 2) {
+		ptype_l3 = rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+			   RTE_PTYPE_L3_IPV4 : RTE_PTYPE_L3_IPV6;
+		ptype_l4_esp = RTE_PTYPE_TUNNEL_ESP;
+		ptype_l4_udp = RTE_PTYPE_L4_UDP;
+		ptype_l4_tcp = RTE_PTYPE_L4_TCP;
+	} else {
+		ptype_l3 = rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+			   RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_INNER_L3_IPV6;
+		ptype_l4_udp = RTE_PTYPE_INNER_L4_UDP;
+		ptype_l4_tcp = RTE_PTYPE_INNER_L4_TCP;
+	}
+	if (rss_types & RTE_ETH_RSS_ESP) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_l3 | ptype_l4_esp,
+			rss & ~(RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & RTE_ETH_RSS_UDP) {
+		ret = mlx5_nta_ptype_rss_flow_create(rss_ctx,
+			ptype_l3 | ptype_l4_udp,
+			rss & ~(RTE_ETH_RSS_ESP | RTE_ETH_RSS_TCP));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & RTE_ETH_RSS_TCP) {
+		ret = mlx5_nta_ptype_rss_flow_create(rss_ctx,
+			ptype_l3 | ptype_l4_tcp,
+			rss & ~(RTE_ETH_RSS_ESP | RTE_ETH_RSS_UDP));
+		if (ret)
+			goto error;
+	}
+	return;
+error:
+	flow_hw_list_destroy(rss_ctx->dev, rss_ctx->flow_type,
+			     (uintptr_t)SLIST_FIRST(rss_ctx->head));
+}
+
+/*
+ * Call conditions:
+ * * Flow pattern did not include L4 item.
+ * * RSS configuration had L4 hash types.
+ */
+static struct rte_flow_hw *
+mlx5_hw_rss_expand_l4(struct mlx5_nta_rss_ctx *rss_ctx)
+{
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_ctx->rss_conf->types);
+	uint64_t l3_item = rss_ctx->pattern_flags &
+			   (rss_ctx->rss_conf->level < 2 ?
+			    MLX5_FLOW_LAYER_OUTER_L3 : MLX5_FLOW_LAYER_INNER_L3);
+
+	if (l3_item) {
+		/*
+		 * Outer L3 header was present in the original pattern.
+		 * Expand L4 level only.
+		 */
+		if (l3_item & MLX5_FLOW_LAYER_L3_IPV4)
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types, MLX5_IPV4_LAYER_TYPES);
+		else
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types, MLX5_IPV6_LAYER_TYPES);
+	} else {
+		if (rss_types & (MLX5_IPV4_LAYER_TYPES | MLX5_IPV6_LAYER_TYPES)) {
+			mlx5_hw_rss_expand_l3(rss_ctx);
+			/*
+			 * No outer L3 item in application flow pattern.
+			 * RSS hash types are L3 and L4.
+			 * ** Expand L3 according to RSS configuration and L4.
+			 */
+			if (rss_types & MLX5_IPV4_LAYER_TYPES)
+				mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+							  MLX5_IPV4_LAYER_TYPES);
+			if (rss_types & MLX5_IPV6_LAYER_TYPES)
+				mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+							  MLX5_IPV6_LAYER_TYPES);
+		} else {
+			/*
+			 * No outer L3 item in application flow pattern,
+			 * RSS hash type is L4 only.
+			 */
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+						  MLX5_IPV4_LAYER_TYPES);
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+						  MLX5_IPV6_LAYER_TYPES);
+		}
+	}
+	return SLIST_EMPTY(rss_ctx->head) ? NULL : SLIST_FIRST(rss_ctx->head);
+}
+
+static struct mlx5_indexed_pool *
+mlx5_nta_ptype_ipool_create(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool_config ipool_cfg = {
+		.size = 1,
+		.trunk_size = 32,
+		.grow_trunk = 5,
+		.grow_shift = 1,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.max_idx = MLX5_FLOW_TABLE_PTYPE_RSS_NUM,
+		.free = mlx5_free,
+		.type = "mlx5_nta_ptype_rss"
+	};
+	return mlx5_ipool_create(&ipool_cfg);
+}
+
+static void
+mlx5_hw_release_rss_ptype_group(struct rte_eth_dev *dev, uint32_t group)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv->ptype_rss_groups)
+		return;
+	mlx5_ipool_free(priv->ptype_rss_groups, group);
+}
+
+static uint32_t
+mlx5_hw_get_rss_ptype_group(struct rte_eth_dev *dev)
+{
+	void *obj;
+	uint32_t idx = 0;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv->ptype_rss_groups) {
+		priv->ptype_rss_groups = mlx5_nta_ptype_ipool_create(dev);
+		if (!priv->ptype_rss_groups) {
+			DRV_LOG(DEBUG, "PTYPE RSS: failed to allocate groups pool");
+			return 0;
+		}
+	}
+	obj = mlx5_ipool_malloc(priv->ptype_rss_groups, &idx);
+	if (!obj) {
+		DRV_LOG(DEBUG, "PTYPE RSS: failed to fetch ptype group from the pool");
+		return 0;
+	}
+	return idx + MLX5_FLOW_TABLE_PTYPE_RSS_BASE;
+}
+
+static struct rte_flow_hw *
+mlx5_hw_rss_ptype_create_miss_flow(struct rte_eth_dev *dev,
+				   const struct rte_flow_action_rss *rss_conf,
+				   uint32_t ptype_group, bool external,
+				   struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = NULL;
+	const struct rte_flow_attr miss_attr = {
+		.ingress = 1,
+		.group = ptype_group,
+		.priority = 3
+	};
+	const struct rte_flow_item miss_pattern[2] = {
+		[0] = { .type = RTE_FLOW_ITEM_TYPE_ETH },
+		[1] = { .type = RTE_FLOW_ITEM_TYPE_END }
+	};
+	struct rte_flow_action miss_actions[] = {
+#ifdef MLX5_RSS_PTYPE_DEBUG
+		[MLX5_RSS_PTYPE_ACTION_INDEX - 1] = {
+			.type = RTE_FLOW_ACTION_TYPE_MARK,
+			.conf = &(const struct rte_flow_action_mark){.id = 0xfac}
+		},
+#endif
+		[MLX5_RSS_PTYPE_ACTION_INDEX] = {
+			.type = RTE_FLOW_ACTION_TYPE_RSS,
+			.conf = rss_conf
+		},
+		[MLX5_RSS_PTYPE_ACTION_INDEX + 1] = { .type = RTE_FLOW_ACTION_TYPE_END }
+	};
+
+	flow_hw_create_flow(dev, MLX5_FLOW_TYPE_GEN, &miss_attr,
+			    miss_pattern, miss_actions, 0, MLX5_FLOW_ACTION_RSS,
+			    external, &flow, error);
+	return flow;
+}
+
+static struct rte_flow_hw *
+mlx5_hw_rss_ptype_create_base_flow(struct rte_eth_dev *dev,
+				   const struct rte_flow_attr *attr,
+				   const struct rte_flow_item pattern[],
+				   const struct rte_flow_action orig_actions[],
+				   uint32_t ptype_group, uint64_t item_flags,
+				   uint64_t action_flags, bool external,
+				   enum mlx5_flow_type flow_type,
+				   struct rte_flow_error *error)
+{
+	int i = 0;
+	struct rte_flow_hw *flow = NULL;
+	struct rte_flow_action actions[MLX5_HW_MAX_ACTS];
+	enum mlx5_indirect_type indirect_type;
+
+	do {
+		switch (orig_actions[i].type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			indirect_type = (typeof(indirect_type))
+					MLX5_INDIRECT_ACTION_TYPE_GET
+					(orig_actions[i].conf);
+			if (indirect_type != MLX5_INDIRECT_ACTION_TYPE_RSS) {
+				actions[i] = orig_actions[i];
+				break;
+			}
+			/* Fall through */
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			actions[i].type = RTE_FLOW_ACTION_TYPE_JUMP;
+			actions[i].conf = &(const struct rte_flow_action_jump) {
+				.group = ptype_group
+			};
+			break;
+		default:
+			actions[i] = orig_actions[i];
+		}
+
+	} while (actions[i++].type != RTE_FLOW_ACTION_TYPE_END);
+	action_flags &= ~MLX5_FLOW_ACTION_RSS;
+	action_flags |= MLX5_FLOW_ACTION_JUMP;
+	flow_hw_create_flow(dev, flow_type, attr, pattern, actions,
+			    item_flags, action_flags, external, &flow, error);
+	return flow;
+}
+
+const struct rte_flow_action_rss *
+flow_nta_locate_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error)
+{
+	const struct rte_flow_action *a;
+	const struct rte_flow_action_rss *rss_conf = NULL;
+
+	for (a = actions; a->type != RTE_FLOW_ACTION_TYPE_END; a++) {
+		if (a->type == RTE_FLOW_ACTION_TYPE_RSS) {
+			rss_conf = a->conf;
+			break;
+		}
+		if (a->type == RTE_FLOW_ACTION_TYPE_INDIRECT &&
+		    MLX5_INDIRECT_ACTION_TYPE_GET(a->conf) ==
+		    MLX5_INDIRECT_ACTION_TYPE_RSS) {
+			struct mlx5_priv *priv = dev->data->dev_private;
+			struct mlx5_shared_action_rss *shared_rss;
+			uint32_t handle = (uint32_t)(uintptr_t)a->conf;
+
+			shared_rss = mlx5_ipool_get
+				(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS],
+				 MLX5_INDIRECT_ACTION_IDX_GET(handle));
+			if (!shared_rss) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+						   a->conf, "invalid shared RSS handle");
+				return NULL;
+			}
+			rss_conf = &shared_rss->origin;
+			break;
+		}
+	}
+	if (a->type == RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+		return NULL;
+	}
+	return rss_conf;
+}
+
+static __rte_always_inline void
+mlx5_nta_rss_init_ptype_ctx(struct mlx5_nta_rss_ctx *rss_ctx,
+			    struct rte_eth_dev *dev,
+			    struct rte_flow_attr *ptype_attr,
+			    struct rte_flow_item *ptype_pattern,
+			    struct rte_flow_action *ptype_actions,
+			    const struct rte_flow_action_rss *rss_conf,
+			    struct mlx5_nta_rss_flow_head *head,
+			    struct rte_flow_error *error,
+			    uint64_t item_flags,
+			    enum mlx5_flow_type flow_type, bool external)
+{
+	rss_ctx->dev = dev;
+	rss_ctx->attr = ptype_attr;
+	rss_ctx->pattern = ptype_pattern;
+	rss_ctx->actions = ptype_actions;
+	rss_ctx->rss_conf = rss_conf;
+	rss_ctx->error = error;
+	rss_ctx->head = head;
+	rss_ctx->pattern_flags = item_flags;
+	rss_ctx->flow_type = flow_type;
+	rss_ctx->external = external;
+}
+
+/*
+ * MLX5 HW hashes IPv4 and IPv6 L3 headers and UDP, TCP, ESP L4 headers.
+ * RSS expansion is required when RSS action was configured to hash
+ * network protocol that was not mentioned in flow pattern.
+ *
+ */
+#define MLX5_PTYPE_RSS_OUTER_MASK (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6 | \
+				   RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP | \
+				   RTE_PTYPE_TUNNEL_ESP)
+#define MLX5_PTYPE_RSS_INNER_MASK (RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L3_IPV6 | \
+				   RTE_PTYPE_INNER_L4_TCP | RTE_PTYPE_INNER_L4_UDP)
+
+struct rte_flow_hw *
+flow_nta_handle_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    const struct rte_flow_action_rss *rss_conf,
+		    uint64_t item_flags, uint64_t action_flags,
+		    bool external, enum mlx5_flow_type flow_type,
+		    struct rte_flow_error *error)
+{
+	struct rte_flow_hw *rss_base = NULL, *rss_next = NULL, *rss_miss = NULL;
+	struct rte_flow_action_rss ptype_rss_conf;
+	struct mlx5_nta_rss_ctx rss_ctx;
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_conf->types);
+	bool inner_rss = rss_conf->level > 1;
+	bool outer_rss = !inner_rss;
+	bool l3_item = (outer_rss && (item_flags & MLX5_FLOW_LAYER_OUTER_L3)) ||
+		       (inner_rss && (item_flags & MLX5_FLOW_LAYER_INNER_L3));
+	bool l4_item = (outer_rss && (item_flags & MLX5_FLOW_LAYER_OUTER_L4)) ||
+		       (inner_rss && (item_flags & MLX5_FLOW_LAYER_INNER_L4));
+	bool l3_hash = rss_types & (MLX5_IPV4_LAYER_TYPES | MLX5_IPV6_LAYER_TYPES);
+	bool l4_hash = rss_types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_ESP);
+	struct mlx5_nta_rss_flow_head expansion_head = SLIST_HEAD_INITIALIZER(0);
+	struct rte_flow_attr ptype_attr = {
+		.ingress = 1
+	};
+	struct rte_flow_item_ptype ptype_spec = { .packet_type = 0 };
+	const struct rte_flow_item_ptype ptype_mask = {
+		.packet_type = outer_rss ?
+			MLX5_PTYPE_RSS_OUTER_MASK : MLX5_PTYPE_RSS_INNER_MASK
+	};
+	struct rte_flow_item ptype_pattern[MLX5_RSS_PTYPE_ITEMS_NUM] = {
+		[MLX5_RSS_PTYPE_ITEM_INDEX] = {
+			.type = RTE_FLOW_ITEM_TYPE_PTYPE,
+			.spec = &ptype_spec,
+			.mask = &ptype_mask
+		},
+		[MLX5_RSS_PTYPE_ITEM_INDEX + 1] = { .type = RTE_FLOW_ITEM_TYPE_END }
+	};
+	struct rte_flow_action ptype_actions[MLX5_RSS_PTYPE_ACTIONS_NUM] = {
+#ifdef MLX5_RSS_PTYPE_DEBUG
+		[MLX5_RSS_PTYPE_ACTION_INDEX - 1] = {
+			.type = RTE_FLOW_ACTION_TYPE_MARK,
+			.conf = &(const struct rte_flow_action_mark) {.id = 101}
+		},
+#endif
+		[MLX5_RSS_PTYPE_ACTION_INDEX] = {
+			.type = RTE_FLOW_ACTION_TYPE_RSS,
+			.conf = &ptype_rss_conf
+		},
+		[MLX5_RSS_PTYPE_ACTION_INDEX + 1] = { .type = RTE_FLOW_ACTION_TYPE_END }
+	};
+
+	if (l4_item) {
+		/*
+		 * Original flow pattern extended up to L4 level.
+		 * L4 is the maximal expansion level.
+		 * Original pattern does not need expansion.
+		 */
+		rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+		return NULL;
+	}
+	if (!l4_hash) {
+		if (!l3_hash) {
+			/*
+			 * RSS action was not configured to hash L3 or L4.
+			 * No expansion needed.
+			 */
+			rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+			return NULL;
+		}
+		if (l3_item) {
+			/*
+			 * Original flow pattern extended up to L3 level.
+			 * RSS action was not set for L4 hash.
+			 * Original pattern does not need expansion.
+			 */
+			rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+			return NULL;
+		}
+	}
+	/* Create RSS expansions in dedicated PTYPE flow group */
+	ptype_attr.group = mlx5_hw_get_rss_ptype_group(dev);
+	if (!ptype_attr.group) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   NULL, "cannot get RSS PTYPE group");
+		return NULL;
+	}
+	ptype_rss_conf = *rss_conf;
+	mlx5_nta_rss_init_ptype_ctx(&rss_ctx, dev, &ptype_attr, ptype_pattern,
+				    ptype_actions, rss_conf, &expansion_head,
+				    error, item_flags, flow_type, external);
+	rss_miss = mlx5_hw_rss_ptype_create_miss_flow(dev, rss_conf, ptype_attr.group,
+						      external, error);
+	if (!rss_miss)
+		goto error;
+	if (l4_hash) {
+		rss_next = mlx5_hw_rss_expand_l4(&rss_ctx);
+		if (!rss_next)
+			goto error;
+	} else if (l3_hash) {
+		rss_next = mlx5_hw_rss_expand_l3(&rss_ctx);
+		if (!rss_next)
+			goto error;
+	}
+	rss_base = mlx5_hw_rss_ptype_create_base_flow(dev, attr, items, actions,
+						      ptype_attr.group, item_flags,
+						      action_flags, external,
+						      flow_type, error);
+	if (!rss_base)
+		goto error;
+	SLIST_INSERT_HEAD(&expansion_head, rss_miss, nt2hws->next);
+	SLIST_INSERT_HEAD(&expansion_head, rss_base, nt2hws->next);
+	/**
+	 * PMD must return to application a reference to the base flow.
+	 * This way RSS expansion could work with counter, meter and other
+	 * flow actions.
+	 */
+	MLX5_ASSERT(rss_base == SLIST_FIRST(&expansion_head));
+	rss_next = SLIST_NEXT(rss_base, nt2hws->next);
+	while (rss_next) {
+		rss_next->nt2hws->chaned_flow = 1;
+		rss_next = SLIST_NEXT(rss_next, nt2hws->next);
+	}
+	return SLIST_FIRST(&expansion_head);
+
+error:
+	if (rss_miss)
+		flow_hw_list_destroy(dev, flow_type, (uintptr_t)rss_miss);
+	if (rss_next)
+		flow_hw_list_destroy(dev, flow_type, (uintptr_t)rss_next);
+	mlx5_hw_release_rss_ptype_group(dev, ptype_attr.group);
+	return NULL;
+}
+
+#endif
+
-- 
2.25.1

>From 59b2b036cbec25fa235d13c9ff1e84aa816aea62 Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Tue, 23 Jan 2024 15:13:52 +0200
Subject: [PATCH 3/6] net/mlx5: support indirect actions in non-template setup

Add support for the RSS, AGE, COUNT and CONNTRACK indirect flow
actions for the non-template flow rules.

Signed-off-by: Gregory Etelson
---
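Below the cut, a small sketch of the indirect-type resolution the new mask
builder relies on (not part of the commit; demo_* names are illustrative
stand-ins, and the exact bit layout here is an assumption for the demo):
mlx5 encodes the wrapped action type in the high bits of the indirect
handle, so the mask builder can treat an indirect action as the concrete
action it wraps.

    #include <stdint.h>

    enum demo_type {
            DEMO_TYPE_END,
            DEMO_TYPE_RSS,
            DEMO_TYPE_AGE,
            DEMO_TYPE_COUNT,
            DEMO_TYPE_CT,
    };

    /* Assume the wrapped type lives in the handle's top byte, in the
     * spirit of MLX5_INDIRECT_ACTION_TYPE_GET(). */
    static enum demo_type
    demo_indirect_to_concrete(uint32_t handle)
    {
            switch (handle >> 24) {
            case 1: return DEMO_TYPE_RSS;
            case 2: return DEMO_TYPE_AGE;
            case 3: return DEMO_TYPE_COUNT;
            case 4: return DEMO_TYPE_CT;
            default: return DEMO_TYPE_END; /* unsupported indirect type */
            }
    }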
 drivers/net/mlx5/mlx5_flow_hw.c | 111 +++++++++++++++++++++++++-------
 1 file changed, 89 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7984bf2f73..9f43fbfb35 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12431,6 +12431,91 @@ flow_hw_encap_decap_resource_register
 	return 0;
 }
 
+static enum rte_flow_action_type
+flow_nta_get_indirect_action_type(const struct rte_flow_action *action)
+{
+	switch (MLX5_INDIRECT_ACTION_TYPE_GET(action->conf)) {
+	case MLX5_INDIRECT_ACTION_TYPE_RSS:
+		return RTE_FLOW_ACTION_TYPE_RSS;
+	case MLX5_INDIRECT_ACTION_TYPE_AGE:
+		return RTE_FLOW_ACTION_TYPE_AGE;
+	case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+		return RTE_FLOW_ACTION_TYPE_COUNT;
+	case MLX5_INDIRECT_ACTION_TYPE_CT:
+		return RTE_FLOW_ACTION_TYPE_CONNTRACK;
+	default:
+		break;
+	}
+	return RTE_FLOW_ACTION_TYPE_END;
+}
+
+static void
+flow_nta_set_mh_mask_conf(const struct rte_flow_action_modify_field *action_conf,
+			  struct rte_flow_action_modify_field *mask_conf)
+{
+	memset(mask_conf, 0xff, sizeof(*mask_conf));
+	mask_conf->operation = action_conf->operation;
+	mask_conf->dst.field = action_conf->dst.field;
+	mask_conf->src.field = action_conf->src.field;
+}
+
+union actions_conf {
+	struct rte_flow_action_modify_field modify_field;
+	struct rte_flow_action_raw_encap raw_encap;
+	struct rte_flow_action_vxlan_encap vxlan_encap;
+	struct rte_flow_action_nvgre_encap nvgre_encap;
+};
+
+static int
+flow_nta_build_template_mask(const struct rte_flow_action actions[],
+			     struct rte_flow_action masks[MLX5_HW_MAX_ACTS],
+			     union actions_conf mask_conf[MLX5_HW_MAX_ACTS])
+{
+	int i;
+
+	for (i = 0; i == 0 || actions[i - 1].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		const struct rte_flow_action *action = &actions[i];
+		struct rte_flow_action *mask = &masks[i];
+		union actions_conf *conf = &mask_conf[i];
+
+		mask->type = action->type;
+		switch (action->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			mask->type = flow_nta_get_indirect_action_type(action);
+			if (!mask->type)
+				return -EINVAL;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			flow_nta_set_mh_mask_conf(action->conf, (void *)conf);
+			mask->conf = conf;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			/* This mask will set this action as shared. */
+			memset(conf, 0xff, sizeof(struct rte_flow_action_raw_encap));
+			mask->conf = conf;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+			/* This mask will set this action as shared. */
+			conf->vxlan_encap.definition =
+				((const struct rte_flow_action_vxlan_encap *)
+				 action->conf)->definition;
+			mask->conf = conf;
+			break;
+		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+			/* This mask will set this action as shared. */
+			conf->nvgre_encap.definition =
+				((const struct rte_flow_action_nvgre_encap *)
+				 action->conf)->definition;
+			mask->conf = conf;
+			break;
+		default:
+			break;
+		}
+	}
+	return 0;
+#undef NTA_CHECK_CONF_BUF_SIZE
+}
+
 static int
 flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
 			       const struct rte_flow_attr *attr,
@@ -12454,30 +12539,12 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
 		.transfer = attr->transfer,
 	};
 	struct rte_flow_action masks[MLX5_HW_MAX_ACTS];
-	struct rte_flow_action_raw_encap encap_conf;
-	struct rte_flow_action_modify_field mh_conf[MLX5_HW_MAX_ACTS];
+	union actions_conf mask_conf[MLX5_HW_MAX_ACTS];
 
-	memset(&masks, 0, sizeof(masks));
-	int i = -1;
-	do {
-		i++;
-		masks[i].type = actions[i].type;
-		if (masks[i].type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
-			memset(&encap_conf, 0x00, sizeof(encap_conf));
-			encap_conf.size = ((const struct rte_flow_action_raw_encap *)
-					   (actions[i].conf))->size;
-			masks[i].conf = &encap_conf;
-		}
-		if (masks[i].type == RTE_FLOW_ACTION_TYPE_MODIFY_FIELD) {
-			const struct rte_flow_action_modify_field *conf = actions[i].conf;
-			memset(&mh_conf, 0xff, sizeof(mh_conf[i]));
-			mh_conf[i].operation = conf->operation;
-			mh_conf[i].dst.field = conf->dst.field;
-			mh_conf[i].src.field = conf->src.field;
-			masks[i].conf = &mh_conf[i];
-		}
-	} while (masks[i].type != RTE_FLOW_ACTION_TYPE_END);
 	RTE_SET_USED(action_flags);
+	memset(masks, 0, sizeof(masks));
+	memset(mask_conf, 0, sizeof(mask_conf));
+	flow_nta_build_template_mask(actions, masks, mask_conf);
 	/* The group in the attribute translation was done in advance. */
 	ret = __translate_group(dev, attr, external, attr->group, &src_group, error);
 	if (ret)
-- 
2.25.1

>From a89a0db5a37629073b37ab2e43c59e4c2933b41e Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Wed, 24 Jan 2024 08:08:51 +0200
Subject: [PATCH 4/6] net/mlx5: update ASO resources in non-template setup

The non-template PMD implementation allocates ASO flow action
resources on demand. The previous implementation iterated over the
actions array in search of actions that required ASO resource
allocation. This patch replaces that iteration with action flags
bitmap queries.

Signed-off-by: Gregory Etelson
---
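Below the cut, a background sketch of the bitmap-gated allocation this
patch switches to (not part of the commit; demo_* names are illustrative
stand-ins): each pool is created at most once, and each check is a single
bit test instead of another pass over the actions array.

    #include <stdbool.h>
    #include <stdint.h>

    #define DEMO_ACTION_COUNT (1ULL << 0)
    #define DEMO_ACTION_AGE   (1ULL << 1)

    static bool demo_cnt_pool; /* stand-in for priv->hws_cpool */
    static bool demo_age_pool; /* stand-in for priv->hws_age_req */

    /* Allocate only the pools the rule actually needs. */
    static int
    demo_allocate_actions(uint64_t action_flags)
    {
            if ((action_flags & DEMO_ACTION_COUNT) && !demo_cnt_pool)
                    demo_cnt_pool = true; /* create counter pool on first use */
            if ((action_flags & DEMO_ACTION_AGE) && !demo_age_pool) {
                    if (!demo_cnt_pool)
                            demo_cnt_pool = true; /* AGE rides on counters */
                    demo_age_pool = true;
            }
            return 0;
    }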
 drivers/net/mlx5/mlx5_flow_hw.c | 102 ++++++++++++++------------------
 1 file changed, 45 insertions(+), 57 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9f43fbfb35..f2ed2d8e46 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12671,79 +12671,67 @@ static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 					  NULL, "fail to register matcher");
 }
 
-static int flow_hw_ensure_action_pools_allocated(struct rte_eth_dev *dev,
-					const struct rte_flow_action actions[],
-					struct rte_flow_error *error)
+static int
+flow_hw_allocate_actions(struct rte_eth_dev *dev,
+			 uint64_t action_flags,
+			 struct rte_flow_error *error)
 {
-	bool actions_end = false;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int ret;
 	uint obj_num;
 
-	for (; !actions_end; actions++) {
-		switch ((int)actions->type) {
-		case RTE_FLOW_ACTION_TYPE_AGE:
-			/* If no age objects were previously allocated. */
-			if (!priv->hws_age_req) {
-				/* If no counters were previously allocated. */
-				if (!priv->hws_cpool) {
-					obj_num = MLX5_CNT_NT_MAX(priv);
-					ret = mlx5_hws_cnt_pool_create(dev, obj_num,
-								       priv->nb_queue, NULL);
-					if (ret)
-						goto err;
-				}
-				if (priv->hws_cpool) {
-					/* Allocate same number of counters. */
-					ret = mlx5_hws_age_pool_init(dev,
-								     priv->hws_cpool->cfg.request_num,
-								     priv->nb_queue, false);
-					if (ret)
-						goto err;
-				}
-			}
-			break;
-		case RTE_FLOW_ACTION_TYPE_COUNT:
+	if (action_flags & MLX5_FLOW_ACTION_AGE) {
+		/* If no age objects were previously allocated. */
+		if (!priv->hws_age_req) {
 			/* If no counters were previously allocated. */
 			if (!priv->hws_cpool) {
 				obj_num = MLX5_CNT_NT_MAX(priv);
 				ret = mlx5_hws_cnt_pool_create(dev, obj_num,
-							       priv->nb_queue, NULL);
-				if (ret)
-					goto err;
-			}
-			break;
-		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			/* If no CT were previously allocated. */
-			if (!priv->hws_ctpool) {
-				obj_num = MLX5_CT_NT_MAX(priv);
-				ret = mlx5_flow_ct_init(dev, obj_num, priv->nb_queue);
-				if (ret)
-					goto err;
-			}
-			break;
-		case RTE_FLOW_ACTION_TYPE_METER_MARK:
-			/* If no meters were previously allocated. */
-			if (!priv->hws_mpool) {
-				obj_num = MLX5_MTR_NT_MAX(priv);
-				ret = mlx5_flow_meter_init(dev, obj_num, 0, 0,
-							   priv->nb_queue);
+							       priv->nb_queue, NULL);
 				if (ret)
 					goto err;
 			}
-			break;
-		case RTE_FLOW_ACTION_TYPE_END:
-			actions_end = true;
-			break;
-		default:
-			break;
+			/* Allocate same number of counters. */
+			ret = mlx5_hws_age_pool_init(dev, priv->hws_cpool->cfg.request_num,
+						     priv->nb_queue, false);
+			if (ret)
+				goto err;
+		}
+	}
+	if (action_flags & MLX5_FLOW_ACTION_COUNT) {
+		/* If no counters were previously allocated. */
+		if (!priv->hws_cpool) {
+			obj_num = MLX5_CNT_NT_MAX(priv);
+			ret = mlx5_hws_cnt_pool_create(dev, obj_num,
+						       priv->nb_queue, NULL);
+			if (ret)
+				goto err;
+		}
+	}
+	if (action_flags & MLX5_FLOW_ACTION_CT) {
+		/* If no CT were previously allocated. */
+		if (!priv->hws_ctpool) {
+			obj_num = MLX5_CT_NT_MAX(priv);
+			ret = mlx5_flow_ct_init(dev, obj_num, priv->nb_queue);
+			if (ret)
+				goto err;
+		}
+	}
+	if (action_flags & MLX5_FLOW_ACTION_METER) {
+		/* If no meters were previously allocated. */
+		if (!priv->hws_mpool) {
+			obj_num = MLX5_MTR_NT_MAX(priv);
+			ret = mlx5_flow_meter_init(dev, obj_num, 0, 0,
+						   priv->nb_queue);
+			if (ret)
+				goto err;
 		}
 	}
 	return 0;
 err:
 	return rte_flow_error_set(error, ret,
-				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				  NULL, "fail to allocate actions");
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "fail to allocate actions");
 }
 
 /* TODO: remove dev if not used */
@@ -12861,7 +12849,7 @@ flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	 * The output actions bit mask instead of
 	 * looping on the actions array twice.
 	 */
-	ret = flow_hw_ensure_action_pools_allocated(dev, actions, error);
+	ret = flow_hw_allocate_actions(dev, action_flags, error);
 	if (ret)
 		goto error;
 
-- 
2.25.1

>From 2f1eb0e76903db7e3e69309d6fe66dc1fb9de748 Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Wed, 24 Jan 2024 08:17:18 +0200
Subject: [PATCH 5/6] net/mlx5: update HWS ASO actions validation

HWS ASO action validation required the PMD to allocate resources
during port configuration, before action validation was called.
That approach does not work in the HWS non-template setup, because
the non-template setup has no explicit port configuration procedure
and the port allocates ASO resources on demand.
This patch assumes that if a port has no ASO resources during action
validation, the PMD was configured for non-template operation, and
allocates the missing resources.

Signed-off-by: Gregory Etelson
---
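Below the cut, a background sketch of the validation-time fallback this
patch adds (not part of the commit; demo_* names are illustrative
stand-ins): when the required pool is absent, assume a non-template setup
and try to create the pool on demand before reporting a validation error.

    #include <errno.h>
    #include <stdbool.h>

    static bool demo_age_pool; /* stand-in for priv->hws_age_req */

    /* Stand-in for the on-demand pool allocation path. */
    static int
    demo_allocate_age_pool(void)
    {
            demo_age_pool = true;
            return 0;
    }

    static int
    demo_validate_age_action(void)
    {
            if (!demo_age_pool && demo_allocate_age_pool() != 0)
                    return -EINVAL; /* pool missing and not creatable */
            return 0;
    }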
 drivers/net/mlx5/mlx5_flow_hw.c | 41 +++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f2ed2d8e46..66e0b46f9b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -207,6 +207,11 @@ mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 static __rte_always_inline enum mlx5_indirect_list_type
 flow_hw_inlist_type_get(const struct rte_flow_action *actions);
 
+static int
+flow_hw_allocate_actions(struct rte_eth_dev *dev,
+			 uint64_t action_flags,
+			 struct rte_flow_error *error);
+
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
 {
@@ -11361,25 +11366,31 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
 	RTE_SET_USED(user_data);
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_AGE:
-		if (!priv->hws_age_req)
-			return rte_flow_error_set(error, EINVAL,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  NULL,
-						  "aging pool not initialized");
+		if (!priv->hws_age_req) {
+			if (flow_hw_allocate_actions(dev, MLX5_FLOW_ACTION_AGE,
+						     error))
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					 NULL, "aging pool not initialized");
+		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_COUNT:
-		if (!priv->hws_cpool)
-			return rte_flow_error_set(error, EINVAL,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  NULL,
-						  "counters pool not initialized");
+		if (!priv->hws_cpool) {
+			if (flow_hw_allocate_actions(dev, MLX5_FLOW_ACTION_COUNT,
+						     error))
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					 NULL, "counters pool not initialized");
+		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-		if (priv->hws_ctpool == NULL)
-			return rte_flow_error_set(error, EINVAL,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  NULL,
-						  "CT pool not initialized");
+		if (priv->hws_ctpool == NULL) {
+			if (flow_hw_allocate_actions(dev, MLX5_FLOW_ACTION_CT,
+						     error))
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					 NULL, "CT pool not initialized");
+		}
 		return mlx5_validate_action_ct(dev, action->conf, error);
 	case RTE_FLOW_ACTION_TYPE_METER_MARK:
 		return flow_hw_validate_action_meter_mark(dev, action, true, error);
-- 
2.25.1

>From cb2810c444e6694ad55c036af5ddb7da2a43150e Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Mon, 5 Feb 2024 14:35:42 +0200
Subject: [PATCH 6/6] net/mlx5: support FDB in non-template mode

Support the non-template flows API in FDB mode:

  dpdk-testpmd -a $PCI,dv_flow_en=2,representor=vf0-1 -- -i

  testpmd> flow create 0 group 0 transfer \
             pattern eth / end \
             actions count / drop / end

Signed-off-by: Gregory Etelson
---
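Below the cut, a background sketch of the register/unregister pairing this
patch completes (not part of the commit; demo_* names are illustrative
stand-ins): every resource acquired on the success path gets a matching
release on the error path, unwound in reverse order.

    #include <stddef.h>

    struct demo_entry {
            int registered;
    };

    static int
    demo_register(struct demo_entry *e)
    {
            if (e == NULL)
                    return -1;
            e->registered = 1;
            return 0;
    }

    static void
    demo_unregister(struct demo_entry *e)
    {
            e->registered = 0;
    }

    static int
    demo_register_matcher(struct demo_entry *group, struct demo_entry *matcher)
    {
            if (demo_register(group) != 0)
                    return -1;
            if (demo_register(matcher) != 0)
                    goto error;
            return 0;
    error:
            demo_unregister(group); /* undo the partial registration */
            return -1;
    }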
 drivers/net/mlx5/mlx5_flow_hw.c | 46 +++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 66e0b46f9b..43bcaab592 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12614,6 +12614,28 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+flow_hw_unregister_matcher(struct rte_eth_dev *dev,
+			   struct mlx5_flow_dv_matcher *matcher)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (matcher->matcher_object) {
+		ret = mlx5_hlist_unregister(priv->sh->groups, &matcher->group->entry);
+		if (ret)
+			goto error;
+		if (matcher->group) {
+			ret = mlx5_list_unregister(matcher->group->matchers, &matcher->entry);
+			if (ret)
+				goto error;
+		}
+	}
+	return 0;
+error:
+	return -EINVAL;
+}
+
 static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 				    const struct rte_flow_attr *attr,
 				    const struct rte_flow_item items[],
@@ -12640,24 +12662,23 @@ static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 		.data = matcher,
 		.data2 = items_ptr,
 	};
-	struct mlx5_list_entry *group_entry;
-	struct mlx5_list_entry *matcher_entry;
+	struct mlx5_list_entry *group_entry = NULL;
+	struct mlx5_list_entry *matcher_entry = NULL;
 	struct mlx5_flow_dv_matcher *resource;
 	struct mlx5_list *matchers_list;
 	struct mlx5_flow_group *flow_group;
-	uint32_t group = 0;
 	int ret;
 
 	matcher->crc = rte_raw_cksum((const void *)matcher->mask.buf,
 				     matcher->mask.size);
 	matcher->priority = attr->priority;
-	ret = __translate_group(dev, attr, external, attr->group, &group, error);
+	ret = __translate_group(dev, attr, external, attr->group, &flow_attr.group, error);
 	if (ret)
 		return ret;
 
 	/* Register the flow group. */
-	group_entry = mlx5_hlist_register(priv->sh->groups, group, &ctx);
+	group_entry = mlx5_hlist_register(priv->sh->groups, flow_attr.group, &ctx);
 	if (!group_entry)
 		goto error;
 	flow_group = container_of(group_entry, struct mlx5_flow_group, entry);
@@ -12668,15 +12689,16 @@ static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 	if (!matcher_entry)
 		goto error;
 	resource = container_of(matcher_entry, typeof(*resource), entry);
-	if (!resource)
-		goto error;
 	flow->nt2hws->matcher = resource;
 	return 0;
 
 error:
-	if (error)
+	if (group_entry)
+		mlx5_hlist_unregister(priv->sh->groups, group_entry);
+	if (error) {
 		if (sub_error.type != RTE_FLOW_ERROR_TYPE_NONE)
 			rte_memcpy(error, &sub_error, sizeof(sub_error));
+	}
 	return rte_flow_error_set(error, ENOMEM,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, "fail to register matcher");
@@ -12899,6 +12921,12 @@ flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		mlx5_free(hw_act.push_remove);
 	if (hw_act.mhdr)
 		mlx5_free(hw_act.mhdr);
+	if (ret) {
+		/* release after actual error */
+		if ((*flow)->nt2hws && (*flow)->nt2hws->matcher)
+			flow_hw_unregister_matcher(dev,
+						   (*flow)->nt2hws->matcher);
+	}
 	return ret;
 }
 #endif
@@ -12945,6 +12973,8 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
 		if (ret)
 			DRV_LOG(ERR, "failed to release modify action.");
 	}
+	if (flow->nt2hws->matcher)
+		flow_hw_unregister_matcher(dev, flow->nt2hws->matcher);
 }
 
 #ifdef HAVE_MLX5_HWS_SUPPORT
-- 
2.25.1