From: Alexander Kozyrev
To: dev@dpdk.org
Subject: [PATCH v4 5/5] net/mlx5: implement jump to table index action
Date: Thu, 24 Oct 2024 20:50:07 +0300
Message-ID: <20241024175132.1752108-6-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20241024175132.1752108-1-akozyrev@nvidia.com>
References: <20241024154351.1743447-1-akozyrev@nvidia.com> <20241024175132.1752108-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

Implement RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX action.
Create the hardware steering jump to matcher action,
associated with the template matcher. Use this action and
provide the rule index as an offset in the matcher.
Note that it is only supported by the isolated matcher,
i.e. the table insertion type is by index with pattern.

Signed-off-by: Alexander Kozyrev
---
 doc/guides/nics/features/default.ini          |   1 +
 doc/guides/nics/features/mlx5.ini             |   1 +
 doc/guides/prog_guide/ethdev/flow_offload.rst |  24 +++
 doc/guides/rel_notes/release_24_11.rst        |   1 +
 drivers/net/mlx5/mlx5_flow.h                  |   8 +-
 drivers/net/mlx5/mlx5_flow_hw.c               | 145 ++++++++++++++++++
 6 files changed, 178 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 1e9a156a2a..a730365a16 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -221,3 +221,4 @@ skip_cman            =
 vf                   =
 vxlan_decap          =
 vxlan_encap          =
+jump_to_table_index  =
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 056e04275b..55bc52c666 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -150,3 +150,4 @@ set_tp_src           = Y
 set_ttl              = Y
 vxlan_decap          = Y
 vxlan_encap          = Y
+jump_to_table_index  = Y
diff --git a/doc/guides/prog_guide/ethdev/flow_offload.rst b/doc/guides/prog_guide/ethdev/flow_offload.rst
index 2d6187ed11..bff0b5a794 100644
--- a/doc/guides/prog_guide/ethdev/flow_offload.rst
+++ b/doc/guides/prog_guide/ethdev/flow_offload.rst
@@ -3535,6 +3535,30 @@ Send packets to the kernel, without going to userspace at all.
 The packets will be received by the kernel driver sharing the same device
 as the DPDK port on which this action is configured.
+Action: ``JUMP_TO_TABLE_INDEX``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Redirects packets to a particular index in a flow table.
+
+Bypassing a hierarchy of groups, this action redirects the matched flow to
+the specified index in the particular template table on the device.
+
+If a matched flow is redirected to a non-existent template table or
+to a table which doesn't contain a rule at the specified index,
+the behavior is undefined and left up to the driver.
+
+.. _table_rte_flow_action_jump_to_table_index:
+
+.. table:: JUMP_TO_TABLE_INDEX
+
+   +-----------+-------------------------------------------+
+   | Field     | Value                                     |
+   +===========+===========================================+
+   | ``table`` | Template table to redirect packets to     |
+   +-----------+-------------------------------------------+
+   | ``index`` | Index in the table to redirect packets to |
+   +-----------+-------------------------------------------+
+
 Negative types
 ~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 07a8435b19..dbef29706c 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -249,6 +249,7 @@ New Features
 * **Updated NVIDIA MLX5 net driver.**

   * Added rte_flow_async_create_by_index_with_pattern() support.
+  * Added RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX support.
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index db56ae051d..90ad23c94a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -398,6 +398,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
 #define MLX5_FLOW_ACTION_NAT64 (1ull << 50)
+#define MLX5_FLOW_ACTION_JUMP_TO_TABLE_INDEX (1ull << 51)

 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -408,12 +409,14 @@ enum mlx5_feature_name {
 	 MLX5_FLOW_ACTION_DEFAULT_MISS | \
 	 MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY | \
 	 MLX5_FLOW_ACTION_SEND_TO_KERNEL | \
-	 MLX5_FLOW_ACTION_PORT_REPRESENTOR)
+	 MLX5_FLOW_ACTION_PORT_REPRESENTOR | \
+	 MLX5_FLOW_ACTION_JUMP_TO_TABLE_INDEX)

 #define MLX5_FLOW_FATE_ESWITCH_ACTIONS \
 	(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_PORT_ID | \
 	 MLX5_FLOW_ACTION_SEND_TO_KERNEL | \
-	 MLX5_FLOW_ACTION_JUMP | MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY)
+	 MLX5_FLOW_ACTION_JUMP | MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY | \
+	 MLX5_FLOW_ACTION_JUMP_TO_TABLE_INDEX)

 #define MLX5_FLOW_MODIFY_HDR_ACTIONS (MLX5_FLOW_ACTION_SET_IPV4_SRC | \
 				      MLX5_FLOW_ACTION_SET_IPV4_DST | \
@@ -1704,6 +1707,7 @@ struct mlx5_flow_template_table_cfg {

 struct mlx5_matcher_info {
 	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	struct mlx5dr_action *jump; /* Jump to matcher action. */
 	RTE_ATOMIC(uint32_t) refcnt;
 };
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 412d927efb..0ef7844fd8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -729,6 +729,9 @@ flow_hw_action_flags_get(const struct rte_flow_action actions[],
 		case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS:
 			action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS;
 			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+			action_flags |= MLX5_FLOW_ACTION_JUMP_TO_TABLE_INDEX;
+			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 		case RTE_FLOW_ACTION_TYPE_END:
 			break;
@@ -2925,6 +2928,34 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 							 src_pos, dr_pos))
 				goto err;
 			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+			if (masks->conf &&
+			    ((const struct rte_flow_action_jump_to_table_index *)
+			     masks->conf)->table) {
+				struct rte_flow_template_table *jump_table =
+					((const struct rte_flow_action_jump_to_table_index *)
+					 actions->conf)->table;
+				acts->rule_acts[dr_pos].jump_to_matcher.offset =
+					((const struct rte_flow_action_jump_to_table_index *)
+					 actions->conf)->index;
+				if (likely(!rte_flow_template_table_resizable(dev->data->port_id,
+									      &jump_table->cfg.attr))) {
+					acts->rule_acts[dr_pos].action =
+						jump_table->matcher_info[0].jump;
+				} else {
+					uint32_t selector;
+					rte_rwlock_read_lock(&jump_table->matcher_replace_rwlk);
+					selector = jump_table->matcher_selector;
+					acts->rule_acts[dr_pos].action =
+						jump_table->matcher_info[selector].jump;
+					rte_rwlock_read_unlock(&jump_table->matcher_replace_rwlk);
+				}
+			} else if (__flow_hw_act_data_general_append
+					(priv, acts, actions->type,
+					 src_pos, dr_pos)) {
+				goto err;
+			}
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -3527,6 +3558,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	cnt_id_t cnt_id;
 	uint32_t *cnt_queue;
 	uint32_t mtr_id;
+	struct rte_flow_template_table *jump_table;

 		action = &actions[act_data->action_src];
 		/*
@@ -3759,6 +3791,25 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst].action =
 				priv->action_nat64[table->type][nat64_c->type];
 			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+			jump_table = ((const struct rte_flow_action_jump_to_table_index *)
+				      action->conf)->table;
+			if (likely(!rte_flow_template_table_resizable(dev->data->port_id,
+								      &table->cfg.attr))) {
+				rule_acts[act_data->action_dst].action =
+					jump_table->matcher_info[0].jump;
+			} else {
+				uint32_t selector;
+				rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+				selector = table->matcher_selector;
+				rule_acts[act_data->action_dst].action =
+					jump_table->matcher_info[selector].jump;
+				rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+			}
+			rule_acts[act_data->action_dst].jump_to_matcher.offset =
+				((const struct rte_flow_action_jump_to_table_index *)
+				 action->conf)->index;
+			break;
 		default:
 			break;
 		}
@@ -4963,6 +5014,10 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5dr_matcher_attr matcher_attr = {0};
+	struct mlx5dr_action_jump_to_matcher_attr jump_attr = {
+		.type = MLX5DR_ACTION_JUMP_TO_MATCHER_BY_INDEX,
+		.matcher = NULL,
+	};
 	struct rte_flow_template_table *tbl = NULL;
 	struct mlx5_flow_group *grp;
 	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
@@ -5153,6 +5208,13 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		     MLX5DR_TABLE_TYPE_NIC_RX);
+	if (matcher_attr.isolated) {
+		jump_attr.matcher = tbl->matcher_info[0].matcher;
+		tbl->matcher_info[0].jump = mlx5dr_action_create_jump_to_matcher(priv->dr_ctx,
+			&jump_attr, mlx5_hw_act_flag[!!attr->flow_attr.group][tbl->type]);
+		if (!tbl->matcher_info[0].jump)
+			goto jtm_error;
+	}
 	/*
 	 * Only the matcher supports update and needs more than 1 WQE, an additional
 	 * index is needed. Or else the flow index can be reused.
@@ -5175,6 +5237,9 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
 res_error:
+	if (tbl->matcher_info[0].jump)
+		mlx5dr_action_destroy(tbl->matcher_info[0].jump);
+jtm_error:
 	if (tbl->matcher_info[0].matcher)
 		(void)mlx5dr_matcher_destroy(tbl->matcher_info[0].matcher);
 at_error:
@@ -5439,8 +5504,12 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 				   1, rte_memory_order_relaxed);
 	}
 	flow_hw_destroy_table_multi_pattern_ctx(table);
+	if (table->matcher_info[0].jump)
+		mlx5dr_action_destroy(table->matcher_info[0].jump);
 	if (table->matcher_info[0].matcher)
 		mlx5dr_matcher_destroy(table->matcher_info[0].matcher);
+	if (table->matcher_info[1].jump)
+		mlx5dr_action_destroy(table->matcher_info[1].jump);
 	if (table->matcher_info[1].matcher)
 		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
@@ -6545,6 +6614,7 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
 	case RTE_FLOW_ACTION_TYPE_DROP:
 	case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 	case RTE_FLOW_ACTION_TYPE_JUMP:
+	case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_RSS:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -6761,6 +6831,43 @@ flow_hw_validate_action_jump(struct rte_eth_dev *dev,
 	return 0;
 }

+static int
+mlx5_flow_validate_action_jump_to_table_index(const struct rte_flow_action *action,
+					      const struct rte_flow_action *mask,
+					      struct rte_flow_error *error)
+{
+	const struct rte_flow_action_jump_to_table_index *m = mask->conf;
+	const struct rte_flow_action_jump_to_table_index *v = action->conf;
+	struct mlx5dr_action *jump_action;
+	uint32_t t_group = 0;
+
+	if (!m || !m->table)
+		return 0;
+	if (!v)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Invalid jump to matcher action configuration");
+	t_group = v->table->grp->group_id;
+	if (t_group == 0)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported action - jump to root table");
+	if (likely(!rte_flow_template_table_resizable(0, &v->table->cfg.attr))) {
+		jump_action = v->table->matcher_info[0].jump;
+	} else {
+		uint32_t selector;
+		rte_rwlock_read_lock(&v->table->matcher_replace_rwlk);
+		selector = v->table->matcher_selector;
+		jump_action = v->table->matcher_info[selector].jump;
+		rte_rwlock_read_unlock(&v->table->matcher_replace_rwlk);
+	}
+	if (jump_action == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported action - table is not a rule array");
+	return 0;
+}
+
 static int
 mlx5_hw_validate_action_mark(struct rte_eth_dev *dev,
 			     const struct rte_flow_action *template_action,
@@ -7242,6 +7349,12 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				return ret;
 			action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS;
 			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+			ret = mlx5_flow_validate_action_jump_to_table_index(action, mask, error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_JUMP_TO_TABLE_INDEX;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -7286,6 +7399,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_NAT64] = MLX5DR_ACTION_TYP_NAT64,
+	[RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX] = MLX5DR_ACTION_TYP_JUMP_TO_MATCHER,
 };

@@ -7513,6 +7627,11 @@ flow_hw_parse_flow_actions_to_dr_actions(struct rte_eth_dev *dev,
 			at->dr_off[i] = curr_off;
 			action_types[curr_off++] = MLX5DR_ACTION_TYP_MISS;
 			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+			*tmpl_flags |= MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER;
+			at->dr_off[i] = curr_off;
+			action_types[curr_off++] = MLX5DR_ACTION_TYP_JUMP_TO_MATCHER;
+			break;
 		default:
 			type = mlx5_hw_dr_action_types[at->actions[i].type];
 			at->dr_off[i] = curr_off;
@@ -13944,6 +14063,7 @@ mlx5_mirror_destroy_clone(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_JUMP:
 		flow_hw_jump_release(dev, clone->action_ctx);
 		break;
+	case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
 	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
@@ -13977,6 +14097,7 @@ mlx5_mirror_terminal_action(const struct rte_flow_action *action)
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
+	case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
 		return true;
 	default:
 		break;
@@ -14019,6 +14140,8 @@ mlx5_mirror_validate_sample_action(struct rte_eth_dev *dev,
 		    action[1].type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP)
 			return false;
 		break;
+	case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
+		break;
 	default:
 		return false;
 	}
@@ -14753,8 +14876,14 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 	struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
 	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
 	struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr;
+	struct mlx5dr_action_jump_to_matcher_attr jump_attr = {
+		.type = MLX5DR_ACTION_JUMP_TO_MATCHER_BY_INDEX,
+		.matcher = NULL,
+	};
 	struct mlx5_multi_pattern_segment *segment = NULL;
 	struct mlx5dr_matcher *matcher = NULL;
+	struct mlx5dr_action *jump = NULL;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	uint32_t i, selector = table->matcher_selector;
 	uint32_t other_selector = (selector + 1) & 1;
 	int ret;
@@ -14802,6 +14931,17 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 					  table, "failed to create new matcher");
 		goto error;
 	}
+	if (matcher_attr.isolated) {
+		jump_attr.matcher = matcher;
+		jump = mlx5dr_action_create_jump_to_matcher(priv->dr_ctx, &jump_attr,
+			mlx5_hw_act_flag[!!table->cfg.attr.flow_attr.group][table->type]);
+		if (!jump) {
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 table, "failed to create jump to matcher action");
+			goto error;
+		}
+	}
 	rte_rwlock_write_lock(&table->matcher_replace_rwlk);
 	ret = mlx5dr_matcher_resize_set_target
 			(table->matcher_info[selector].matcher, matcher);
@@ -14814,6 +14954,7 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 	}
 	table->cfg.attr.nb_flows = nb_flows;
 	table->matcher_info[other_selector].matcher = matcher;
+	table->matcher_info[other_selector].jump = jump;
 	table->matcher_selector = other_selector;
 	rte_atomic_store_explicit(&table->matcher_info[other_selector].refcnt,
 				  0, rte_memory_order_relaxed);
@@ -14822,6 +14963,8 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 error:
 	if (segment)
 		mlx5_destroy_multi_pattern_segment(segment);
+	if (jump)
+		mlx5dr_action_destroy(jump);
 	if (matcher) {
 		ret = mlx5dr_matcher_destroy(matcher);
 		return rte_flow_error_set(error, rte_errno,
@@ -14852,6 +14995,8 @@ flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  table, "cannot complete table resize");
+	if (matcher_info->jump)
+		mlx5dr_action_destroy(matcher_info->jump);
 	ret = mlx5dr_matcher_destroy(matcher_info->matcher);
 	if (ret)
 		return rte_flow_error_set(error, rte_errno,
-- 
2.43.5
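
[Editorial note, not part of the patch] For readers unfamiliar with the new action, a minimal application-side sketch of building an action list with it. The struct and field names (`rte_flow_action_jump_to_table_index` with `table` and `index`) come from the patch; `dst_tbl` and the index value are illustrative, and per the commit message the target table must use index-with-pattern insertion:

```c
/* Illustrative fragment: redirect matched packets to rule slot 5 of
 * another template table. "dst_tbl" is assumed to be a template table
 * created with insertion type "by index with pattern", since the
 * action is only supported by the isolated matcher.
 */
struct rte_flow_action_jump_to_table_index jump_conf = {
	.table = dst_tbl, /* template table to redirect packets to */
	.index = 5,       /* rule index inside that table */
};
struct rte_flow_action actions[] = {
	{
		.type = RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX,
		.conf = &jump_conf,
	},
	{
		.type = RTE_FLOW_ACTION_TYPE_END,
	},
};
```

This `actions` array would then be used in an actions template for the async (template) flow API, which is the only path where template tables exist.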