From: Itamar Gozlan
To: Dariusz Sosnowski, Ori Kam, Matan Azrad
Subject: [PATCH 3/9] net/mlx5/hws: add support for resizable matchers
Date: Tue, 13 Feb 2024 11:50:31 +0200
Message-ID: <20240213095038.451299-3-igozlan@nvidia.com>
In-Reply-To: <20240213095038.451299-1-igozlan@nvidia.com>
References: <20240213095038.451299-1-igozlan@nvidia.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
From: Yevgeny Kliteynik

Add support for matcher resize with the following new API calls:
 - mlx5dr_matcher_resize_set_target
 - mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the
src matcher to the dst matcher. Both matchers must have the same
characteristics (e.g. same mt, same at). It is the user's
responsibility to make sure that the dst matcher has enough space for
the moved rules. Once this function returns, the user can move rules
from the src matcher into the dst matcher, and is no longer allowed to
insert rules into the src matcher.

The second function moves a single rule from the matcher that is being
resized to the bigger matcher. Moving a rule means creating a new rule
in the destination matcher and deleting the rule from the source
matcher. This operation generates a single completion.
Signed-off-by: Yevgeny Kliteynik Acked-by: Matan Azrad --- drivers/net/mlx5/hws/mlx5dr.h | 39 ++++ drivers/net/mlx5/hws/mlx5dr_definer.c | 5 +- drivers/net/mlx5/hws/mlx5dr_definer.h | 3 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++- drivers/net/mlx5/hws/mlx5dr_matcher.h | 21 ++ drivers/net/mlx5/hws/mlx5dr_rule.c | 290 ++++++++++++++++++++++---- drivers/net/mlx5/hws/mlx5dr_rule.h | 30 ++- drivers/net/mlx5/hws/mlx5dr_send.c | 45 ++++ 8 files changed, 573 insertions(+), 41 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index d88f73ab57..9d8f8e13dc 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr { /* Define the insertion and distribution modes for this matcher */ enum mlx5dr_matcher_insert_mode insert_mode; enum mlx5dr_matcher_distribute_mode distribute_mode; + /* Define whether the created matcher supports resizing into a bigger matcher */ + bool resizable; union { struct { uint8_t sz_row_log; @@ -419,6 +421,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher); int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher, struct mlx5dr_action_template *at); +/* Link two matchers and enable moving rules from src matcher to dst matcher. + * Both matchers must be in the same table type, must be created with 'resizable' + * property, and should have the same characteristics (e.g. same mt, same at). + * + * It is the user's responsibility to make sure that the dst matcher + * was allocated with the appropriate size. + * + * Once the function is completed, the user is: + * - allowed to move rules from src into dst matcher + * - no longer allowed to insert rules to the src matcher + * + * The user is always allowed to insert rules to the dst matcher and + * to delete rules from any matcher. 
+ * + * @param[in] src_matcher + * source matcher for moving rules from + * @param[in] dst_matcher + * destination matcher for moving rules to + * @return zero on successful move, non zero otherwise. + */ +int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher); + +/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher + * + * @param[in] src_matcher + * matcher that the rule belongs to + * @param[in] rule + * the rule to move + * @param[in] attr + * rule attributes + * @return zero on success, non zero otherwise. + */ +int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr); + /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. * * @return size in bytes of rule handle struct. diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 8b8757ecac..e564062313 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -3296,9 +3296,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) return definer->obj->id; } -static int -mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, - struct mlx5dr_definer *definer_b) +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) { int i; diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h index ced9d9da13..71cc0e94de 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.h +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -733,4 +733,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache); void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache); +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + #endif diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 
4ea161eae6..0d5c462734 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher, return 0; } +static int +mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher) +{ + struct mlx5dr_matcher_resize_data *resize_data; + + resize_data = simple_calloc(1, sizeof(*resize_data)); + if (!resize_data) { + rte_errno = ENOMEM; + return rte_errno; + } + + resize_data->stc = src_matcher->action_ste.stc; + resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0; + resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1; + resize_data->action_ste_pool = src_matcher->action_ste.max_stes ? + src_matcher->action_ste.pool : + NULL; + + /* Place the new resized matcher on the dst matcher's list */ + LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data, + resize_data, next); + + /* Move all the previous resized matchers to the dst matcher's list */ + while (!LIST_EMPTY(&src_matcher->resize_data)) { + resize_data = LIST_FIRST(&src_matcher->resize_data); + LIST_REMOVE(resize_data, next); + LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data, + resize_data, next); + } + + return 0; +} + +static void +mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_resize_data *resize_data; + + if (!mlx5dr_matcher_is_resizable(matcher) || + !matcher->action_ste.max_stes) + return; + + while (!LIST_EMPTY(&matcher->resize_data)) { + resize_data = LIST_FIRST(&matcher->resize_data); + LIST_REMOVE(resize_data, next); + + mlx5dr_action_free_single_stc(matcher->tbl->ctx, + matcher->tbl->type, + &resize_data->stc); + + if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1); + mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0); + if (resize_data->action_ste_pool) + mlx5dr_pool_destroy(resize_data->action_ste_pool); + simple_free(resize_data); + } +} + static int 
mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) { bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt); @@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) { struct mlx5dr_table *tbl = matcher->tbl; - if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION) + if (!matcher->action_ste.max_stes || + matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION || + mlx5dr_matcher_is_in_resize(matcher)) return; mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); @@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, DR_LOG(ERR, "Root matcher does not support at attaching"); goto not_supported; } + if (attr->resizable) { + DR_LOG(ERR, "Root matcher does not support resizing"); + goto not_supported; + } return 0; } @@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + matcher->flags |= attr->resizable ? 
MLX5DR_MATCHER_FLAGS_RESIZABLE : 0; + return mlx5dr_matcher_check_attr_sz(caps, attr); not_supported: @@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) { + mlx5dr_matcher_resize_uninit(matcher); mlx5dr_matcher_disconnect(matcher); mlx5dr_matcher_create_uninit_shared(matcher); mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH); @@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) simple_free(mt); return 0; } + +static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher) +{ + int i; + + if (mlx5dr_table_is_root(src_matcher->tbl) || + mlx5dr_table_is_root(dst_matcher->tbl)) { + DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported"); + goto out_einval; + } + + if (src_matcher->tbl->type != dst_matcher->tbl->type) { + DR_LOG(ERR, "Table type mismatch for src/dst matchers"); + goto out_einval; + } + + if (mlx5dr_matcher_req_fw_wqe(src_matcher) || + mlx5dr_matcher_req_fw_wqe(dst_matcher)) { + DR_LOG(ERR, "Matchers require FW WQE - resize unsupported"); + goto out_einval; + } + + if (!mlx5dr_matcher_is_resizable(src_matcher) || + !mlx5dr_matcher_is_resizable(dst_matcher)) { + DR_LOG(ERR, "Src/dst matcher is not resizable"); + goto out_einval; + } + + if (mlx5dr_matcher_is_insert_by_idx(src_matcher) != + mlx5dr_matcher_is_insert_by_idx(dst_matcher)) { + DR_LOG(ERR, "Src/dst matchers insert mode mismatch"); + goto out_einval; + } + + if (mlx5dr_matcher_is_in_resize(src_matcher) || + mlx5dr_matcher_is_in_resize(dst_matcher)) { + DR_LOG(ERR, "Src/dst matcher is already in resize"); + goto out_einval; + } + + /* Compare match templates - make sure the definers are equivalent */ + if (src_matcher->num_of_mt != dst_matcher->num_of_mt) { + DR_LOG(ERR, "Src/dst matcher match templates mismatch"); + goto out_einval; + } + + if 
(src_matcher->action_ste.max_stes > dst_matcher->action_ste.max_stes) { + DR_LOG(ERR, "Src/dst matcher max STEs mismatch"); + goto out_einval; + } + + for (i = 0; i < src_matcher->num_of_mt; i++) { + if (mlx5dr_definer_compare(src_matcher->mt[i].definer, + dst_matcher->mt[i].definer)) { + DR_LOG(ERR, "Src/dst matcher definers mismatch"); + goto out_einval; + } + } + + return 0; + +out_einval: + rte_errno = EINVAL; + return rte_errno; +} + +int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher) +{ + int ret = 0; + + pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock); + + if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) { + ret = -rte_errno; + goto out; + } + + src_matcher->resize_dst = dst_matcher; + + if (mlx5dr_matcher_resize_init(src_matcher)) { + src_matcher->resize_dst = NULL; + ret = -rte_errno; + } + +out: + pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock); + return ret; +} + +int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) { + DR_LOG(ERR, "Matcher is not resizable or not in resize"); + goto out_einval; + } + + if (unlikely(src_matcher != rule->matcher)) { + DR_LOG(ERR, "Rule doesn't belong to src matcher"); + goto out_einval; + } + + return mlx5dr_rule_move_hws_add(rule, attr); + +out_einval: + rte_errno = EINVAL; + return -rte_errno; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 363a61fd41..0f2bf96e8b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags { MLX5DR_MATCHER_FLAGS_RANGE_DEFINER = 1 << 0, MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 1, MLX5DR_MATCHER_FLAGS_COLLISION = 1 << 2, + MLX5DR_MATCHER_FLAGS_RESIZABLE = 1 << 3, }; struct mlx5dr_match_template { @@ -59,6 +60,14 @@ struct 
mlx5dr_matcher_action_ste { uint8_t max_stes; }; +struct mlx5dr_matcher_resize_data { + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *action_ste_rtc_0; + struct mlx5dr_devx_obj *action_ste_rtc_1; + struct mlx5dr_pool *action_ste_pool; + LIST_ENTRY(mlx5dr_matcher_resize_data) next; +}; + struct mlx5dr_matcher { struct mlx5dr_table *tbl; struct mlx5dr_matcher_attr attr; @@ -71,10 +80,12 @@ struct mlx5dr_matcher { uint8_t flags; struct mlx5dr_devx_obj *end_ft; struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher *resize_dst; struct mlx5dr_matcher_match_ste match_ste; struct mlx5dr_matcher_action_ste action_ste; struct mlx5dr_definer *hash_definer; LIST_ENTRY(mlx5dr_matcher) next; + LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data; }; static inline bool @@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt) return (!!mt->range_definer); } +static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher) +{ + return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE); +} + +static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher) +{ + return !!matcher->resize_dst; +} + static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher) { /* Currently HWS doesn't support hash different from match or range */ diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index e39137a6ee..6bf087e187 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -114,6 +114,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, } } +static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst; + + if (rule->resize_info->rtc_0) { + ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id; + ste_attr->retry_rtc_0 = dst_matcher->col_matcher ? 
+ dst_matcher->col_matcher->match_ste.rtc_0->id : 0; + } + if (rule->resize_info->rtc_1) { + ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id; + ste_attr->retry_rtc_1 = dst_matcher->col_matcher ? + dst_matcher->col_matcher->match_ste.rtc_1->id : 0; + } +} + static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, struct mlx5dr_rule *rule, bool err, @@ -134,6 +151,34 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); } +static void +mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr) +{ + rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info)); + if (unlikely(!rule->resize_info)) { + assert(rule->resize_info); + rte_errno = ENOMEM; + } + + memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl, + sizeof(rule->resize_info->ctrl_seg)); + memcpy(rule->resize_info->data_seg, ste_attr->wqe_data, + sizeof(rule->resize_info->data_seg)); + + rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ? 
+ rule->matcher->action_ste.pool : + NULL; +} + +static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule) +{ + if (rule->resize_info) { + simple_free(rule->resize_info); + rule->resize_info = NULL; + } +} + static void mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule, struct mlx5dr_send_ste_attr *ste_attr) @@ -161,17 +206,29 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule, return; } - if (is_jumbo) - memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ); - else - memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ); + if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) { + if (is_jumbo) + memcpy(&rule->tag.jumbo, ste_attr->wqe_data->action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(&rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ); + return; + } + + mlx5dr_rule_save_resize_info(rule, ste_attr); } static void mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule) { - if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) + if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) { simple_free(rule->tag_ptr); + return; + } + + if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) { + mlx5dr_rule_clear_resize_info(rule); + return; + } } static void @@ -188,8 +245,11 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule, ste_attr->range_wqe_tag = &rule->tag_ptr[1]; ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1]; } - } else { + } else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) { ste_attr->wqe_tag = &rule->tag; + } else { + ste_attr->wqe_tag = (struct mlx5dr_rule_match_tag *) + &rule->resize_info->data_seg[MLX5DR_STE_CTRL_SZ]; } } @@ -220,6 +280,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) { struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_pool *pool; if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx && @@ -229,7 +290,11 @@ void 
mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) /* This release is safe only when the rule match part was deleted */ ste.order = rte_log2_u32(matcher->action_ste.max_stes); ste.offset = rule->action_ste_idx; - mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + + /* Free the original action pool if rule was resized */ + pool = mlx5dr_matcher_is_resizable(matcher) ? rule->resize_info->action_ste_pool : + matcher->action_ste.pool; + mlx5dr_pool_chunk_free(pool, &ste); } } @@ -266,6 +331,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, apply->require_dep = 0; } +static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + /* Save the old RTC IDs to be later used in match STE delete */ + rule->resize_info->rtc_0 = rule->rtc_0; + rule->resize_info->rtc_1 = rule->rtc_1; + rule->resize_info->rule_idx = attr->rule_idx; + + rule->rtc_0 = 0; + rule->rtc_1 = 0; + + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING; +} + static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr, uint8_t mt_idx, @@ -346,7 +428,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, /* Send WQEs to FW */ mlx5dr_send_stes_fw(queue, &ste_attr); - /* Backup TAG on the rule for deletion */ + /* Backup TAG on the rule for deletion, and save ctrl/data + * segments to be used when resizing the matcher. + */ mlx5dr_rule_save_delete_info(rule, &ste_attr); mlx5dr_send_engine_inc_rule(queue); @@ -469,7 +553,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, mlx5dr_send_ste(queue, &ste_attr); } - /* Backup TAG on the rule for deletion, only after insertion */ + /* Backup TAG on the rule for deletion and resize info for + * moving rules to a new matcher, only after insertion. 
+ */ if (!is_update) mlx5dr_rule_save_delete_info(rule, &ste_attr); @@ -496,7 +582,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, /* Rule failed now we can safely release action STEs */ mlx5dr_rule_free_action_ste_idx(rule); - /* Clear complex tag */ + /* Clear complex tag or info that was saved for matcher resizing */ mlx5dr_rule_clear_delete_info(rule); /* If a rule that was indicated as burst (need to trigger HW) has failed @@ -571,12 +657,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, mlx5dr_rule_load_delete_info(rule, &ste_attr); - if (unlikely(fw_wqe)) { + if (unlikely(fw_wqe)) mlx5dr_send_stes_fw(queue, &ste_attr); - mlx5dr_rule_clear_delete_info(rule); - } else { + else mlx5dr_send_ste(queue, &ste_attr); - } + + mlx5dr_rule_clear_delete_info(rule); return 0; } @@ -664,9 +750,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, return 0; } -static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx, +static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr) { + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + if (unlikely(!attr->user_data)) { rte_errno = EINVAL; return rte_errno; @@ -681,6 +769,153 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx, return 0; } +static int mlx5dr_rule_enqueue_precheck_move(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) { + rte_errno = EINVAL; + return rte_errno; + } + + return mlx5dr_rule_enqueue_precheck(rule, attr); +} + +static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) { + /* Matcher in resize - new rules are not allowed */ + rte_errno = EAGAIN; + return rte_errno; + } + + return mlx5dr_rule_enqueue_precheck(rule, attr); +} + +static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule, 
+ struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (unlikely((mlx5dr_table_is_root(matcher->tbl) || + mlx5dr_matcher_req_fw_wqe(matcher)))) { + DR_LOG(ERR, "Rule update is not supported on current matcher"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (unlikely(!matcher->attr.optimize_using_rule_idx && + !mlx5dr_matcher_is_insert_by_idx(matcher))) { + DR_LOG(ERR, "Rule update requires optimize by idx matcher"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) { + DR_LOG(ERR, "Rule update is not supported on resizable matcher"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) { + DR_LOG(ERR, "Current rule status does not allow update"); + rte_errno = EBUSY; + return rte_errno; + } + + return mlx5dr_rule_enqueue_precheck_create(rule, attr); +} + +int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule, + void *queue_ptr, + void *user_data) +{ + bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt); + struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0}; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_send_engine *queue = queue_ptr; + struct mlx5dr_send_ste_attr ste_attr = {0}; + + /* Send dependent WQEs */ + mlx5dr_send_all_dep_wqe(queue); + + rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING; + + ste_attr.send_attr.fence = 0; + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = 1; + ste_attr.send_attr.user_data = user_data; + ste_attr.rtc_0 = rule->resize_info->rtc_0; + ste_attr.rtc_1 = rule->resize_info->rtc_1; + ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0; + ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1; + ste_attr.wqe_ctrl = &empty_wqe_ctrl; + 
 	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		ste_attr.direct_index = rule->resize_info->rule_idx;
+
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	if (unlikely(mlx5dr_rule_enqueue_precheck_move(rule, attr)))
+		return -rte_errno;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_move_init(rule, attr);
+
+	mlx5dr_rule_move_get_rtc(rule, &ste_attr);
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg;
+	ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg;
+	ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+				attr->rule_idx : 0;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -689,13 +924,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       struct mlx5dr_rule_attr *attr,
 		       struct mlx5dr_rule *rule_handle)
 {
-	struct mlx5dr_context *ctx;
 	int ret;
 
 	rule_handle->matcher = matcher;
-	ctx = matcher->tbl->ctx;
 
-	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr)))
 		return -rte_errno;
 
 	assert(matcher->num_of_mt >= mt_idx);
@@ -723,7 +956,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 {
 	int ret;
 
-	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
 		return -rte_errno;
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
@@ -739,24 +972,9 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 			      struct mlx5dr_rule_action rule_actions[],
 			      struct mlx5dr_rule_attr *attr)
 {
-	struct mlx5dr_matcher *matcher = rule_handle->matcher;
 	int ret;
 
-	if (unlikely(mlx5dr_table_is_root(matcher->tbl) ||
-	    unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))) {
-		DR_LOG(ERR, "Rule update not supported on current matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (!matcher->attr.optimize_using_rule_idx &&
-	    !mlx5dr_matcher_is_insert_by_idx(matcher)) {
-		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
 	ret = mlx5dr_rule_create_hws(rule_handle,
@@ -780,7 +998,7 @@ int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher,
 			       enum mlx5dr_rule_hash_calc_mode mode,
 			       uint32_t *ret_hash)
 {
-	uint8_t tag[MLX5DR_STE_SZ] = {0};
+	uint8_t tag[MLX5DR_WQE_SZ_GTA_DATA] = {0};
 	struct mlx5dr_match_template *mt;
 
 	if (!matcher || !matcher->mt) {
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f7d97eead5..07adf9c5ad 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,7 +10,6 @@ enum {
 	MLX5DR_ACTIONS_SZ = 12,
 	MLX5DR_MATCH_TAG_SZ = 32,
 	MLX5DR_JUMBO_TAG_SZ = 44,
-	MLX5DR_STE_SZ = 64,
 };
 
 enum mlx5dr_rule_status {
@@ -23,6 +22,12 @@ enum mlx5dr_rule_status {
 	MLX5DR_RULE_STATUS_FAILED,
 };
 
+enum mlx5dr_rule_move_state {
+	MLX5DR_RULE_RESIZE_STATE_IDLE,
+	MLX5DR_RULE_RESIZE_STATE_WRITING,
+	MLX5DR_RULE_RESIZE_STATE_DELETING,
+};
+
 struct mlx5dr_rule_match_tag {
 	union {
 		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
@@ -33,6 +38,16 @@ struct mlx5dr_rule_match_tag {
 	};
 };
 
+struct mlx5dr_rule_resize_info {
+	uint8_t state;
+	uint32_t rtc_0;
+	uint32_t rtc_1;
+	uint32_t rule_idx;
+	struct mlx5dr_pool *action_ste_pool;
+	uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */
+	uint8_t data_seg[MLX5DR_WQE_SZ_GTA_DATA]; /* Data segment of STE: 64 bytes */
+};
+
 struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
@@ -40,6 +55,7 @@ struct mlx5dr_rule {
 		/* Pointer to tag to store more than one tag */
 		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
+		struct mlx5dr_rule_resize_info *resize_info;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
 	uint32_t rtc_1; /* The RTC into which the STE was inserted */
@@ -50,4 +66,16 @@ struct mlx5dr_rule {
 
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
 
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue, void *user_data);
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr);
+
+static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule)
+{
+	return rule->resize_info &&
+	       rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE;
+}
+
 #endif /* MLX5DR_RULE_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..64138279a1 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue)
 	mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+			/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+			/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 
 	/* Update rule status for the last completion */
 	if (!priv->rule->pending_wqes) {
+		if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+			mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+			return;
+		}
+
 		if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 			/* Rule completely failed and doesn't require cleanup */
 			if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
-- 
2.39.3