From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: dev@dpdk.org, Raslan Darawsheh, Bing Zhao
Subject: [PATCH 04/11] net/mlx5: skip the unneeded resource index allocation
Date: Wed, 28 Feb 2024 18:00:39 +0100
Message-ID: <20240228170046.176600-5-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240228170046.176600-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

From: Bing Zhao

The resource index was introduced to decouple a flow rule from the
resources used by hardware steering. It is needed only when rule
update is supported. In some cases, rule update is not supported on a
table (matcher), e.g.:

  * The table is resizable.
  * FW gets involved.
  * It is a root table.
  * The matcher is not index based or optimized (not applicable).

The same applies when only one STE entry is required per rule: an
update is then always atomic, so the extra resource index is not
needed either.

If the matcher does not support rule update, or at most one entry is
needed per rule on this matcher, there is no need to manage resource
index allocation and free from the pool.

Signed-off-by: Bing Zhao
---
A standalone sketch of the resulting index scheme is appended after
the diff.

 drivers/net/mlx5/mlx5_flow_hw.c | 129 +++++++++++++++++++-------------
 1 file changed, 76 insertions(+), 53 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 442237f2b6..fcf493c771 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3364,9 +3364,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
 	/*
 	 * Set the table here in order to know the destination table
@@ -3375,7 +3372,14 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	flow->table = table;
 	flow->mt_idx = pattern_template_index;
 	flow->idx = flow_idx;
-	flow->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		flow->res_idx = res_idx;
+	} else {
+		flow->res_idx = flow_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3385,11 +3389,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	/*
-	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
-	 * insertion hints.
+	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+	 * for rule insertion hints.
 	 */
-	MLX5_ASSERT(res_idx > 0);
-	flow->rule_idx = res_idx - 1;
+	flow->rule_idx = flow->res_idx - 1;
 	rule_attr.rule_idx = flow->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
@@ -3432,12 +3435,12 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
-	if (job)
-		flow_hw_job_put(priv, job, queue);
+	if (table->resource && res_idx)
+		mlx5_ipool_free(table->resource, res_idx);
 	if (flow_idx)
 		mlx5_ipool_free(table->flow, flow_idx);
-	if (res_idx)
-		mlx5_ipool_free(table->resource, res_idx);
+	if (job)
+		flow_hw_job_put(priv, job, queue);
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 			   "fail to create rte flow");
@@ -3508,9 +3511,6 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
 	/*
 	 * Set the table here in order to know the destination table
@@ -3519,7 +3519,14 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	flow->table = table;
 	flow->mt_idx = 0;
 	flow->idx = flow_idx;
-	flow->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		flow->res_idx = res_idx;
+	} else {
+		flow->res_idx = flow_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3528,9 +3535,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	job->flow = flow;
 	job->user_data = user_data;
 	rule_attr.user_data = job;
-	/*
-	 * Set the rule index.
-	 */
+	/* Set the rule index. */
 	flow->rule_idx = rule_index;
 	rule_attr.rule_idx = flow->rule_idx;
 	/*
@@ -3566,12 +3571,12 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
-	if (job)
-		flow_hw_job_put(priv, job, queue);
-	if (res_idx)
+	if (table->resource && res_idx)
 		mlx5_ipool_free(table->resource, res_idx);
 	if (flow_idx)
 		mlx5_ipool_free(table->flow, flow_idx);
+	if (job)
+		flow_hw_job_put(priv, job, queue);
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 			   "fail to create rte flow");
@@ -3634,9 +3639,6 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	nf = job->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3647,7 +3649,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	nf->table = table;
 	nf->mt_idx = of->mt_idx;
 	nf->idx = of->idx;
-	nf->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		nf->res_idx = res_idx;
+	} else {
+		nf->res_idx = of->res_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3657,11 +3666,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	/*
-	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
-	 * insertion hints.
+	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+	 * for rule insertion hints.
+	 * If there is only one STE, the update will be atomic by nature.
 	 */
-	MLX5_ASSERT(res_idx > 0);
-	nf->rule_idx = res_idx - 1;
+	nf->rule_idx = nf->res_idx - 1;
 	rule_attr.rule_idx = nf->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
@@ -3687,14 +3696,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return 0;
 error:
+	if (table->resource && res_idx)
+		mlx5_ipool_free(table->resource, res_idx);
 	/* Flow created fail, return the descriptor and flow memory. */
 	if (job)
 		flow_hw_job_put(priv, job, queue);
-	if (res_idx)
-		mlx5_ipool_free(table->resource, res_idx);
 	return rte_flow_error_set(error, rte_errno,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-			"fail to update rte flow");
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to update rte flow");
 }
 
 /**
@@ -3949,13 +3958,15 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 	}
 	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
 		if (table) {
-			mlx5_ipool_free(table->resource, res_idx);
+			if (table->resource)
+				mlx5_ipool_free(table->resource, res_idx);
 			mlx5_ipool_free(table->flow, flow->idx);
 		}
 	} else {
 		rte_memcpy(flow, job->upd_flow,
 			   offsetof(struct rte_flow_hw, rule));
-		mlx5_ipool_free(table->resource, res_idx);
+		if (table->resource)
+			mlx5_ipool_free(table->resource, res_idx);
 	}
 }
 
@@ -4455,6 +4466,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	uint32_t i = 0, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE;
 	uint32_t nb_flows = rte_align32pow2(attr->nb_flows);
 	bool port_started = !!dev->data->dev_started;
+	bool rpool_needed;
 	size_t tbl_mem_size;
 	int err;
 
@@ -4492,13 +4504,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->flow = mlx5_ipool_create(&cfg);
 	if (!tbl->flow)
 		goto error;
-	/* Allocate rule indexed pool. */
-	cfg.size = 0;
-	cfg.type = "mlx5_hw_table_rule";
-	cfg.max_idx += priv->hw_q[0].size;
-	tbl->resource = mlx5_ipool_create(&cfg);
-	if (!tbl->resource)
-		goto error;
 	/* Register the flow group. */
 	ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
 	if (!ge)
 		goto error;
@@ -4578,12 +4583,30 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		    MLX5DR_TABLE_TYPE_NIC_RX);
+	/*
+	 * An extra index is needed only when the matcher supports rule update
+	 * and needs more than one WQE per rule; otherwise the flow index is reused.
+	 */
+	rpool_needed = mlx5dr_matcher_is_updatable(tbl->matcher_info[0].matcher) &&
+		       mlx5dr_matcher_is_dependent(tbl->matcher_info[0].matcher);
+	if (rpool_needed) {
+		/* Allocate rule indexed pool. */
+		cfg.size = 0;
+		cfg.type = "mlx5_hw_table_rule";
+		cfg.max_idx += priv->hw_q[0].size;
+		tbl->resource = mlx5_ipool_create(&cfg);
+		if (!tbl->resource)
+			goto res_error;
+	}
 	if (port_started)
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
 	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
+res_error:
+	if (tbl->matcher_info[0].matcher)
+		(void)mlx5dr_matcher_destroy(tbl->matcher_info[0].matcher);
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
 		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
@@ -4601,8 +4624,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		if (tbl->grp)
 			mlx5_hlist_unregister(priv->sh->groups,
 					      &tbl->grp->entry);
-		if (tbl->resource)
-			mlx5_ipool_destroy(tbl->resource);
 		if (tbl->flow)
 			mlx5_ipool_destroy(tbl->flow);
 		mlx5_free(tbl);
@@ -4811,12 +4832,13 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	uint32_t ridx = 1;
 
 	/* Build ipool allocated object bitmap. */
-	mlx5_ipool_flush_cache(table->resource);
+	if (table->resource)
+		mlx5_ipool_flush_cache(table->resource);
 	mlx5_ipool_flush_cache(table->flow);
 	/* Check if ipool has allocated objects. */
 	if (table->refcnt ||
 	    mlx5_ipool_get_next(table->flow, &fidx) ||
-	    mlx5_ipool_get_next(table->resource, &ridx)) {
+	    (table->resource && mlx5_ipool_get_next(table->resource, &ridx))) {
 		DRV_LOG(WARNING, "Table %p is still in use.", (void *)table);
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -4838,7 +4860,8 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	if (table->matcher_info[1].matcher)
 		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
-	mlx5_ipool_destroy(table->resource);
+	if (table->resource)
+		mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
 	mlx5_free(table);
 	return 0;
@@ -12340,11 +12363,11 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  table, "cannot resize flows pool");
-	ret = mlx5_ipool_resize(table->resource, nb_flows);
-	if (ret)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  table, "cannot resize resources pool");
+	/*
+	 * A resizable matcher doesn't support rule update. In this case, the ipool
+	 * for the resource is not created and there is no need to resize it.
+	 */
+	MLX5_ASSERT(!table->resource);
 	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
 		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
 		if (ret < 0)
-- 
2.39.2
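
The standalone sketch referenced above: a minimal model of the index
scheme this patch introduces. All sketch_* names are hypothetical
stand-ins for the mlx5 ipool and matcher structures; this illustrates
the behavior under those assumptions and is not the driver API itself.

    #include <stdbool.h>
    #include <stdint.h>

    struct sketch_pool { uint32_t next; };  /* 1-based index allocator */

    struct sketch_table {
        struct sketch_pool *flow;       /* flow ipool, always present */
        struct sketch_pool *resource;   /* NULL when rule update is unsupported
                                         * or a rule needs only one STE */
    };

    struct sketch_flow { uint32_t idx, res_idx, rule_idx; };

    static uint32_t sketch_pool_alloc(struct sketch_pool *p)
    {
        return ++p->next;               /* 1-based; 0 would mean failure */
    }

    /*
     * Mirrors the patched create path: take a resource index from the
     * pool only when the table has one; otherwise reuse the flow index.
     */
    static bool sketch_flow_create(struct sketch_table *tbl,
                                   struct sketch_flow *flow)
    {
        flow->idx = sketch_pool_alloc(tbl->flow);
        if (!flow->idx)
            return false;
        if (tbl->resource) {
            flow->res_idx = sketch_pool_alloc(tbl->resource);
            if (!flow->res_idx)
                return false;           /* the real code also frees flow->idx */
        } else {
            flow->res_idx = flow->idx;  /* no-update or single-STE matcher */
        }
        /* Pools are 1-based, mlx5dr insertion hints are 0-based. */
        flow->rule_idx = flow->res_idx - 1;
        return true;
    }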