From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shun Hao
To: , , , "Shahaf Shuler"
CC: , ,
Date: Tue, 6 Jul 2021 16:14:49 +0300
Message-ID: <20210706131450.30917-4-shunh@nvidia.com>
X-Mailer: git-send-email 2.20.0
In-Reply-To: <20210706131450.30917-1-shunh@nvidia.com>
References: <20210706131450.30917-1-shunh@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v1 3/4] net/mlx5: meter hierarchy destroy and cleanup
List-Id: DPDK patches and discussions
Sender: "dev"

When a hierarchy meter is created, its color rules increase the reference
count of the next meter in the hierarchy, so destroying the hierarchy meter
must also decrease the next meter's reference count.

When flushing all the meters of a port, all hierarchy meters and their
policies need to be destroyed first, so that the last meter in each
hierarchy is dereferenced. After that, no meter holds a reference and all
of them can be destroyed.
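For illustration only, the sketch below shows the flush order described
above: walk down a hierarchy and, whenever a meter is no longer referenced,
destroy it together with its policy and drop the reference its color rules
held on the next meter, so the chain unwinds one level at a time. The
toy_meter/toy_policy types and toy_flush_hierarchy() are hypothetical
stand-ins, not the mlx5 structures or driver API touched by this patch.

/*
 * Minimal sketch, assuming a singly linked meter hierarchy with plain
 * reference counts. All names are hypothetical; none belong to the mlx5 PMD.
 */
#include <stdlib.h>

struct toy_meter;

struct toy_policy {
	int is_hierarchy;           /* policy chains to another meter */
	struct toy_meter *next_mtr; /* valid only when is_hierarchy != 0 */
};

struct toy_meter {
	unsigned int ref_cnt;       /* references held by color rules, flows */
	struct toy_policy *policy;  /* policy attached to this meter */
};

/* Destroy every unreferenced meter along one hierarchy. Dropping a meter's
 * policy removes the color rule that pinned the next meter, so the next
 * meter loses one reference and may become destroyable on the next turn. */
static void
toy_flush_hierarchy(struct toy_meter *mtr)
{
	while (mtr != NULL && mtr->ref_cnt == 0) {
		struct toy_policy *pol = mtr->policy;
		struct toy_meter *next =
			pol->is_hierarchy ? pol->next_mtr : NULL;

		if (next != NULL && next->ref_cnt > 0)
			next->ref_cnt--; /* reference held by pol's color rule */
		free(pol);
		free(mtr);
		mtr = next;
	}
}

int
main(void)
{
	/* Two-level hierarchy: m1 chains to m2, so m2 starts referenced. */
	struct toy_meter *m2 = calloc(1, sizeof(*m2));
	struct toy_meter *m1 = calloc(1, sizeof(*m1));
	struct toy_policy *p2 = calloc(1, sizeof(*p2));
	struct toy_policy *p1 = calloc(1, sizeof(*p1));

	m2->policy = p2;
	m2->ref_cnt = 1;
	p1->is_hierarchy = 1;
	p1->next_mtr = m2;
	m1->policy = p1;

	toy_flush_hierarchy(m1); /* frees m1/p1, then m2/p2 */
	return 0;
}

The real driver performs this walk over per-port tables (meters first, then
hierarchy policies) via mlx5_flow_meter_flush_hierarchy() in the diff below.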

Signed-off-by: Shun Hao
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_flow_dv.c    |  15 +++-
 drivers/net/mlx5/mlx5_flow_meter.c | 132 +++++++++++++++++++++++++++++
 2 files changed, 145 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 119de09809..681e6fb07c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14588,12 +14588,20 @@ static void
 __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 				   struct mlx5_flow_meter_sub_policy *sub_policy)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_tbl_data_entry *tbl;
+	struct mlx5_flow_meter_policy *policy = sub_policy->main_policy;
+	struct mlx5_flow_meter_info *next_fm;
 	struct mlx5_sub_policy_color_rule *color_rule;
 	void *tmp;
-	int i;
+	uint32_t i;
 
 	for (i = 0; i < RTE_COLORS; i++) {
+		next_fm = NULL;
+		if (i == RTE_COLOR_GREEN && policy &&
+		    policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
+			next_fm = mlx5_flow_meter_find(priv,
+				policy->act_cnt[i].next_mtr_id, NULL);
 		TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
 				   next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
@@ -14604,11 +14612,14 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 			TAILQ_REMOVE(&sub_policy->color_rules[i],
 				     color_rule, next_port);
 			mlx5_free(color_rule);
+			if (next_fm)
+				mlx5_flow_meter_detach(priv, next_fm);
 		}
 	}
 	for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
 		if (sub_policy->rix_hrxq[i]) {
-			mlx5_hrxq_release(dev, sub_policy->rix_hrxq[i]);
+			if (policy && !policy->is_hierarchy)
+				mlx5_hrxq_release(dev, sub_policy->rix_hrxq[i]);
 			sub_policy->rix_hrxq[i] = 0;
 		}
 		if (sub_policy->jump_tbl[i]) {
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 03f7e120e1..78eb2a60f9 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -1891,6 +1891,136 @@ mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev)
 	}
 }
 
+/**
+ * Iterate a meter hierarchy and flush all meters and policies if possible.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] fm
+ *   Pointer to flow meter.
+ * @param[in] mtr_idx
+ *   Meter's index.
+ * @param[out] error
+ *   Pointer to rte meter error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_meter_flush_hierarchy(struct rte_eth_dev *dev,
+				struct mlx5_flow_meter_info *fm,
+				uint32_t mtr_idx,
+				struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_policy *policy;
+	uint32_t policy_id;
+	struct mlx5_flow_meter_info *next_fm;
+	uint32_t next_mtr_idx;
+	struct mlx5_flow_meter_policy *next_policy = NULL;
+
+	policy = mlx5_flow_meter_policy_find(dev, fm->policy_id, NULL);
+	MLX5_ASSERT(policy);
+	while (!fm->ref_cnt && policy->is_hierarchy) {
+		policy_id = fm->policy_id;
+		next_fm = mlx5_flow_meter_find(priv,
+			policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id,
+			&next_mtr_idx);
+		if (next_fm) {
+			next_policy = mlx5_flow_meter_policy_find(dev,
+							next_fm->policy_id,
+							NULL);
+			MLX5_ASSERT(next_policy);
+		}
+		if (mlx5_flow_meter_params_flush(dev, fm, mtr_idx))
+			return -rte_mtr_error_set(error, ENOTSUP,
+						RTE_MTR_ERROR_TYPE_MTR_ID,
+						NULL,
+						"Failed to flush meter.");
+		if (policy->ref_cnt)
+			break;
+		if (__mlx5_flow_meter_policy_delete(dev, policy_id,
+						    policy, error, true))
+			return -rte_errno;
+		mlx5_free(policy);
+		if (!next_fm || !next_policy)
+			break;
+		fm = next_fm;
+		mtr_idx = next_mtr_idx;
+		policy = next_policy;
+	}
+	return 0;
+}
+
+/**
+ * Flush all the hierarchy meters and their policies.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[out] error
+ *   Pointer to rte meter error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_meter_flush_all_hierarchies(struct rte_eth_dev *dev,
+				      struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_info *fm;
+	struct mlx5_flow_meter_policy *policy;
+	struct mlx5_flow_meter_sub_policy *sub_policy;
+	struct mlx5_flow_meter_info *next_fm;
+	struct mlx5_aso_mtr *aso_mtr;
+	uint32_t mtr_idx = 0;
+	uint32_t i, policy_idx;
+	void *entry;
+
+	if (!priv->mtr_idx_tbl || !priv->policy_idx_tbl)
+		return 0;
+	MLX5_L3T_FOREACH(priv->mtr_idx_tbl, i, entry) {
+		mtr_idx = *(uint32_t *)entry;
+		if (!mtr_idx)
+			continue;
+		aso_mtr = mlx5_aso_meter_by_idx(priv, mtr_idx);
+		fm = &aso_mtr->fm;
+		if (fm->ref_cnt || fm->def_policy)
+			continue;
+		if (mlx5_flow_meter_flush_hierarchy(dev, fm, mtr_idx, error))
+			return -rte_errno;
+	}
+	MLX5_L3T_FOREACH(priv->policy_idx_tbl, i, entry) {
+		policy_idx = *(uint32_t *)entry;
+		sub_policy = mlx5_ipool_get
+			(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+			policy_idx);
+		if (!sub_policy)
+			return -rte_mtr_error_set(error,
+					EINVAL,
+					RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					NULL, "Meter policy invalid.");
+		policy = sub_policy->main_policy;
+		if (!policy || !policy->is_hierarchy || policy->ref_cnt)
+			continue;
+		next_fm = mlx5_flow_meter_find(priv,
+			policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id,
+			&mtr_idx);
+		if (__mlx5_flow_meter_policy_delete(dev, i, policy,
+						    error, true))
+			return -rte_mtr_error_set(error,
+					EINVAL,
+					RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					NULL, "Meter policy invalid.");
+		mlx5_free(policy);
+		if (!next_fm || next_fm->ref_cnt || next_fm->def_policy)
+			continue;
+		if (mlx5_flow_meter_flush_hierarchy(dev, next_fm,
+						    mtr_idx, error))
+			return -rte_errno;
+	}
+	return 0;
+}
 /**
  * Flush meter configuration.
  *
@@ -1919,6 +2049,8 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 	if (!priv->mtr_en)
 		return 0;
 	if (priv->sh->meter_aso_en) {
+		if (mlx5_flow_meter_flush_all_hierarchies(dev, error))
+			return -rte_errno;
 		if (priv->mtr_idx_tbl) {
 			MLX5_L3T_FOREACH(priv->mtr_idx_tbl, i, entry) {
 				mtr_idx = *(uint32_t *)entry;
-- 
2.21.0