From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bing Zhao
To: ,
CC: , , , , ,
Date: Sun, 18 Jul 2021 20:18:14 +0300
Message-ID: <20210718171817.23822-5-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210718171817.23822-1-bingz@nvidia.com>
References: <20210705155756.21443-1-bingz@nvidia.com> <20210718171817.23822-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 4/7] net/mlx5: split policies handling of colors
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

If the fate action of a meter policy is either RSS or Queue, the
action is only created in the flow splitting stage. With Queue as the
fate action, only one sub-policy is needed, while RSS may need more
than one sub-policy if there is an expansion. Since the RSS parameters
are the same for the green and yellow colors except for the queues,
the expansion result is unique. Even if only one color has an RSS
action, the checking and possible expansion are still done at that
point.

For each sub-policy, the action rules need to be created separately
in its own policy table.

Signed-off-by: Bing Zhao
---
 drivers/net/mlx5/mlx5_flow.c    | 40 ++++++++++----------
 drivers/net/mlx5/mlx5_flow_dv.c | 67 +++++++++++++++++----------------
 2 files changed, 55 insertions(+), 52 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 347e8c1a09..d90c8cd314 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4687,7 +4687,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
 	struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS] = {0};
 	uint32_t i;
 
-	/**
+	/*
 	 * This is a tmp dev_flow,
 	 * no need to register any matcher for it in translate.
 	 */
@@ -4695,18 +4695,19 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
 		for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
 			struct mlx5_flow dev_flow = {0};
 			struct mlx5_flow_handle dev_handle = { {0} };
+			uint8_t fate = final_policy->act_cnt[i].fate_action;
 
-			if (final_policy->is_rss) {
+			if (fate == MLX5_FLOW_FATE_SHARED_RSS) {
 				const void *rss_act =
 					final_policy->act_cnt[i].rss->conf;
 				struct rte_flow_action rss_actions[2] = {
 					[0] = {
 					.type = RTE_FLOW_ACTION_TYPE_RSS,
-					.conf = rss_act
+					.conf = rss_act,
 					},
 					[1] = {
 					.type = RTE_FLOW_ACTION_TYPE_END,
-					.conf = NULL
+					.conf = NULL,
 					}
 				};
 
@@ -4731,9 +4732,10 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
 					rss_desc_v[i].hash_fields ?
 					rss_desc_v[i].queue_num : 1;
 				rss_desc_v[i].tunnel =
-						!!(dev_flow.handle->layers &
-						   MLX5_FLOW_LAYER_TUNNEL);
-			} else {
+					!!(dev_flow.handle->layers &
+					   MLX5_FLOW_LAYER_TUNNEL);
+				rss_desc[i] = &rss_desc_v[i];
+			} else if (fate == MLX5_FLOW_FATE_QUEUE) {
 				/* This is queue action. */
 				rss_desc_v[i] = wks->rss_desc;
 				rss_desc_v[i].key_len = 0;
@@ -4741,24 +4743,24 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
 				rss_desc_v[i].queue =
 					&final_policy->act_cnt[i].queue;
 				rss_desc_v[i].queue_num = 1;
+				rss_desc[i] = &rss_desc_v[i];
+			} else {
+				rss_desc[i] = NULL;
 			}
-			rss_desc[i] = &rss_desc_v[i];
 		}
 		sub_policy = flow_drv_meter_sub_policy_rss_prepare(dev,
 						flow, policy, rss_desc);
 	} else {
 		enum mlx5_meter_domain mtr_domain =
 			attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER :
-				attr->egress ? MLX5_MTR_DOMAIN_EGRESS :
-					MLX5_MTR_DOMAIN_INGRESS;
+				(attr->egress ? MLX5_MTR_DOMAIN_EGRESS :
+					MLX5_MTR_DOMAIN_INGRESS);
 		sub_policy = policy->sub_policys[mtr_domain][0];
 	}
-	if (!sub_policy) {
+	if (!sub_policy)
 		rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Failed to get meter sub-policy.");
-		goto exit;
-	}
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to get meter sub-policy.");
 exit:
 	return sub_policy;
 }
@@ -4956,8 +4958,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
 	} else {
 		enum mlx5_meter_domain mtr_domain =
 			attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER :
-				attr->egress ? MLX5_MTR_DOMAIN_EGRESS :
-					MLX5_MTR_DOMAIN_INGRESS;
+				(attr->egress ? MLX5_MTR_DOMAIN_EGRESS :
+					MLX5_MTR_DOMAIN_INGRESS);
 		sub_policy =
 			&priv->sh->mtrmng->def_policy[mtr_domain]->sub_policy;
 	}
@@ -4973,8 +4975,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
 	actions_pre++;
 	if (!tag_action)
 		return rte_flow_error_set(error, ENOMEM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					"No tag action space.");
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "No tag action space.");
 	if (!mtr_flow_id) {
 		tag_action->type = RTE_FLOW_ACTION_TYPE_VOID;
 		goto exit;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ffe97d453a..c617e8801a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15070,11 +15070,11 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 				next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
 			tbl = container_of(color_rule->matcher->tbl,
-					typeof(*tbl), tbl);
+					   typeof(*tbl), tbl);
 			mlx5_list_unregister(tbl->matchers,
-						&color_rule->matcher->entry);
+					     &color_rule->matcher->entry);
 			TAILQ_REMOVE(&sub_policy->color_rules[i],
-					color_rule, next_port);
+				     color_rule, next_port);
 			mlx5_free(color_rule);
 			if (next_fm)
 				mlx5_flow_meter_detach(priv, next_fm);
@@ -15088,13 +15088,13 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 		}
 		if (sub_policy->jump_tbl[i]) {
 			flow_dv_tbl_resource_release(MLX5_SH(dev),
-			sub_policy->jump_tbl[i]);
+						     sub_policy->jump_tbl[i]);
 			sub_policy->jump_tbl[i] = NULL;
 		}
 	}
 	if (sub_policy->tbl_rsc) {
 		flow_dv_tbl_resource_release(MLX5_SH(dev),
-			sub_policy->tbl_rsc);
+					     sub_policy->tbl_rsc);
 		sub_policy->tbl_rsc = NULL;
 	}
 }
@@ -15111,7 +15111,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
  */
 static void
 flow_dv_destroy_policy_rules(struct rte_eth_dev *dev,
-		      struct mlx5_flow_meter_policy *mtr_policy)
+			     struct mlx5_flow_meter_policy *mtr_policy)
 {
 	uint32_t i, j;
 	struct mlx5_flow_meter_sub_policy *sub_policy;
@@ -15124,8 +15124,8 @@ flow_dv_destroy_policy_rules(struct rte_eth_dev *dev,
 		for (j = 0; j < sub_policy_num; j++) {
 			sub_policy = mtr_policy->sub_policys[i][j];
 			if (sub_policy)
-				__flow_dv_destroy_sub_policy_rules
-						(dev, sub_policy);
+				__flow_dv_destroy_sub_policy_rules(dev,
+								   sub_policy);
 		}
 	}
 }
@@ -16162,6 +16162,7 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev,
 	bool match_src_port = false;
 	int i;
 
+	/* If RSS or Queue, no previous actions / rules is created. */
 	for (i = 0; i < RTE_COLORS; i++) {
 		acts[i].actions_n = 0;
 		if (i == RTE_COLOR_RED) {
@@ -16661,37 +16662,36 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 	sub_policy_num = (mtr_policy->sub_policy_num >>
 			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
 			MLX5_MTR_SUB_POLICY_NUM_MASK;
-	for (i = 0; i < sub_policy_num;
-	     i++) {
-		for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
-			if (rss_desc[j] &&
-			    hrxq_idx[j] !=
-			    mtr_policy->sub_policys[domain][i]->rix_hrxq[j])
+	for (j = 0; j < sub_policy_num; j++) {
+		for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
+			if (rss_desc[i] &&
+			    hrxq_idx[i] !=
+			    mtr_policy->sub_policys[domain][j]->rix_hrxq[i])
 				break;
 		}
-		if (j >= MLX5_MTR_RTE_COLORS) {
+		if (i >= MLX5_MTR_RTE_COLORS) {
 			/*
 			 * Found the sub policy table with
-			 * the same queue per color
+			 * the same queue per color.
 			 */
 			rte_spinlock_unlock(&mtr_policy->sl);
-			for (j = 0; j < MLX5_MTR_RTE_COLORS; j++)
-				mlx5_hrxq_release(dev, hrxq_idx[j]);
+			for (i = 0; i < MLX5_MTR_RTE_COLORS; i++)
+				mlx5_hrxq_release(dev, hrxq_idx[i]);
 			*is_reuse = true;
-			return mtr_policy->sub_policys[domain][i];
+			return mtr_policy->sub_policys[domain][j];
 		}
 	}
 	/* Create sub policy. */
 	if (!mtr_policy->sub_policys[domain][0]->rix_hrxq[0]) {
-		/* Reuse the first dummy sub_policy*/
+		/* Reuse the first pre-allocated sub_policy. */
 		sub_policy = mtr_policy->sub_policys[domain][0];
 		sub_policy_idx = sub_policy->idx;
 	} else {
 		sub_policy = mlx5_ipool_zmalloc
 				(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
-				&sub_policy_idx);
+				 &sub_policy_idx);
 		if (!sub_policy ||
-			sub_policy_idx > MLX5_MAX_SUB_POLICY_TBL_NUM) {
+		    sub_policy_idx > MLX5_MAX_SUB_POLICY_TBL_NUM) {
 			for (i = 0; i < MLX5_MTR_RTE_COLORS; i++)
 				mlx5_hrxq_release(dev, hrxq_idx[i]);
 			goto rss_sub_policy_error;
@@ -16713,9 +16713,9 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 		 * RSS action to Queue action.
 		 */
 		hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
-				hrxq_idx[i]);
+				      hrxq_idx[i]);
 		if (!hrxq) {
-			DRV_LOG(ERR, "Failed to create policy hrxq");
+			DRV_LOG(ERR, "Failed to get policy hrxq");
 			goto rss_sub_policy_error;
 		}
 		act_cnt = &mtr_policy->act_cnt[i];
@@ -16730,19 +16730,21 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 		}
 	}
 	if (__flow_dv_create_policy_acts_rules(dev, mtr_policy,
-		sub_policy, domain)) {
+					       sub_policy, domain)) {
 		DRV_LOG(ERR, "Failed to create policy "
-			"rules per domain.");
+			"rules for ingress domain.");
 		goto rss_sub_policy_error;
 	}
 	if (sub_policy != mtr_policy->sub_policys[domain][0]) {
 		i = (mtr_policy->sub_policy_num >>
 			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
 			MLX5_MTR_SUB_POLICY_NUM_MASK;
+		if (i >= MLX5_MTR_RSS_MAX_SUB_POLICY) {
+			DRV_LOG(ERR, "No free sub-policy slot.");
+			goto rss_sub_policy_error;
+		}
 		mtr_policy->sub_policys[domain][i] = sub_policy;
 		i++;
-		if (i > MLX5_MTR_RSS_MAX_SUB_POLICY)
-			goto rss_sub_policy_error;
 		mtr_policy->sub_policy_num &=
 			~(MLX5_MTR_SUB_POLICY_NUM_MASK <<
 			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain));
 		mtr_policy->sub_policy_num |=
@@ -16760,8 +16762,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
 			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
 			MLX5_MTR_SUB_POLICY_NUM_MASK;
 		mtr_policy->sub_policys[domain][i] = NULL;
-		mlx5_ipool_free
-			(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
 				sub_policy->idx);
 	}
 }
@@ -16822,7 +16823,7 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev,
 	while (i) {
 		/**
 		 * From last policy to the first one in hierarchy,
-		 * create/get the sub policy for each of them.
+		 * create / get the sub policy for each of them.
 		 */
 		sub_policy = __flow_dv_meter_get_rss_sub_policy(dev,
 								policies[--i],
@@ -17026,7 +17027,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
  */
 static void
 flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
-		struct mlx5_flow_meter_policy *mtr_policy)
+				    struct mlx5_flow_meter_policy *mtr_policy)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_meter_sub_policy *sub_policy = NULL;
@@ -17072,7 +17073,7 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
 		case MLX5_FLOW_FATE_QUEUE:
 			sub_policy = mtr_policy->sub_policys[domain][0];
 			__flow_dv_destroy_sub_policy_rules(dev,
-							sub_policy);
+							   sub_policy);
 			break;
 		default:
 			/*Other actions without queue and do nothing*/
-- 
2.27.0