From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bing Zhao <bingz@nvidia.com>
Date: Wed, 21 Jul 2021 11:54:18 +0300
Message-ID: <20210721085421.13111-5-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210721085421.13111-1-bingz@nvidia.com>
References: <20210705155756.21443-1-bingz@nvidia.com> <20210721085421.13111-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v3 4/7] net/mlx5: split policies handling of colors
List-Id: DPDK patches and discussions <dev.dpdk.org>

Sender: "dev" If the fate action is either RSS or Queue of a meter policy, the action will only be created in the flow splitting stage. With queue as the fate action, only one sub-policy is needed. And RSS will have more than one sub-policies if there is an expansion. Since the RSS parameters are the same for both green and yellow colors except the queues, the expansion result will be unique. Even if only one color has the RSS action, the checking and possible expansion will be done then. For each sub-policy, the action rules need to be created separately on its own policy table. Signed-off-by: Bing Zhao --- drivers/net/mlx5/mlx5_flow.c | 40 ++++++++++---------- drivers/net/mlx5/mlx5_flow_dv.c | 67 +++++++++++++++++---------------- 2 files changed, 55 insertions(+), 52 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 347e8c1a09..d90c8cd314 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4687,7 +4687,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS] = {0}; uint32_t i; - /** + /* * This is a tmp dev_flow, * no need to register any matcher for it in translate. */ @@ -4695,18 +4695,19 @@ get_meter_sub_policy(struct rte_eth_dev *dev, for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { struct mlx5_flow dev_flow = {0}; struct mlx5_flow_handle dev_handle = { {0} }; + uint8_t fate = final_policy->act_cnt[i].fate_action; - if (final_policy->is_rss) { + if (fate == MLX5_FLOW_FATE_SHARED_RSS) { const void *rss_act = final_policy->act_cnt[i].rss->conf; struct rte_flow_action rss_actions[2] = { [0] = { .type = RTE_FLOW_ACTION_TYPE_RSS, - .conf = rss_act + .conf = rss_act, }, [1] = { .type = RTE_FLOW_ACTION_TYPE_END, - .conf = NULL + .conf = NULL, } }; @@ -4731,9 +4732,10 @@ get_meter_sub_policy(struct rte_eth_dev *dev, rss_desc_v[i].hash_fields ? rss_desc_v[i].queue_num : 1; rss_desc_v[i].tunnel = - !!(dev_flow.handle->layers & - MLX5_FLOW_LAYER_TUNNEL); - } else { + !!(dev_flow.handle->layers & + MLX5_FLOW_LAYER_TUNNEL); + rss_desc[i] = &rss_desc_v[i]; + } else if (fate == MLX5_FLOW_FATE_QUEUE) { /* This is queue action. */ rss_desc_v[i] = wks->rss_desc; rss_desc_v[i].key_len = 0; @@ -4741,24 +4743,24 @@ get_meter_sub_policy(struct rte_eth_dev *dev, rss_desc_v[i].queue = &final_policy->act_cnt[i].queue; rss_desc_v[i].queue_num = 1; + rss_desc[i] = &rss_desc_v[i]; + } else { + rss_desc[i] = NULL; } - rss_desc[i] = &rss_desc_v[i]; } sub_policy = flow_drv_meter_sub_policy_rss_prepare(dev, flow, policy, rss_desc); } else { enum mlx5_meter_domain mtr_domain = attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER : - attr->egress ? MLX5_MTR_DOMAIN_EGRESS : - MLX5_MTR_DOMAIN_INGRESS; + (attr->egress ? MLX5_MTR_DOMAIN_EGRESS : + MLX5_MTR_DOMAIN_INGRESS); sub_policy = policy->sub_policys[mtr_domain][0]; } - if (!sub_policy) { + if (!sub_policy) rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Failed to get meter sub-policy."); - goto exit; - } + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to get meter sub-policy."); exit: return sub_policy; } @@ -4956,8 +4958,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev, } else { enum mlx5_meter_domain mtr_domain = attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER : - attr->egress ? MLX5_MTR_DOMAIN_EGRESS : - MLX5_MTR_DOMAIN_INGRESS; + (attr->egress ? 
MLX5_MTR_DOMAIN_EGRESS : + MLX5_MTR_DOMAIN_INGRESS); sub_policy = &priv->sh->mtrmng->def_policy[mtr_domain]->sub_policy; @@ -4973,8 +4975,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev, actions_pre++; if (!tag_action) return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "No tag action space."); + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "No tag action space."); if (!mtr_flow_id) { tag_action->type = RTE_FLOW_ACTION_TYPE_VOID; goto exit; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 2400565232..ee593a7001 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -15070,11 +15070,11 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, next_port, tmp) { claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule)); tbl = container_of(color_rule->matcher->tbl, - typeof(*tbl), tbl); + typeof(*tbl), tbl); mlx5_list_unregister(tbl->matchers, - &color_rule->matcher->entry); + &color_rule->matcher->entry); TAILQ_REMOVE(&sub_policy->color_rules[i], - color_rule, next_port); + color_rule, next_port); mlx5_free(color_rule); if (next_fm) mlx5_flow_meter_detach(priv, next_fm); @@ -15088,13 +15088,13 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, } if (sub_policy->jump_tbl[i]) { flow_dv_tbl_resource_release(MLX5_SH(dev), - sub_policy->jump_tbl[i]); + sub_policy->jump_tbl[i]); sub_policy->jump_tbl[i] = NULL; } } if (sub_policy->tbl_rsc) { flow_dv_tbl_resource_release(MLX5_SH(dev), - sub_policy->tbl_rsc); + sub_policy->tbl_rsc); sub_policy->tbl_rsc = NULL; } } @@ -15111,7 +15111,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, */ static void flow_dv_destroy_policy_rules(struct rte_eth_dev *dev, - struct mlx5_flow_meter_policy *mtr_policy) + struct mlx5_flow_meter_policy *mtr_policy) { uint32_t i, j; struct mlx5_flow_meter_sub_policy *sub_policy; @@ -15124,8 +15124,8 @@ flow_dv_destroy_policy_rules(struct rte_eth_dev *dev, for (j = 0; j < sub_policy_num; j++) { sub_policy = mtr_policy->sub_policys[i][j]; if (sub_policy) - __flow_dv_destroy_sub_policy_rules - (dev, sub_policy); + __flow_dv_destroy_sub_policy_rules(dev, + sub_policy); } } } @@ -16158,6 +16158,7 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, bool match_src_port = false; int i; + /* If RSS or Queue, no previous actions / rules is created. */ for (i = 0; i < RTE_COLORS; i++) { acts[i].actions_n = 0; if (i == RTE_COLOR_RED) { @@ -16657,37 +16658,36 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, sub_policy_num = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; - for (i = 0; i < sub_policy_num; - i++) { - for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) { - if (rss_desc[j] && - hrxq_idx[j] != - mtr_policy->sub_policys[domain][i]->rix_hrxq[j]) + for (j = 0; j < sub_policy_num; j++) { + for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { + if (rss_desc[i] && + hrxq_idx[i] != + mtr_policy->sub_policys[domain][j]->rix_hrxq[i]) break; } - if (j >= MLX5_MTR_RTE_COLORS) { + if (i >= MLX5_MTR_RTE_COLORS) { /* * Found the sub policy table with - * the same queue per color + * the same queue per color. */ rte_spinlock_unlock(&mtr_policy->sl); - for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) - mlx5_hrxq_release(dev, hrxq_idx[j]); + for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) + mlx5_hrxq_release(dev, hrxq_idx[i]); *is_reuse = true; - return mtr_policy->sub_policys[domain][i]; + return mtr_policy->sub_policys[domain][j]; } } /* Create sub policy. 
*/ if (!mtr_policy->sub_policys[domain][0]->rix_hrxq[0]) { - /* Reuse the first dummy sub_policy*/ + /* Reuse the first pre-allocated sub_policy. */ sub_policy = mtr_policy->sub_policys[domain][0]; sub_policy_idx = sub_policy->idx; } else { sub_policy = mlx5_ipool_zmalloc (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], - &sub_policy_idx); + &sub_policy_idx); if (!sub_policy || - sub_policy_idx > MLX5_MAX_SUB_POLICY_TBL_NUM) { + sub_policy_idx > MLX5_MAX_SUB_POLICY_TBL_NUM) { for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) mlx5_hrxq_release(dev, hrxq_idx[i]); goto rss_sub_policy_error; @@ -16709,9 +16709,9 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, * RSS action to Queue action. */ hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], - hrxq_idx[i]); + hrxq_idx[i]); if (!hrxq) { - DRV_LOG(ERR, "Failed to create policy hrxq"); + DRV_LOG(ERR, "Failed to get policy hrxq"); goto rss_sub_policy_error; } act_cnt = &mtr_policy->act_cnt[i]; @@ -16726,19 +16726,21 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, } } if (__flow_dv_create_policy_acts_rules(dev, mtr_policy, - sub_policy, domain)) { + sub_policy, domain)) { DRV_LOG(ERR, "Failed to create policy " - "rules per domain."); + "rules for ingress domain."); goto rss_sub_policy_error; } if (sub_policy != mtr_policy->sub_policys[domain][0]) { i = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; + if (i >= MLX5_MTR_RSS_MAX_SUB_POLICY) { + DRV_LOG(ERR, "No free sub-policy slot."); + goto rss_sub_policy_error; + } mtr_policy->sub_policys[domain][i] = sub_policy; i++; - if (i > MLX5_MTR_RSS_MAX_SUB_POLICY) - goto rss_sub_policy_error; mtr_policy->sub_policy_num &= ~(MLX5_MTR_SUB_POLICY_NUM_MASK << (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)); mtr_policy->sub_policy_num |= @@ -16756,8 +16758,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; mtr_policy->sub_policys[domain][i] = NULL; - mlx5_ipool_free - (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], sub_policy->idx); } } @@ -16818,7 +16819,7 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, while (i) { /** * From last policy to the first one in hierarchy, - * create/get the sub policy for each of them. + * create / get the sub policy for each of them. */ sub_policy = __flow_dv_meter_get_rss_sub_policy(dev, policies[--i], @@ -17022,7 +17023,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev, */ static void flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev, - struct mlx5_flow_meter_policy *mtr_policy) + struct mlx5_flow_meter_policy *mtr_policy) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_meter_sub_policy *sub_policy = NULL; @@ -17068,7 +17069,7 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev, case MLX5_FLOW_FATE_QUEUE: sub_policy = mtr_policy->sub_policys[domain][0]; __flow_dv_destroy_sub_policy_rules(dev, - sub_policy); + sub_policy); break; default: /*Other actions without queue and do nothing*/ -- 2.27.0
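
Note: as a rough, standalone illustration of the per-color split described in
the commit message (not the mlx5 driver code), the sketch below uses
simplified stand-in names (color_fate, color_action, rss_desc,
prepare_color_rss_desc are hypothetical) to show how each color's fate picks
an RSS-backed descriptor, a single-queue descriptor, or no sub-policy at all:

#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for illustration only; the real driver uses
 * mlx5_flow_meter_policy, mlx5_flow_rss_desc, etc. */
enum color_fate {
	FATE_NONE = 0,
	FATE_SHARED_RSS, /* may expand into more than one sub-policy */
	FATE_QUEUE,      /* single queue, exactly one sub-policy */
};

struct color_action {
	enum color_fate fate;
	const void *rss_conf; /* RSS parameters, used when fate is RSS */
	uint16_t queue;       /* destination queue, used when fate is Queue */
};

struct rss_desc {
	const uint16_t *queues;
	uint32_t queue_num;
	uint32_t key_len;
	uint64_t hash_fields;
};

#define MTR_COLORS 2 /* green and yellow; red only drops */

/*
 * Fill one descriptor slot per color: RSS and Queue fates get a
 * descriptor (Queue degenerates to an RSS descriptor with exactly one
 * queue and no hashing), any other fate leaves the slot NULL so no
 * sub-policy table is prepared for that color.
 */
void
prepare_color_rss_desc(const struct color_action acts[MTR_COLORS],
		       struct rss_desc storage[MTR_COLORS],
		       struct rss_desc *out[MTR_COLORS])
{
	for (int i = 0; i < MTR_COLORS; i++) {
		storage[i] = (struct rss_desc){0};
		switch (acts[i].fate) {
		case FATE_SHARED_RSS:
			/* In the driver the RSS conf would be expanded
			 * here; this sketch only marks the slot as
			 * RSS-backed. */
			out[i] = &storage[i];
			break;
		case FATE_QUEUE:
			storage[i].queues = &acts[i].queue;
			storage[i].queue_num = 1;
			out[i] = &storage[i];
			break;
		default:
			out[i] = NULL; /* no sub-policy needed */
			break;
		}
	}
}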