From: Li Zhang
Date: Thu, 13 May 2021 10:35:50 +0300
Message-ID: <20210513073552.3213962-1-lizh@nvidia.com>
X-Mailer: git-send-email 2.21.0
Subject: [dpdk-dev] [PATCH v1] net/mlx5: fix meter cleaning in stop operation

A meter may hold Rx queue references in its sub-policies. In the stop
operation, all the Rx queues are released. The meter references were
wrongly not released before destroying the Rx queues, which caused an
error during stop.

Release the Rx queue meter references in the stop operation.

Fixes: fc6ce56bba ("net/mlx5: prepare sub-policy for flow with meter")

Signed-off-by: Li Zhang
---
 drivers/net/mlx5/mlx5.h            |   5 ++
 drivers/net/mlx5/mlx5_flow.c       |  77 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow.h       |   6 ++
 drivers/net/mlx5/mlx5_flow_dv.c    | 109 +++++++++++++++++++----------
 drivers/net/mlx5/mlx5_flow_meter.c |  36 +++++++++-
 drivers/net/mlx5/mlx5_trigger.c    |   2 +-
 6 files changed, 175 insertions(+), 60 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7eca6a6fa6..b8a29dd369 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -668,6 +668,8 @@ struct mlx5_meter_policy_action_container {
         /* Index to port ID action resource. */
         void *dr_jump_action[MLX5_MTR_DOMAIN_MAX];
         /* Jump/drop action per color. */
+        uint16_t queue;
+        /* Queue action configuration. */
     };
 };
 
@@ -681,6 +683,8 @@ struct mlx5_flow_meter_policy {
     /* Rule applies to egress domain. */
     uint32_t transfer:1;
     /* Rule applies to transfer domain. */
+    uint32_t is_queue:1;
+    /* Is queue action in policy table. */
     rte_spinlock_t sl;
     uint32_t ref_cnt;
     /* Use count. */
@@ -1655,6 +1659,7 @@ struct mlx5_flow_meter_policy *mlx5_flow_meter_policy_find
         uint32_t *policy_idx);
 int mlx5_flow_meter_flush(struct rte_eth_dev *dev,
         struct rte_mtr_error *error);
+void mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev);
 
 /* mlx5_os.c */
 struct rte_pci_driver;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 23d4224ec5..dbeca571b6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4567,7 +4567,9 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
                       "Failed to find Meter Policy.");
         goto exit;
     }
-    if (policy->is_rss) {
+    if (policy->is_rss ||
+        (policy->is_queue &&
+         !policy->sub_policys[MLX5_MTR_DOMAIN_INGRESS][0]->rix_hrxq[0])) {
         struct mlx5_flow_workspace *wks =
                 mlx5_flow_get_thread_workspace();
         struct mlx5_flow_rss_desc rss_desc_v[MLX5_MTR_RTE_COLORS];
@@ -4583,34 +4585,49 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
         for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
             struct mlx5_flow dev_flow = {0};
             struct mlx5_flow_handle dev_handle = { {0} };
-            const void *rss_act = policy->act_cnt[i].rss->conf;
-            struct rte_flow_action rss_actions[2] = {
-                [0] = {
+
+            rss_desc_v[i] = wks->rss_desc;
+            if (policy->is_rss) {
+                const void *rss_act =
+                    policy->act_cnt[i].rss->conf;
+                struct rte_flow_action rss_actions[2] = {
+                    [0] = {
                     .type = RTE_FLOW_ACTION_TYPE_RSS,
                     .conf = rss_act
-                },
-                [1] = {
+                    },
+                    [1] = {
                     .type = RTE_FLOW_ACTION_TYPE_END,
                     .conf = NULL
-                }
-            };
+                    }
+                };
 
-            dev_flow.handle = &dev_handle;
-            dev_flow.ingress = attr->ingress;
-            dev_flow.flow = flow;
-            dev_flow.external = 0;
+                dev_flow.handle = &dev_handle;
+                dev_flow.ingress = attr->ingress;
+                dev_flow.flow = flow;
+                dev_flow.external = 0;
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-            dev_flow.dv.transfer = attr->transfer;
+                dev_flow.dv.transfer = attr->transfer;
 #endif
-            /* Translate RSS action to get rss hash fields. */
-            if (flow_drv_translate(dev, &dev_flow, attr,
+                /**
+                 * Translate RSS action to get rss hash fields.
+                 */
+                if (flow_drv_translate(dev, &dev_flow, attr,
                     items, rss_actions, error))
-                goto exit;
-            rss_desc_v[i] = wks->rss_desc;
-            rss_desc_v[i].key_len = MLX5_RSS_HASH_KEY_LEN;
-            rss_desc_v[i].hash_fields = dev_flow.hash_fields;
-            rss_desc_v[i].queue_num = rss_desc_v[i].hash_fields ?
-                    rss_desc_v[i].queue_num : 1;
+                    goto exit;
+                rss_desc_v[i].key_len = MLX5_RSS_HASH_KEY_LEN;
+                rss_desc_v[i].hash_fields =
+                    dev_flow.hash_fields;
+                rss_desc_v[i].queue_num =
+                    rss_desc_v[i].hash_fields ?
+                    rss_desc_v[i].queue_num : 1;
+            } else {
+                /* This is queue action. */
+                rss_desc_v[i].key_len = 0;
+                rss_desc_v[i].hash_fields = 0;
+                rss_desc_v[i].queue =
+                    &policy->act_cnt[i].queue;
+                rss_desc_v[i].queue_num = 1;
+            }
             rss_desc[i] = &rss_desc_v[i];
         }
         sub_policy = flow_drv_meter_sub_policy_rss_prepare(dev,
@@ -7223,6 +7240,24 @@ mlx5_flow_destroy_mtr_drop_tbls(struct rte_eth_dev *dev)
     fops->destroy_mtr_drop_tbls(dev);
 }
 
+/**
+ * Destroy the sub policy table with RX queue.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] mtr_policy
+ *   Pointer to meter policy table.
+ */
+void
+mlx5_flow_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
+        struct mlx5_flow_meter_policy *mtr_policy)
+{
+    const struct mlx5_flow_driver_ops *fops;
+
+    fops = flow_get_drv_ops(MLX5_FLOW_TYPE_DV);
+    fops->destroy_sub_policy_with_rxq(dev, mtr_policy);
+}
+
 /**
  * Allocate the needed aso flow meter id.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 04c8806bf6..2f2aa962f9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1156,6 +1156,9 @@ typedef struct mlx5_flow_meter_sub_policy *
     (struct rte_eth_dev *dev,
     struct mlx5_flow_meter_policy *mtr_policy,
     struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
+typedef void (*mlx5_flow_destroy_sub_policy_with_rxq_t)
+    (struct rte_eth_dev *dev,
+    struct mlx5_flow_meter_policy *mtr_policy);
 typedef uint32_t (*mlx5_flow_mtr_alloc_t)
     (struct rte_eth_dev *dev);
 typedef void (*mlx5_flow_mtr_free_t)(struct rte_eth_dev *dev,
@@ -1249,6 +1252,7 @@ struct mlx5_flow_driver_ops {
     mlx5_flow_create_def_policy_t create_def_policy;
     mlx5_flow_destroy_def_policy_t destroy_def_policy;
     mlx5_flow_meter_sub_policy_rss_prepare_t meter_sub_policy_rss_prepare;
+    mlx5_flow_destroy_sub_policy_with_rxq_t destroy_sub_policy_with_rxq;
     mlx5_flow_counter_alloc_t counter_alloc;
     mlx5_flow_counter_free_t counter_free;
     mlx5_flow_counter_query_t counter_query;
@@ -1562,6 +1566,8 @@ struct mlx5_flow_meter_sub_policy *mlx5_flow_meter_sub_policy_rss_prepare
     (struct rte_eth_dev *dev,
     struct mlx5_flow_meter_policy *mtr_policy,
     struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
+void mlx5_flow_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
+        struct mlx5_flow_meter_policy *mtr_policy);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7fc7efbc5c..c7a0a38650 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14707,12 +14707,6 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
             MLX5_ASSERT(dev_flow.dv.tag_resource);
             act_cnt->rix_mark =
                 dev_flow.handle->dvh.rix_tag;
-            if (action_flags & MLX5_FLOW_ACTION_QUEUE) {
-                dev_flow.handle->rix_hrxq =
-            mtr_policy->sub_policys[domain][0]->rix_hrxq[i];
-                flow_drv_rxq_flags_set(dev,
-                    dev_flow.handle);
-            }
             action_flags |= MLX5_FLOW_ACTION_MARK;
             break;
         }
@@ -14760,12 +14754,6 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
                     "set tag action");
             act_cnt->modify_hdr =
                 dev_flow.handle->dvh.modify_hdr;
-            if (action_flags & MLX5_FLOW_ACTION_QUEUE) {
-                dev_flow.handle->rix_hrxq =
-            mtr_policy->sub_policys[domain][0]->rix_hrxq[i];
-                flow_drv_rxq_flags_set(dev,
-                    dev_flow.handle);
-            }
             action_flags |= MLX5_FLOW_ACTION_SET_TAG;
             break;
         }
@@ -14809,41 +14797,20 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
         }
         case RTE_FLOW_ACTION_TYPE_QUEUE:
         {
-            struct mlx5_hrxq *hrxq;
-            uint32_t hrxq_idx;
-            struct mlx5_flow_rss_desc rss_desc;
-            struct mlx5_flow_meter_sub_policy *sub_policy =
-                mtr_policy->sub_policys[domain][0];
-
             if (i >= MLX5_MTR_RTE_COLORS)
                 return -rte_mtr_error_set(error, ENOTSUP,
                     RTE_MTR_ERROR_TYPE_METER_POLICY,
                     NULL, "cannot create policy "
                     "fate queue for this color");
-            memset(&rss_desc, 0,
-                sizeof(struct mlx5_flow_rss_desc));
-            rss_desc.queue_num = 1;
-            rss_desc.const_q = act->conf;
-            hrxq = flow_dv_hrxq_prepare(dev, &dev_flow,
-                    &rss_desc, &hrxq_idx);
-            if (!hrxq)
-                return -rte_mtr_error_set(error,
-                    ENOTSUP,
-                    RTE_MTR_ERROR_TYPE_METER_POLICY,
-                    NULL,
-                    "cannot create policy fate queue");
-            sub_policy->rix_hrxq[i] = hrxq_idx;
+            act_cnt->queue =
+                ((const struct rte_flow_action_queue *)
+                (act->conf))->index;
             act_cnt->fate_action = MLX5_FLOW_FATE_QUEUE;
             dev_flow.handle->fate_action = MLX5_FLOW_FATE_QUEUE;
-            if (action_flags & MLX5_FLOW_ACTION_MARK ||
-                action_flags & MLX5_FLOW_ACTION_SET_TAG) {
-                dev_flow.handle->rix_hrxq = hrxq_idx;
-                flow_drv_rxq_flags_set(dev,
-                    dev_flow.handle);
-            }
+            mtr_policy->is_queue = 1;
             action_flags |= MLX5_FLOW_ACTION_QUEUE;
             break;
         }
@@ -16057,6 +16024,73 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev,
     return NULL;
 }
 
+/**
+ * Destroy the sub policy table with RX queue.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] mtr_policy
+ *   Pointer to meter policy table.
+ */
+static void
+flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
+        struct mlx5_flow_meter_policy *mtr_policy)
+{
+    struct mlx5_priv *priv = dev->data->dev_private;
+    struct mlx5_flow_meter_sub_policy *sub_policy = NULL;
+    uint32_t domain = MLX5_MTR_DOMAIN_INGRESS;
+    uint32_t i, j;
+    uint16_t sub_policy_num, new_policy_num;
+
+    rte_spinlock_lock(&mtr_policy->sl);
+    for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
+        switch (mtr_policy->act_cnt[i].fate_action) {
+        case MLX5_FLOW_FATE_SHARED_RSS:
+            sub_policy_num = (mtr_policy->sub_policy_num >>
+                (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
+                MLX5_MTR_SUB_POLICY_NUM_MASK;
+            new_policy_num = sub_policy_num;
+            for (j = 0; j < sub_policy_num; j++) {
+                sub_policy =
+                    mtr_policy->sub_policys[domain][j];
+                if (sub_policy) {
+                    __flow_dv_destroy_sub_policy_rules(dev,
+                        sub_policy);
+                    if (sub_policy !=
+                        mtr_policy->sub_policys[domain][0]) {
+                        mtr_policy->sub_policys[domain][j] =
+                            NULL;
+                        mlx5_ipool_free
+                        (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+                            sub_policy->idx);
+                        new_policy_num--;
+                    }
+                }
+            }
+            if (new_policy_num != sub_policy_num) {
+                mtr_policy->sub_policy_num &=
+                    ~(MLX5_MTR_SUB_POLICY_NUM_MASK <<
+                    (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain));
+                mtr_policy->sub_policy_num |=
+                    (new_policy_num &
+                    MLX5_MTR_SUB_POLICY_NUM_MASK) <<
+                    (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain);
+            }
+            break;
+        case MLX5_FLOW_FATE_QUEUE:
+            sub_policy = mtr_policy->sub_policys[domain][0];
+            __flow_dv_destroy_sub_policy_rules(dev,
+                sub_policy);
+            break;
+        default:
+            /*Other actions without queue and do nothing*/
+            break;
+        }
+    }
+    rte_spinlock_unlock(&mtr_policy->sl);
+}
+
 /**
  * Validate the batch counter support in root table.
  *
@@ -16668,6 +16702,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
     .create_def_policy = flow_dv_create_def_policy,
     .destroy_def_policy = flow_dv_destroy_def_policy,
     .meter_sub_policy_rss_prepare = flow_dv_meter_sub_policy_rss_prepare,
+    .destroy_sub_policy_with_rxq = flow_dv_destroy_sub_policy_with_rxq,
     .counter_alloc = flow_dv_counter_allocate,
     .counter_free = flow_dv_counter_free,
     .counter_query = flow_dv_counter_query,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index ac2735e6e2..16991748dc 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -748,7 +748,7 @@ mlx5_flow_meter_policy_add(struct rte_eth_dev *dev,
             policy->actions, error);
     if (ret)
         goto policy_add_err;
-    if (!is_rss) {
+    if (!is_rss && !mtr_policy->is_queue) {
         /* Create policy rules in HW. */
         ret = mlx5_flow_create_policy_rules(dev, mtr_policy);
         if (ret)
@@ -1808,6 +1808,40 @@ mlx5_flow_meter_detach(struct mlx5_priv *priv,
 #endif
 }
 
+/**
+ * Flush meter with Rx queue configuration.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ */
+void
+mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev)
+{
+    struct mlx5_priv *priv = dev->data->dev_private;
+    struct mlx5_flow_meter_sub_policy *sub_policy;
+    struct mlx5_flow_meter_policy *mtr_policy;
+    void *entry;
+    uint32_t i, policy_idx;
+
+    if (!priv->mtr_en)
+        return;
+    if (priv->sh->mtrmng->policy_idx_tbl && priv->sh->refcnt == 1) {
+        MLX5_L3T_FOREACH(priv->sh->mtrmng->policy_idx_tbl,
+                    i, entry) {
+            policy_idx = *(uint32_t *)entry;
+            sub_policy = mlx5_ipool_get
+                (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+                policy_idx);
+            if (!sub_policy || !sub_policy->main_policy)
+                continue;
+            mtr_policy = sub_policy->main_policy;
+            if (mtr_policy->is_queue || mtr_policy->is_rss)
+                mlx5_flow_destroy_sub_policy_with_rxq(dev,
+                    mtr_policy);
+        }
+    }
+}
+
 /**
  * Flush meter configuration.
  *
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index eb8c99cd93..879d3171e9 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1180,7 +1180,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
     mlx5_traffic_disable(dev);
     /* All RX queue flags will be cleared in the flush interface. */
     mlx5_flow_list_flush(dev, &priv->flows, true);
-    mlx5_flow_meter_flush(dev, NULL);
+    mlx5_flow_meter_rxq_flush(dev);
     mlx5_rx_intr_vec_disable(dev);
     priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS;
     priv->sh->port[priv->dev_port - 1].devx_ih_port_id = RTE_MAX_ETHPORTS;
-- 
2.27.0