From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
Date: Thu, 27 May 2021 12:34:03 +0300
Message-ID: <20210527093403.1153127-5-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 4/4] net/mlx5: replace flow list with index pool
List-Id: DPDK patches and discussions

The flow list is used to save the created flows and is only needed when the
port closes and all the flows have to be flushed.

This commit creates a new function to get the allocated entries from the
index pool, so that the flows can be flushed directly from the index pool
instead of from the list.
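For readers skimming the diff, below is a rough sketch (not part of the patch)
of how the new flush path walks the index pool instead of the ILIST. It mirrors
the MLX5_IPOOL_FOREACH/mlx5_ipool_get_next() usage in the mlx5_flow.c hunks
further down; the helper name flows_flush_sketch() and the exact iteration
contract of mlx5_ipool_get_next() (return the entry at or after *idx and update
*idx, NULL when the pool is exhausted) are assumptions for illustration only:

    /* Sketch only: flush every flow of the requested type (control vs.
     * application) by walking the per-port index pool. */
    static void
    flows_flush_sketch(struct rte_eth_dev *dev, bool control)
    {
            struct mlx5_priv *priv = dev->data->dev_private;
            struct rte_flow *flow;
            uint32_t idx = 1;

            /* Assumed contract: mlx5_ipool_get_next() returns the entry at
             * or after *idx, or NULL when no allocated entry is left. */
            while ((flow = mlx5_ipool_get_next(priv->flows, &idx)) != NULL) {
                    if (flow->control == control)
                            flow_list_destroy(dev, NULL, idx);
                    idx++;
            }
    }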
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_os.c |  11 +++
 drivers/net/mlx5/mlx5.c          |   3 +-
 drivers/net/mlx5/mlx5.h          |   4 +-
 drivers/net/mlx5/mlx5_flow.c     | 111 +++++++++++--------------------
 drivers/net/mlx5/mlx5_flow.h     |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c  |   5 ++
 drivers/net/mlx5/mlx5_trigger.c  |   8 +--
 7 files changed, 61 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 534a56a555..c56be18d24 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1608,6 +1608,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	mlx5_set_min_inline(spawn, config);
 	/* Store device configuration on private structure. */
 	priv->config = *config;
+	struct mlx5_indexed_pool_config icfg = {
+		.size = sizeof(struct rte_flow),
+		.trunk_size = 4096,
+		.need_lock = 1,
+		.release_mem_en = 0,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.per_core_cache = 1 << 23,
+		.type = "rte_flow_ipool",
+	};
+	priv->flows = mlx5_ipool_create(&icfg);
 	/* Create context for virtual machine VLAN workaround. */
 	priv->vmwa_context = mlx5_vlan_vmwa_init(eth_dev, spawn->ifindex);
 	if (config->dv_flow_en) {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index cf1815cb74..b0f9f1a45e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -323,6 +323,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.grow_shift = 2,
 		.need_lock = 1,
 		.release_mem_en = 1,
+		.per_core_cache = 1 << 23,
 		.malloc = mlx5_malloc,
 		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
@@ -1528,7 +1529,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	 * If all the flows are already flushed in the device stop stage,
 	 * then this will return directly without any action.
 	 */
-	mlx5_flow_list_flush(dev, &priv->flows, true);
+	mlx5_flow_list_flush(dev, false, true);
 	mlx5_action_handle_flush(dev);
 	mlx5_flow_meter_flush(dev, NULL);
 	/* Prevent crashes when queues are still in use. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 32b2817bf2..f6b72cd352 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1344,7 +1344,7 @@ struct mlx5_priv {
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
 	struct mlx5_drop drop_queue; /* Flow drop queues. */
-	uint32_t flows; /* RTE Flow rules. */
+	struct mlx5_indexed_pool *flows; /* RTE Flow rules. */
 	uint32_t ctrl_flows; /* Control flow rules. */
 	rte_spinlock_t flow_list_lock;
 	struct mlx5_obj_ops obj_ops; /* HW objects operations. */
@@ -1596,7 +1596,7 @@ struct rte_flow *mlx5_flow_create(struct rte_eth_dev *dev,
 				  struct rte_flow_error *error);
 int mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
-void mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active);
+void mlx5_flow_list_flush(struct rte_eth_dev *dev, bool control, bool active);
 int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error);
 int mlx5_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		    const struct rte_flow_action *action, void *data,
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index dbeca571b6..8f9eea2c00 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3068,31 +3068,6 @@ mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
 					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 }
 
-/**
- * Release resource related QUEUE/RSS action split.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param flow
- *   Flow to release id's from.
- */
-static void
-flow_mreg_split_qrss_release(struct rte_eth_dev *dev,
-			     struct rte_flow *flow)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	uint32_t handle_idx;
-	struct mlx5_flow_handle *dev_handle;
-
-	SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles,
-		       handle_idx, dev_handle, next)
-		if (dev_handle->split_flow_id &&
-		    !dev_handle->is_meter_flow_id)
-			mlx5_ipool_free(priv->sh->ipool
-					[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
-					dev_handle->split_flow_id);
-}
-
 static int
 flow_null_validate(struct rte_eth_dev *dev __rte_unused,
 		   const struct rte_flow_attr *attr __rte_unused,
@@ -3388,7 +3363,6 @@ flow_drv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
 	const struct mlx5_flow_driver_ops *fops;
 	enum mlx5_flow_drv_type type = flow->drv_type;
 
-	flow_mreg_split_qrss_release(dev, flow);
 	MLX5_ASSERT(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
 	fops = flow_get_drv_ops(type);
 	fops->destroy(dev, flow);
@@ -3971,7 +3945,7 @@ flow_check_hairpin_split(struct rte_eth_dev *dev,
 
 /* Declare flow create/destroy prototype in advance. */
 static uint32_t
-flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
+flow_list_create(struct rte_eth_dev *dev, bool control,
 		 const struct rte_flow_attr *attr,
 		 const struct rte_flow_item items[],
 		 const struct rte_flow_action actions[],
@@ -4100,7 +4074,7 @@ flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key,
 	 * be applied, removed, deleted in ardbitrary order
 	 * by list traversing.
 	 */
-	mcp_res->rix_flow = flow_list_create(dev, NULL, &attr, items,
+	mcp_res->rix_flow = flow_list_create(dev, false, &attr, items,
 					 actions, false, error);
 	if (!mcp_res->rix_flow) {
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], idx);
@@ -6062,7 +6036,7 @@ flow_rss_workspace_adjust(struct mlx5_flow_workspace *wks,
  *   A flow index on success, 0 otherwise and rte_errno is set.
  */
 static uint32_t
-flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
+flow_list_create(struct rte_eth_dev *dev, bool control,
 		 const struct rte_flow_attr *attr,
 		 const struct rte_flow_item items[],
 		 const struct rte_flow_action original_actions[],
@@ -6130,7 +6104,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
 				external, hairpin_flow, error);
 	if (ret < 0)
 		goto error_before_hairpin_split;
-	flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx);
+	flow = mlx5_ipool_zmalloc(priv->flows, &idx);
 	if (!flow) {
 		rte_errno = ENOMEM;
 		goto error_before_hairpin_split;
@@ -6256,12 +6230,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
 		if (ret < 0)
 			goto error;
 	}
-	if (list) {
-		rte_spinlock_lock(&priv->flow_list_lock);
-		ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx,
-			     flow, next);
-		rte_spinlock_unlock(&priv->flow_list_lock);
-	}
+	flow->control = control;
 	flow_rxq_flags_set(dev, flow);
 	rte_free(translated_actions);
 	tunnel = flow_tunnel_from_rule(wks->flows);
@@ -6283,7 +6252,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
 			mlx5_ipool_get
 			(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS],
 			rss_desc->shared_rss))->refcnt, 1, __ATOMIC_RELAXED);
-	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], idx);
+	mlx5_ipool_free(priv->flows, idx);
 	rte_errno = ret; /* Restore rte_errno. */
 	ret = rte_errno;
 	rte_errno = ret;
@@ -6335,10 +6304,9 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_error error;
 
-	return (void *)(uintptr_t)flow_list_create(dev, &priv->ctrl_flows,
+	return (void *)(uintptr_t)flow_list_create(dev, true,
 						   &attr, &pattern,
 						   actions, false, &error);
 }
@@ -6390,8 +6358,6 @@ mlx5_flow_create(struct rte_eth_dev *dev,
 		 const struct rte_flow_action actions[],
 		 struct rte_flow_error *error)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-
 	/*
 	 * If the device is not started yet, it is not allowed to created a
 	 * flow from application. PMD default flows and traffic control flows
@@ -6407,7 +6373,7 @@ mlx5_flow_create(struct rte_eth_dev *dev,
 		return NULL;
 	}
 
-	return (void *)(uintptr_t)flow_list_create(dev, &priv->flows,
+	return (void *)(uintptr_t)flow_list_create(dev, false,
 				  attr, items, actions, true, error);
 }
 
@@ -6429,8 +6395,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list,
 		  uint32_t flow_idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool
-					       [MLX5_IPOOL_RTE_FLOW], flow_idx);
+	struct rte_flow *flow = mlx5_ipool_get(priv->flows, flow_idx);
 
 	if (!flow)
 		return;
@@ -6443,7 +6408,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list,
 	flow_drv_destroy(dev, flow);
 	if (list) {
 		rte_spinlock_lock(&priv->flow_list_lock);
-		ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list,
+		ILIST_REMOVE(priv->flows, list,
 			     flow_idx, flow, next);
 		rte_spinlock_unlock(&priv->flow_list_lock);
 	}
@@ -6456,7 +6421,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list,
 		mlx5_flow_tunnel_free(dev, tunnel);
 	}
 	flow_mreg_del_copy_action(dev, flow);
-	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx);
+	mlx5_ipool_free(priv->flows, flow_idx);
 }
 
 /**
@@ -6464,19 +6429,23 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list,
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param list
- *   Pointer to the Indexed flow list.
+ * @param control
+ *   Flow type to be flushed.
  * @param active
  *   If flushing is called avtively.
  */
 void
-mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active)
+mlx5_flow_list_flush(struct rte_eth_dev *dev, bool control, bool active)
 {
-	uint32_t num_flushed = 0;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t num_flushed = 0, fidx = 1;
+	struct rte_flow *flow;
 
-	while (*list) {
-		flow_list_destroy(dev, list, *list);
-		num_flushed++;
+	MLX5_IPOOL_FOREACH(priv->flows, fidx, flow) {
+		if (flow->control == control) {
+			flow_list_destroy(dev, NULL, fidx);
+			num_flushed++;
+		}
 	}
 	if (active) {
 		DRV_LOG(INFO, "port %u: %u flows flushed before stopping",
@@ -6647,18 +6616,20 @@ mlx5_flow_pop_thread_workspace(void)
  * @return the number of flows not released.
 */
 int
-mlx5_flow_verify(struct rte_eth_dev *dev)
+mlx5_flow_verify(struct rte_eth_dev *dev __rte_unused)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow *flow;
-	uint32_t idx;
+	uint32_t idx = 0;
 	int ret = 0;
 
-	ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], priv->flows, idx,
-		      flow, next) {
+	flow = mlx5_ipool_get_next(priv->flows, &idx);
+	while (flow) {
 		DRV_LOG(DEBUG, "port %u flow %p still referenced",
 			dev->data->port_id, (void *)flow);
-		++ret;
+		idx++;
+		ret++;
+		flow = mlx5_ipool_get_next(priv->flows, &idx);
 	}
 	return ret;
 }
@@ -6678,7 +6649,6 @@ int
 mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev,
 			    uint32_t queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr attr = {
 		.egress = 1,
 		.priority = 0,
@@ -6711,7 +6681,7 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev,
 	actions[0].type = RTE_FLOW_ACTION_TYPE_JUMP;
 	actions[0].conf = &jump;
 	actions[1].type = RTE_FLOW_ACTION_TYPE_END;
-	flow_idx = flow_list_create(dev, &priv->ctrl_flows,
+	flow_idx = flow_list_create(dev, true,
 				&attr, items, actions, false, &error);
 	if (!flow_idx) {
 		DRV_LOG(DEBUG,
@@ -6801,7 +6771,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
-	flow_idx = flow_list_create(dev, &priv->ctrl_flows,
+	flow_idx = flow_list_create(dev, true,
 				&attr, items, actions, false, &error);
 	if (!flow_idx)
 		return -rte_errno;
@@ -6843,7 +6813,6 @@ mlx5_ctrl_flow(struct rte_eth_dev *dev,
 int
 mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	/*
 	 * The LACP matching is done by only using ether type since using
 	 * a multicast dst mac causes kernel to give low priority to this flow.
@@ -6877,7 +6846,7 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 		},
 	};
 	struct rte_flow_error error;
-	uint32_t flow_idx = flow_list_create(dev, &priv->ctrl_flows,
+	uint32_t flow_idx = flow_list_create(dev, true,
 				&attr, items, actions, false, &error);
 
 	if (!flow_idx)
@@ -6896,9 +6865,7 @@ mlx5_flow_destroy(struct rte_eth_dev *dev,
 		  struct rte_flow *flow,
 		  struct rte_flow_error *error __rte_unused)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	flow_list_destroy(dev, &priv->flows, (uintptr_t)(void *)flow);
+	flow_list_destroy(dev, NULL, (uintptr_t)(void *)flow);
 	return 0;
 }
 
@@ -6912,9 +6879,7 @@ int
 mlx5_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error __rte_unused)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	mlx5_flow_list_flush(dev, &priv->flows, false);
+	mlx5_flow_list_flush(dev, false, false);
 	return 0;
 }
 
@@ -6965,8 +6930,7 @@ flow_drv_query(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool
-					       [MLX5_IPOOL_RTE_FLOW],
+	struct rte_flow *flow = mlx5_ipool_get(priv->flows,
 					       flow_idx);
 	enum mlx5_flow_drv_type ftype;
 
@@ -7832,10 +7796,9 @@ mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev)
 		if (!config->dv_flow_en)
 			break;
 		/* Create internal flow, validation skips copy action. */
-		flow_idx = flow_list_create(dev, NULL, &attr, items,
+		flow_idx = flow_list_create(dev, false, &attr, items,
 					    actions, false, &error);
-		flow = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW],
-				      flow_idx);
+		flow = mlx5_ipool_get(priv->flows, flow_idx);
 		if (!flow)
 			continue;
 		config->flow_mreg_c[n++] = idx;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2f2aa962f9..cb0b97a2d4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1000,6 +1000,7 @@ struct rte_flow {
 	ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */
 	uint32_t dev_handles;
 	/**< Device flow handles that are part of the flow. */
+	uint32_t control:1;
 	uint32_t drv_type:2; /**< Driver type. */
 	uint32_t tunnel:1;
 	uint32_t meter:24; /**< Holds flow meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c50649a107..d039794c9a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13830,6 +13830,11 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
 		    dev_handle->split_flow_id)
 			mlx5_ipool_free(fm->flow_ipool,
 					dev_handle->split_flow_id);
+		else if (dev_handle->split_flow_id &&
+			 !dev_handle->is_meter_flow_id)
+			mlx5_ipool_free(priv->sh->ipool
+					[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
+					dev_handle->split_flow_id);
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW],
 				tmp_idx);
 	}
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index ae7fcca229..48cf780786 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1187,7 +1187,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	/* Control flows for default traffic can be removed firstly. */
 	mlx5_traffic_disable(dev);
 	/* All RX queue flags will be cleared in the flush interface. */
-	mlx5_flow_list_flush(dev, &priv->flows, true);
+	mlx5_flow_list_flush(dev, false, true);
 	mlx5_flow_meter_rxq_flush(dev);
 	mlx5_rx_intr_vec_disable(dev);
 	priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS;
@@ -1370,7 +1370,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	return 0;
 error:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_flow_list_flush(dev, &priv->ctrl_flows, false);
+	mlx5_flow_list_flush(dev, true, false);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -1385,9 +1385,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 void
 mlx5_traffic_disable(struct rte_eth_dev *dev)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	mlx5_flow_list_flush(dev, &priv->ctrl_flows, false);
+	mlx5_flow_list_flush(dev, true, false);
 }
 
 /**
-- 
2.25.1