From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani <mkashani@nvidia.com>
To: dev@dpdk.org
CC: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v5 11/11] net/mlx5: initial design changes
Date: Thu, 6 Jun 2024 15:32:56 +0300
Message-ID: <20240606123256.177947-11-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240606123256.177947-1-mkashani@nvidia.com>
References: <20240606102317.172553-1-mkashani@nvidia.com> <20240606123256.177947-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Rename flow_drv_list_create()/flow_drv_list_destroy() to
mlx5_flow_list_create()/mlx5_flow_list_destroy() and remove the
wrappers that merely inlined the resource release.

Check the number of queues in the HW configure function only in
template mode; non-template mode configures a single queue internally.

Use the user-supplied priority as the matcher priority instead of
recalculating it.

Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.c    |  70 +++++++---------
 drivers/net/mlx5/mlx5_flow_hw.c | 100 +++++++++++++++-----------------
 2 files changed, 68 insertions(+), 102 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f44200db57..7bcbbc74b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4983,19 +4983,6 @@ flow_check_hairpin_split(struct rte_eth_dev *dev,
 	return 0;
 }
 
-/* Declare flow create/destroy prototype in advance. */
-
-static uintptr_t
-flow_drv_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-		     const struct rte_flow_attr *attr,
-		     const struct rte_flow_item items[],
-		     const struct rte_flow_action actions[],
-		     bool external, struct rte_flow_error *error);
-
-static void
-flow_drv_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-		      uintptr_t flow_idx);
-
 int
 flow_dv_mreg_match_cb(void *tool_ctx __rte_unused,
 		      struct mlx5_list_entry *entry, void *cb_ctx)
@@ -5114,7 +5101,7 @@ flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx)
 	 * be applied, removed, deleted in arbitrary order
 	 * by list traversing.
 	 */
-	mcp_res->rix_flow = flow_drv_list_create(dev, MLX5_FLOW_TYPE_MCP,
+	mcp_res->rix_flow = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_MCP,
 					&attr, items, actions, false, error);
 	if (!mcp_res->rix_flow) {
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], idx);
@@ -5208,7 +5195,7 @@ flow_dv_mreg_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry)
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	MLX5_ASSERT(mcp_res->rix_flow);
-	flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_MCP, mcp_res->rix_flow);
+	mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_MCP, mcp_res->rix_flow);
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx);
 }
@@ -7595,7 +7582,7 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev)
 	};
 	struct rte_flow_error error;
 
-	return (void *)(uintptr_t)flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL,
+	return (void *)(uintptr_t)mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL,
 						&attr, &pattern,
 						actions, false, &error);
 }
@@ -7663,7 +7650,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sq_num)
 	 * Creates group 0, highest priority jump flow.
 	 * Matches txq to bypass kernel packets.
 	 */
-	if (flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL, &attr, pattern, actions,
+	if (mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL, &attr, pattern, actions,
 				 false, &error) == 0)
 		return 0;
 	/* Create group 1, lowest priority redirect flow for txq. */
 	attr.group = 1;
 	actions[0].conf = &port;
 	actions[0].type = RTE_FLOW_ACTION_TYPE_PORT_ID;
-	return flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL, &attr, pattern,
+	return mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL, &attr, pattern,
 				    actions, false, &error);
 }
@@ -7826,7 +7813,7 @@ mlx5_flow_cache_flow_toggle(struct rte_eth_dev *dev, bool orig_prio)
 					     flow_info->flow_idx_low_prio);
 			if (high && low) {
 				RTE_SWAP(*low, *high);
-				flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
+				mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
 						      flow_info->flow_idx_low_prio);
 				flow_info->flow_idx_high_prio = 0;
 			}
@@ -7840,7 +7827,7 @@ mlx5_flow_cache_flow_toggle(struct rte_eth_dev *dev, bool orig_prio)
 	while (flow_info) {
 		if (flow_info->orig_prio != flow_info->attr.priority) {
 			if (flow_info->flow_idx_high_prio)
-				flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
+				mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
 						      flow_info->flow_idx_high_prio);
 			else
 				break;
@@ -7995,12 +7982,13 @@ mlx5_flow_create(struct rte_eth_dev *dev,
 			RTE_PMD_MLX5_FLOW_ENGINE_FLAG_STANDBY_DUP_INGRESS)))
 			new_attr->priority += 1;
 	}
-	flow_idx = flow_drv_list_create(dev, MLX5_FLOW_TYPE_GEN, attr, items, actions, true, error);
+	flow_idx = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_GEN, attr, items, actions,
+					 true, error);
 	if (!flow_idx)
 		return NULL;
 	if (unlikely(mlx5_need_cache_flow(priv, attr))) {
 		if (mlx5_flow_cache_flow_info(dev, attr, prio, items, actions, flow_idx)) {
-			flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx);
+			mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx);
 			flow_idx = 0;
 		}
 	}
@@ -8013,17 +8001,6 @@ mlx5_flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		      const struct rte_flow_item items[],
 		      const struct rte_flow_action actions[],
 		      bool external, struct rte_flow_error *error)
-{
-	return flow_drv_list_create(dev, type, attr, items, actions, external,
-				    error);
-}
-
-uintptr_t
-flow_drv_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-		     const struct rte_flow_attr *attr,
-		     const struct rte_flow_item items[],
-		     const struct rte_flow_action actions[],
-		     bool external, struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
 	enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, attr);
@@ -8072,8 +8049,8 @@ flow_legacy_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	mlx5_ipool_free(priv->flows[type], flow_idx);
 }
 
-static void
-flow_drv_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+void
+mlx5_flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		       uintptr_t flow_idx)
 {
 	const struct mlx5_flow_driver_ops *fops;
@@ -8084,13 +8061,6 @@ flow_drv_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	fops->list_destroy(dev, type, flow_idx);
 }
 
-void
-mlx5_flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-		       uintptr_t flow_idx)
-{
-	flow_drv_list_destroy(dev, type, flow_idx);
-}
-
 /**
  * Destroy all flows.
  *
@@ -8119,9 +8089,9 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 #endif
 	MLX5_IPOOL_FOREACH(priv->flows[type], fidx, flow) {
 		if (priv->sh->config.dv_flow_en == 2) {
-			flow_drv_list_destroy(dev, type, (uintptr_t)flow);
+			mlx5_flow_list_destroy(dev, type, (uintptr_t)flow);
 		} else {
-			flow_drv_list_destroy(dev, type, fidx);
+			mlx5_flow_list_destroy(dev, type, fidx);
 		}
 		if (unlikely(mlx5_need_cache_flow(priv, NULL) && type == MLX5_FLOW_TYPE_GEN)) {
 			flow_info = LIST_FIRST(&mode_info->hot_upgrade);
@@ -8394,7 +8364,7 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev,
 	actions[0].type = RTE_FLOW_ACTION_TYPE_JUMP;
 	actions[0].conf = &jump;
 	actions[1].type = RTE_FLOW_ACTION_TYPE_END;
-	flow_idx = flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL,
+	flow_idx = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL,
 					&attr, items, actions, false, &error);
 	if (!flow_idx) {
 		DRV_LOG(DEBUG,
@@ -8484,7 +8454,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
-	flow_idx = flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL,
+	flow_idx = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL,
 					&attr, items, actions, false, &error);
 	if (!flow_idx)
 		return -rte_errno;
@@ -8559,7 +8529,7 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 		},
 	};
 	struct rte_flow_error error;
-	uint32_t flow_idx = flow_drv_list_create(dev, MLX5_FLOW_TYPE_CTL,
+	uint32_t flow_idx = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_CTL,
 					&attr, items,
 					actions, false,
 					&error);
@@ -8583,7 +8553,7 @@ mlx5_flow_destroy(struct rte_eth_dev *dev,
 	struct rte_pmd_mlx5_flow_engine_mode_info *mode_info = &priv->mode_info;
 	struct mlx5_dv_flow_info *flow_info;
 
-	flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
+	mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN,
 			      (uintptr_t)(void *)flow);
 	if (unlikely(mlx5_need_cache_flow(priv, NULL))) {
 		flow_info = LIST_FIRST(&mode_info->hot_upgrade);
@@ -9896,14 +9866,14 @@ mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev)
 		if (!priv->sh->config.dv_flow_en)
 			break;
 		/* Create internal flow, validation skips copy action. */
-		flow_idx = flow_drv_list_create(dev, MLX5_FLOW_TYPE_GEN, &attr,
+		flow_idx = mlx5_flow_list_create(dev, MLX5_FLOW_TYPE_GEN, &attr,
 						items, actions, false, &error);
 		flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN],
 				      flow_idx);
 		if (!flow)
 			continue;
 		priv->sh->flow_mreg_c[n++] = idx;
-		flow_drv_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx);
+		mlx5_flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx);
 	}
 	for (; n < MLX5_MREG_C_NUM; ++n)
 		priv->sh->flow_mreg_c[n] = REG_NON;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b4b0de417a..4461e9d55a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2204,12 +2204,12 @@ mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
  */
 static int
 __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
-				     const struct mlx5_flow_template_table_cfg *cfg,
-				     struct mlx5_hw_actions *acts,
-				     struct rte_flow_actions_template *at,
-				     struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
-				     bool nt_mode __rte_unused,
-				     struct rte_flow_error *error)
+				const struct mlx5_flow_template_table_cfg *cfg,
+				struct mlx5_hw_actions *acts,
+				struct rte_flow_actions_template *at,
+				struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
+				bool nt_mode,
+				struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
@@ -11199,7 +11199,7 @@ static int
 flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
 			    uint16_t nb_queue,
 			    const struct rte_flow_queue_attr *queue_attr[],
-			    struct rte_flow_error *error)
+			    bool nt_mode, struct rte_flow_error *error)
 {
 	uint32_t size;
 	unsigned int i;
@@ -11208,7 +11208,7 @@ flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 					  "Port attributes must be non-NULL");
-	if (nb_queue == 0)
+	if (nb_queue == 0 && !nt_mode)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 					  "At least one flow queue is required");
@@ -11285,7 +11285,7 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto err;
 	}
-	if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
+	if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, nt_mode, error))
 		return -rte_errno;
 	/*
 	 * Calling rte_flow_configure() again is allowed if
@@ -11303,7 +11303,7 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 		}
 	}
 	/* If previous configuration was not default non template mode config. */
-	if (!(priv->hw_attr->nt_mode)) {
+	if (!priv->hw_attr->nt_mode) {
 		if (flow_hw_compare_config(priv->hw_attr, port_attr, nb_queue, queue_attr))
 			return 0;
 		else
@@ -12768,6 +12768,7 @@ flow_hw_get_aged_flows(struct rte_eth_dev *dev, void **contexts,
 /**
  * Initialization function for non template API which calls
  * flow_hw_configure with default values.
+ * Zero queues are requested because one queue is configured by default for internal use.
  *
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
@@ -12777,8 +12778,6 @@ flow_hw_get_aged_flows(struct rte_eth_dev *dev, void **contexts,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
- /* Configure non queues cause 1 queue is configured by default for inner usage. */
-
 int
 flow_hw_init(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
@@ -12806,10 +12805,10 @@ flow_hw_init(struct rte_eth_dev *dev,
 }
 
 static int flow_hw_prepare(struct rte_eth_dev *dev,
-			   const struct rte_flow_action actions[] __rte_unused,
-			   enum mlx5_flow_type type,
-			   struct rte_flow_hw **flow,
-			   struct rte_flow_error *error)
+			const struct rte_flow_action actions[] __rte_unused,
+			enum mlx5_flow_type type,
+			struct rte_flow_hw **flow,
+			struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	uint32_t idx = 0;
@@ -12932,14 +12931,14 @@ flow_hw_encap_decap_resource_register
 
 static int
 flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
-			       const struct rte_flow_attr *attr,
-			       const struct rte_flow_action actions[],
-			       struct rte_flow_hw *flow,
-			       struct mlx5_flow_hw_action_params *ap,
-			       struct mlx5_hw_actions *hw_acts,
-			       uint64_t item_flags,
-			       bool external,
-			       struct rte_flow_error *error)
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_action actions[],
+			struct rte_flow_hw *flow,
+			struct mlx5_flow_hw_action_params *ap,
+			struct mlx5_hw_actions *hw_acts,
+			uint64_t item_flags,
+			bool external,
+			struct rte_flow_error *error)
 {
 	int ret = 0;
 	uint32_t src_group = 0;
@@ -13037,12 +13036,12 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
 }
 
 static int flow_hw_register_matcher(struct rte_eth_dev *dev,
-				    const struct rte_flow_attr *attr,
-				    const struct rte_flow_item items[],
-				    bool external,
-				    struct rte_flow_hw *flow,
-				    struct mlx5_flow_dv_matcher *matcher,
-				    struct rte_flow_error *error)
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item items[],
+			bool external,
+			struct rte_flow_hw *flow,
+			struct mlx5_flow_dv_matcher *matcher,
+			struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_error sub_error = {
@@ -13073,10 +13072,7 @@ static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 	matcher->crc = rte_raw_cksum((const void *)matcher->mask.buf,
 				     matcher->mask.size);
 
-	matcher->priority = mlx5_get_matcher_priority(dev, attr,
-						      matcher->priority,
-						      external);
-
+	matcher->priority = attr->priority;
 	ret = __translate_group(dev, attr, external, attr->group, &group, error);
 	if (ret)
 		return ret;
@@ -13184,10 +13180,10 @@ static int flow_hw_ensure_action_pools_allocated(struct rte_eth_dev *dev,
 
 /* TODO: remove dev if not used */
 static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
-			 const struct rte_flow_item items[],
-			 struct mlx5dr_rule_action rule_actions[],
-			 struct rte_flow_hw *flow,
-			 struct rte_flow_error *error)
+		  const struct rte_flow_item items[],
+		  struct mlx5dr_rule_action rule_actions[],
+		  struct rte_flow_hw *flow,
+		  struct rte_flow_error *error)
 {
 	struct mlx5dr_bwc_rule *rule = NULL;
 
@@ -13228,13 +13224,13 @@ static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
  *   0 on success, negative errno value otherwise and rte_errno set.
  */
 static int flow_hw_create_flow(struct rte_eth_dev *dev,
-			       enum mlx5_flow_type type,
-			       const struct rte_flow_attr *attr,
-			       const struct rte_flow_item items[],
-			       const struct rte_flow_action actions[],
-			       bool external,
-			       struct rte_flow_hw **flow,
-			       struct rte_flow_error *error)
+			enum mlx5_flow_type type,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item items[],
+			const struct rte_flow_action actions[],
+			bool external,
+			struct rte_flow_hw **flow,
+			struct rte_flow_error *error)
 {
 	int ret;
 	struct mlx5_hw_actions hw_act;
@@ -13399,7 +13395,7 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
  *   Address of flow to destroy.
  */
 static void flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-				 uintptr_t flow_addr)
+			uintptr_t flow_addr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	/* Get flow via idx */
@@ -13435,12 +13431,12 @@ static void flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type ty
  *   A flow addr on success, 0 otherwise and rte_errno is set.
  */
 static uintptr_t flow_hw_list_create(struct rte_eth_dev *dev,
-				     enum mlx5_flow_type type,
-				     const struct rte_flow_attr *attr,
-				     const struct rte_flow_item items[],
-				     const struct rte_flow_action actions[],
-				     bool external,
-				     struct rte_flow_error *error)
+			enum mlx5_flow_type type,
+			const struct rte_flow_attr *attr,
+			const struct rte_flow_item items[],
+			const struct rte_flow_action actions[],
+			bool external,
+			struct rte_flow_error *error)
 {
 	int ret;
 	struct rte_flow_hw *flow = NULL;
-- 
2.21.0
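
To illustrate the queue-count change in flow_hw_validate_attributes(): zero
flow queues is rejected only in template mode, because non-template mode
configures one queue internally. The following standalone sketch uses
simplified types, and validate_queue_count() is an illustrative stand-in,
not a PMD symbol:

	#include <stdbool.h>
	#include <stdint.h>
	#include <errno.h>

	/* Stand-in for the nb_queue check in flow_hw_validate_attributes().
	 * Zero queues is only an error in template mode; non-template mode
	 * tolerates it because one queue is set up by default for internal
	 * use.
	 */
	static int
	validate_queue_count(uint16_t nb_queue, bool nt_mode)
	{
		if (nb_queue == 0 && !nt_mode)
			return -EINVAL;
		return 0;
	}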
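
The matcher priority change is similarly small: instead of deriving the
priority through mlx5_get_matcher_priority(), the user-supplied attribute
priority is taken as-is. A minimal sketch with simplified stand-in structs
(not the PMD's types):

	#include <stdint.h>

	/* Simplified stand-ins for mlx5_flow_dv_matcher and rte_flow_attr. */
	struct matcher { uint32_t priority; };
	struct flow_attr { uint32_t priority; };

	static void
	set_matcher_priority(struct matcher *m, const struct flow_attr *attr)
	{
		/* Previously recomputed via mlx5_get_matcher_priority();
		 * now the user priority is used directly.
		 */
		m->priority = attr->priority;
	}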