From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani
To: dev@dpdk.org
CC: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v4 05/11] net/mlx5: add ASO actions support to non-template mode
Date: Thu, 6 Jun 2024 13:23:10 +0300
Message-ID: <20240606102317.172553-6-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240606102317.172553-1-mkashani@nvidia.com>
References: <20240603104850.9935-1-mkashani@nvidia.com> <20240606102317.172553-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="y"
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

This patch adds support for the counter, connection tracking, meter
and age actions in non-template mode.

For CT, counter and meter: if no previous allocation was handled by
the HW configure routine, half of the maximum supported number of
objects is allocated.

For the AGE action, if no counters were allocated, allocate half of
the maximum number of counters, and then allocate the same number of
AGE objects.

Also, the shared-host handling was extracted into the configure
function, and all ASO actions were aligned to have an init function
for future code improvements.

This patch does not affect the SW Steering flow engine.

Signed-off-by: Maayan Kashani
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |   7 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 187 +++++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_hws_cnt.c |  47 ++++----
 drivers/net/mlx5/mlx5_hws_cnt.h |  10 +-
 4 files changed, 192 insertions(+), 59 deletions(-)
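Illustration of the new behavior (reviewer note, not part of the patch):
in non-template mode an application can now call rte_flow_create() with
an ASO action such as COUNT without a prior template-API configure step;
the PMD allocates the backing pool on first use, sized from the
MLX5_*_MAX defines introduced below. A minimal sketch against the public
rte_flow API; the helper name create_counted_flow() and the assumption
of an already-started port are illustrative only:

#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical helper: install a non-template flow rule carrying a
 * COUNT action on an already-started port. With this patch, the mlx5
 * PMD allocates the counter pool on demand instead of requiring a
 * prior rte_flow_configure() call.
 */
static struct rte_flow *
create_counted_flow(uint16_t port_id, struct rte_flow_error *err)
{
	static const struct rte_flow_attr attr = { .ingress = 1 };
	static const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_action_count cnt_conf = { .id = 0 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &cnt_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}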
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0d30e7ab36..26ce485ce8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -792,6 +792,13 @@ struct mlx5_dev_shared_port {
 /* Only yellow color valid. */
 #define MLX5_MTR_POLICY_MODE_OY 3
 
+/* Max number of meters allocated in non-template mode. */
+#define MLX5_MTR_NT_MAX (1 << 23) /* TODO: verify number. */
+/* Max number of connection tracking objects allocated in non-template mode. */
+#define MLX5_CT_NT_MAX (1 << 23) /* TODO: verify number. */
+/* Max number of counters allocated in non-template mode. */
+#define MLX5_CNT_MAX (1 << 23) /* TODO: verify number. */
+
 enum mlx5_meter_domain {
 	MLX5_MTR_DOMAIN_INGRESS,
 	MLX5_MTR_DOMAIN_EGRESS,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index e993e70494..77d40bad8a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -287,6 +287,11 @@ static void
 flow_hw_construct_quota(struct mlx5_priv *priv,
 			struct mlx5dr_rule_action *rule_act,
 			uint32_t qid);
 
+static int
+mlx5_flow_ct_init(struct rte_eth_dev *dev,
+		  uint32_t nb_conn_tracks,
+		  uint16_t nb_queue);
+
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -1673,7 +1678,7 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (meter_mark->profile == NULL)
 		return NULL;
-	aso_mtr = mlx5_ipool_malloc(priv->hws_mpool->idx_pool, &mtr_id);
+	aso_mtr = mlx5_ipool_malloc(pool->idx_pool, &mtr_id);
 	if (!aso_mtr)
 		return NULL;
 	/* Fill the flow meter parameters. */
@@ -2483,8 +2488,10 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
 		break;
 	case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
-		flow_hw_translate_group(dev, cfg, attr->group,
+		ret = flow_hw_translate_group(dev, cfg, attr->group,
 					&target_grp, error);
+		if (ret)
+			return ret;
 		if (target_grp == 0) {
 			__flow_hw_action_template_destroy(dev, acts);
 			return rte_flow_error_set(error, ENOTSUP,
@@ -2531,8 +2538,10 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			goto err;
 		break;
 	case RTE_FLOW_ACTION_TYPE_AGE:
-		flow_hw_translate_group(dev, cfg, attr->group,
+		ret = flow_hw_translate_group(dev, cfg, attr->group,
 					&target_grp, error);
+		if (ret)
+			return ret;
 		if (target_grp == 0) {
 			__flow_hw_action_template_destroy(dev, acts);
 			return rte_flow_error_set(error, ENOTSUP,
@@ -2547,8 +2556,10 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			goto err;
 		break;
 	case RTE_FLOW_ACTION_TYPE_COUNT:
-		flow_hw_translate_group(dev, cfg, attr->group,
+		ret = flow_hw_translate_group(dev, cfg, attr->group,
 					&target_grp, error);
+		if (ret)
+			return ret;
 		if (target_grp == 0) {
 			__flow_hw_action_template_destroy(dev, acts);
 			return rte_flow_error_set(error, ENOTSUP,
@@ -9705,12 +9716,12 @@ flow_hw_ct_pool_destroy(struct rte_eth_dev *dev,
 
 static struct mlx5_aso_ct_pool *
 flow_hw_ct_pool_create(struct rte_eth_dev *dev,
-		       const struct rte_flow_port_attr *port_attr)
+		       uint32_t nb_conn_tracks)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool;
 	struct mlx5_devx_obj *obj;
-	uint32_t nb_cts = rte_align32pow2(port_attr->nb_conn_tracks);
+	uint32_t nb_cts = rte_align32pow2(nb_conn_tracks);
 	uint32_t log_obj_size = rte_log2_u32(nb_cts);
 	struct mlx5_indexed_pool_config cfg = {
 		.size = sizeof(struct mlx5_aso_ct_action),
@@ -9762,7 +9773,7 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev,
 		pool->devx_obj = host_priv->hws_ctpool->devx_obj;
 		pool->cts = host_priv->hws_ctpool->cts;
 		MLX5_ASSERT(pool->cts);
-		MLX5_ASSERT(!port_attr->nb_conn_tracks);
+		MLX5_ASSERT(!nb_conn_tracks);
 	}
 	reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL);
 	flags |= MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
@@ -9782,6 +9793,46 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev,
 	return NULL;
 }
 
+static int
+mlx5_flow_ct_init(struct rte_eth_dev *dev,
+		  uint32_t nb_conn_tracks,
+		  uint16_t nb_queue)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t mem_size;
+	int ret = -ENOMEM;
+
+	if (!priv->shared_host) {
+		mem_size = sizeof(struct mlx5_aso_sq) * nb_queue +
+			   sizeof(*priv->ct_mng);
+		priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
+					   RTE_CACHE_LINE_SIZE,
+					   SOCKET_ID_ANY);
+		if (!priv->ct_mng)
+			goto err;
+		ret = mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng,
+					     nb_queue);
+		if (ret)
+			goto err;
+	}
+	priv->hws_ctpool = flow_hw_ct_pool_create(dev, nb_conn_tracks);
+	if (!priv->hws_ctpool)
+		goto err;
+	priv->sh->ct_aso_en = 1;
+	return 0;
+
+err:
+	if (priv->hws_ctpool) {
+		flow_hw_ct_pool_destroy(dev, priv->hws_ctpool);
+		priv->hws_ctpool = NULL;
+	}
+	if (priv->ct_mng) {
+		flow_hw_ct_mng_destroy(dev, priv->ct_mng);
+		priv->ct_mng = NULL;
+	}
+	return ret;
+}
+
 static void
 flow_hw_destroy_vlan(struct rte_eth_dev *dev)
 {
@@ -10429,6 +10480,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	bool is_proxy = !!(priv->sh->config.dv_esw_en && priv->master);
 	int ret = 0;
 	uint32_t action_flags;
+	bool strict_queue = false;
 
 	if (mlx5dr_rule_get_handle_size() != MLX5_DR_RULE_SIZE) {
 		rte_errno = EINVAL;
@@ -10670,25 +10722,13 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	if (!priv->shared_host)
 		flow_hw_create_send_to_kernel_actions(priv);
 	if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) {
-		if (!priv->shared_host) {
-			mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
-				sizeof(*priv->ct_mng);
-			priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
-						   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-			if (!priv->ct_mng)
-				goto err;
-			if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
-				goto err;
-		}
-		priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr);
-		if (!priv->hws_ctpool)
+		if (mlx5_flow_ct_init(dev, port_attr->nb_conn_tracks, nb_q_updated))
 			goto err;
-		priv->sh->ct_aso_en = 1;
 	}
 	if (port_attr->nb_counters || (host_priv && host_priv->hws_cpool)) {
-		priv->hws_cpool = mlx5_hws_cnt_pool_create(dev, port_attr,
-							   nb_queue);
-		if (priv->hws_cpool == NULL)
+		if (mlx5_hws_cnt_pool_create(dev, port_attr->nb_counters,
+					     nb_queue,
+					     (host_priv ? host_priv->hws_cpool : NULL)))
 			goto err;
 	}
 	if (port_attr->nb_aging_objects) {
@@ -10705,12 +10745,17 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			rte_errno = EINVAL;
 			goto err;
 		}
-		ret = mlx5_hws_age_pool_init(dev, port_attr, nb_queue);
-		if (ret < 0) {
-			rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					   NULL, "Failed to init age pool.");
+		if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
+			DRV_LOG(ERR, "Aging is not supported "
+				"in cross vHCA sharing mode");
+			ret = -ENOTSUP;
 			goto err;
 		}
+		strict_queue = !!(port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE);
+		ret = mlx5_hws_age_pool_init(dev, port_attr->nb_aging_objects,
+					     nb_queue, strict_queue);
+		if (ret < 0)
+			goto err;
 	}
 	ret = flow_hw_create_vlan(dev);
 	if (ret) {
@@ -12322,7 +12367,78 @@ static int flow_hw_register_matcher(struct rte_eth_dev *dev,
 	}
 }
 
-static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused, /* TODO: remove if not used */
+static int flow_hw_allocate_actions(struct rte_eth_dev *dev,
+				    const struct rte_flow_action actions[],
+				    struct rte_flow_error *error)
+{
+	bool actions_end = false;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret;
+
+	for (; !actions_end; actions++) {
+		switch ((int)actions->type) {
+		case RTE_FLOW_ACTION_TYPE_AGE:
+			/* If no age objects were previously allocated. */
+			if (!priv->hws_age_req) {
+				/* If no counters were previously allocated. */
+				if (!priv->hws_cpool) {
+					ret = mlx5_hws_cnt_pool_create(dev, MLX5_CNT_MAX,
+								       priv->nb_queue,
+								       NULL);
+					if (ret)
+						goto err;
+				}
+				if (priv->hws_cpool) {
+					/* Allocate the same number of AGE objects as counters. */
+					ret = mlx5_hws_age_pool_init(dev,
+								     priv->hws_cpool->cfg.request_num,
+								     priv->nb_queue,
+								     false);
+					if (ret)
+						goto err;
+				}
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			/* If no counters were previously allocated. */
+			if (!priv->hws_cpool) {
+				ret = mlx5_hws_cnt_pool_create(dev, MLX5_CNT_MAX,
+							       priv->nb_queue, NULL);
+				if (ret)
+					goto err;
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+			/* If no CT objects were previously allocated. */
+			if (!priv->hws_ctpool) {
+				ret = mlx5_flow_ct_init(dev, MLX5_CT_NT_MAX, priv->nb_queue);
+				if (ret)
+					goto err;
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_METER_MARK:
+			/* If no meters were previously allocated. */
+			if (!priv->hws_mpool) {
+				ret = mlx5_flow_meter_init(dev, MLX5_MTR_NT_MAX, 0, 0,
+							   priv->nb_queue);
+				if (ret)
+					goto err;
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_END:
+			actions_end = true;
+			break;
+		default:
+			break;
+		}
+	}
+	return 0;
+err:
+	return rte_flow_error_set(error, ret,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "fail to allocate actions");
+}
+
+/* TODO: remove dev if not used */
+static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
 			 const struct rte_flow_item items[],
 			 struct mlx5dr_rule_action rule_actions[],
 			 struct rte_flow_hw *flow,
@@ -12432,11 +12548,26 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev,
 	if (ret)
 		goto error;
 
+	/*
+	 * ASO allocation: iterate over the actions list to allocate any
+	 * missing resources. In the future, when a validate function is
+	 * added to HWS, the output actions bit mask can be used instead
+	 * of looping over the actions array twice.
+	 */
+	ret = flow_hw_allocate_actions(dev, actions, error);
+	if (ret)
+		goto error;
+	/* Note: the actions should be saved in the sub-flow rule itself for reference. */
 	ret = flow_hw_translate_actions(dev, attr, actions, *flow, &hw_act,
 					external, error);
 	if (ret)
 		goto error;
 
+	/*
+	 * TODO: check regarding release: the CT index is not saved per rule,
+	 * the index is in the conf of the given action.
+	 */
+
 	/*
 	 * If the flow is external (from application) OR device is started,
 	 * OR mreg discover, then apply immediately.
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index 1b625e07bd..36d422bdfa 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -443,7 +443,7 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
 			(uint32_t)cnt_num, SOCKET_ID_ANY,
 			RING_F_MP_HTS_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
 	if (cntp->wait_reset_list == NULL) {
-		DRV_LOG(ERR, "failed to create free list ring");
+		DRV_LOG(ERR, "failed to create wait reset list ring");
 		goto error;
 	}
 	snprintf(mz_name, sizeof(mz_name), "%s_U_RING", pcfg->name);
@@ -631,16 +631,17 @@ mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv,
 	return ret;
 }
 
-struct mlx5_hws_cnt_pool *
+int
 mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
-			 const struct rte_flow_port_attr *pattr, uint16_t nb_queue)
+			 uint32_t nb_counters, uint16_t nb_queue,
+			 struct mlx5_hws_cnt_pool *chost)
 {
 	struct mlx5_hws_cnt_pool *cpool = NULL;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hws_cache_param cparam = {0};
 	struct mlx5_hws_cnt_pool_cfg pcfg = {0};
 	char *mp_name;
-	int ret = 0;
+	int ret = -1;
 	size_t sz;
 
 	mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0, SOCKET_ID_ANY);
@@ -648,13 +649,9 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 		goto error;
 	snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_P_%x", dev->data->port_id);
 	pcfg.name = mp_name;
-	pcfg.request_num = pattr->nb_counters;
+	pcfg.request_num = nb_counters;
 	pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT;
-	if (pattr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-		struct mlx5_priv *host_priv =
-				priv->shared_host->data->dev_private;
-		struct mlx5_hws_cnt_pool *chost = host_priv->hws_cpool;
-
+	if (chost) {
 		pcfg.host_cpool = chost;
 		cpool = mlx5_hws_cnt_pool_init(priv->sh, &pcfg, &cparam);
 		if (cpool == NULL)
@@ -662,13 +659,13 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 		ret = mlx5_hws_cnt_pool_action_create(priv, cpool);
 		if (ret != 0)
 			goto error;
-		return cpool;
+		goto success;
 	}
 	/* init cnt service if not. */
 	if (priv->sh->cnt_svc == NULL) {
 		ret = mlx5_hws_cnt_svc_init(priv->sh);
-		if (ret != 0)
-			return NULL;
+		if (ret)
+			return ret;
 	}
 	cparam.fetch_sz = HWS_CNT_CACHE_FETCH_DEFAULT;
 	cparam.preload_sz = HWS_CNT_CACHE_PRELOAD_DEFAULT;
@@ -701,10 +698,13 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&priv->sh->cpool_lock);
 	LIST_INSERT_HEAD(&priv->sh->hws_cpool_list, cpool, next);
 	rte_spinlock_unlock(&priv->sh->cpool_lock);
-	return cpool;
+success:
+	priv->hws_cpool = cpool;
+	return 0;
 error:
 	mlx5_hws_cnt_pool_destroy(priv->sh, cpool);
-	return NULL;
+	priv->hws_cpool = NULL;
+	return ret;
 }
 
 void
@@ -1217,8 +1217,9 @@ mlx5_hws_age_info_destroy(struct mlx5_priv *priv)
  */
 int
 mlx5_hws_age_pool_init(struct rte_eth_dev *dev,
-		       const struct rte_flow_port_attr *attr,
-		       uint16_t nb_queues)
+		       uint32_t nb_aging_objects,
+		       uint16_t nb_queues,
+		       bool strict_queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_age_info *age_info = GET_PORT_AGE_INFO(priv);
@@ -1233,28 +1234,20 @@ mlx5_hws_age_pool_init(struct rte_eth_dev *dev,
 		.free = mlx5_free,
 		.type = "mlx5_hws_age_pool",
 	};
-	bool strict_queue = false;
 	uint32_t nb_alloc_cnts;
 	uint32_t rsize;
 	uint32_t nb_ages_updated;
 	int ret;
 
-	strict_queue = !!(attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE);
 	MLX5_ASSERT(priv->hws_cpool);
-	if (attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-		DRV_LOG(ERR, "Aging sn not supported "
-			"in cross vHCA sharing mode");
-		rte_errno = ENOTSUP;
-		return -ENOTSUP;
-	}
 	nb_alloc_cnts = mlx5_hws_cnt_pool_get_size(priv->hws_cpool);
 	if (strict_queue) {
 		rsize = mlx5_hws_aged_out_q_ring_size_get(nb_alloc_cnts,
 							  nb_queues);
-		nb_ages_updated = rsize * nb_queues + attr->nb_aging_objects;
+		nb_ages_updated = rsize * nb_queues + nb_aging_objects;
 	} else {
 		rsize = mlx5_hws_aged_out_ring_size_get(nb_alloc_cnts);
-		nb_ages_updated = rsize + attr->nb_aging_objects;
+		nb_ages_updated = rsize + nb_aging_objects;
 	}
 	ret = mlx5_hws_age_info_init(dev, nb_queues, strict_queue, rsize);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index db4e99e37c..996ac8dd9a 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -712,9 +712,10 @@ mlx5_hws_cnt_service_thread_create(struct mlx5_dev_ctx_shared *sh);
 void
 mlx5_hws_cnt_service_thread_destroy(struct mlx5_dev_ctx_shared *sh);
 
-struct mlx5_hws_cnt_pool *
+int
 mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
-			 const struct rte_flow_port_attr *pattr, uint16_t nb_queue);
+			 uint32_t nb_counters, uint16_t nb_queue,
+			 struct mlx5_hws_cnt_pool *chost);
 
 void
 mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
@@ -744,8 +745,9 @@ mlx5_hws_age_context_get(struct mlx5_priv *priv, uint32_t idx);
 
 int
 mlx5_hws_age_pool_init(struct rte_eth_dev *dev,
-		       const struct rte_flow_port_attr *attr,
-		       uint16_t nb_queues);
+		       uint32_t nb_aging_objects,
+		       uint16_t nb_queues,
+		       bool strict_queue);
 
 void
 mlx5_hws_age_pool_destroy(struct mlx5_priv *priv);
-- 
2.21.0