From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani
To:
Cc: , , , Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v2 13/34] net/mlx5: support bulk actions in non template mode
Date: Mon, 3 Jun 2024 11:05:02 +0300
Message-ID: <20240603080505.2641-9-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240603080505.2641-1-mkashani@nvidia.com>
References: <20240602102802.196920-1-mkashani@nvidia.com>
 <20240603080505.2641-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

Add support for the encap/decap and modify header actions in the
non-template API. Save one action per bulk according to the action
data, and reuse an existing action when possible. Store the actions
the same way as SWS does today, using the same key structure.

Signed-off-by: Maayan Kashani
---
 drivers/net/mlx5/mlx5_flow.h | 44 ++++--
 drivers/net/mlx5/mlx5_flow_dv.c | 268 ++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_flow_hw.c | 184 ++++++++++++++++++++--
 3 files changed, 368 insertions(+), 128 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 5bcfc1d88a..7ccc3cb7cd 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -666,6 +666,10 @@ struct mlx5_flow_dv_modify_hdr_resource { struct mlx5_list_entry entry; void *action; /**< Modify header action object. */ uint32_t idx; +#ifdef HAVE_MLX5_HWS_SUPPORT + void *mh_dr_pattern; /**< Modify header DR pattern(HWS only). */ +#endif + uint64_t flags; /**< Flags for RDMA API(HWS only). */ /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ uint8_t actions_num; /**< Number of modification actions. */ @@ -1317,7 +1321,11 @@ struct rte_flow_nt2hws { struct mlx5_flow_dv_matcher *matcher; /**< Auxiliary data stored per flow. */ struct rte_flow_hw_aux *flow_aux; -} __rte_packed; + /** Modify header pointer. */ + struct mlx5_flow_dv_modify_hdr_resource *modify_hdr; + /** Encap/decap index. */ + uint32_t rix_encap_decap; +}; /** HWS flow struct.
*/ struct rte_flow_hw { @@ -3079,14 +3087,14 @@ struct mlx5_list_entry *flow_dv_tag_clone_cb(void *tool_ctx, void *cb_ctx); void flow_dv_tag_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -int flow_dv_modify_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, +int flow_modify_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_modify_create_cb(void *tool_ctx, void *ctx); -void flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_modify_clone_cb(void *tool_ctx, +struct mlx5_list_entry *flow_modify_create_cb(void *tool_ctx, void *ctx); +void flow_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *ctx); -void flow_dv_modify_clone_free_cb(void *tool_ctx, +void flow_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); struct mlx5_list_entry *flow_dv_mreg_create_cb(void *tool_ctx, void *ctx); @@ -3098,18 +3106,30 @@ struct mlx5_list_entry *flow_dv_mreg_clone_cb(void *tool_ctx, void *ctx); void flow_dv_mreg_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -int flow_dv_encap_decap_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, +int flow_encap_decap_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_encap_decap_create_cb(void *tool_ctx, +struct mlx5_list_entry *flow_encap_decap_create_cb(void *tool_ctx, void *cb_ctx); -void flow_dv_encap_decap_remove_cb(void *tool_ctx, +void flow_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_encap_decap_clone_cb(void *tool_ctx, +struct mlx5_list_entry *flow_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_encap_decap_clone_free_cb(void *tool_ctx, +void flow_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); - +int __flow_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_encap_decap_resource *resource, + bool is_root, + struct mlx5_flow_dv_encap_decap_resource **encap_decap, + struct rte_flow_error *error); +int __flow_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow_dv_modify_hdr_resource **modify, + struct rte_flow_error *error); +int flow_encap_decap_resource_release(struct rte_eth_dev *dev, + uint32_t encap_decap_idx); int flow_matcher_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *ctx); struct mlx5_list_entry *flow_matcher_create_cb(void *tool_ctx, void *ctx); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a4fde4125e..3611ffa4a1 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -97,10 +97,6 @@ union flow_dv_attr { uint32_t attr; }; -static int -flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, - uint32_t encap_decap_idx); - static int flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, uint32_t port_id); @@ -4272,7 +4268,7 @@ flow_dv_validate_item_aggr_affinity(struct rte_eth_dev *dev, } int -flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, +flow_encap_decap_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -4293,7 +4289,7 @@ flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, } struct mlx5_list_entry * 
-flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) +flow_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -4301,14 +4297,11 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; struct mlx5_flow_dv_encap_decap_resource *resource; uint32_t idx; - int ret; + int ret = 0; +#ifdef HAVE_MLX5_HWS_SUPPORT + struct mlx5dr_action_reformat_header hdr; +#endif - if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - domain = sh->fdb_domain; - else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) - domain = sh->rx_domain; - else - domain = sh->tx_domain; /* Register new encap/decap resource. */ resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], &idx); if (!resource) { @@ -4318,10 +4311,29 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) return NULL; } *resource = *ctx_resource; - resource->idx = idx; - ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->cdev->ctx, - domain, resource, - &resource->action); + if (sh->config.dv_flow_en == 2) { +#ifdef HAVE_MLX5_HWS_SUPPORT + hdr.sz = ctx_resource->size; + hdr.data = ctx_resource->buf; + resource->action = mlx5dr_action_create_reformat + (ctx->data2, (enum mlx5dr_action_type)ctx_resource->reformat_type, 1, + &hdr, 1, ctx_resource->flags); + if (!resource->action) + ret = -1; +#else + ret = -1; +#endif + } else { + if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + domain = sh->fdb_domain; + else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) + domain = sh->rx_domain; + else + domain = sh->tx_domain; + ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->cdev->ctx, + domain, resource, + &resource->action); + } if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -4329,12 +4341,12 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) NULL, "cannot create action"); return NULL; } - + resource->idx = idx; return &resource->entry; } struct mlx5_list_entry * -flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, +flow_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; @@ -4356,7 +4368,7 @@ flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, } void -flow_dv_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_encap_decap_resource *res = @@ -4365,26 +4377,11 @@ flow_dv_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); } -/** - * Find existing encap/decap resource or create and register a new one. - * - * @param[in, out] dev - * Pointer to rte_eth_dev structure. - * @param[in, out] resource - * Pointer to encap/decap resource. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. - * @param[out] error - * pointer to error structure. - * - * @return - * 0 on success otherwise -errno and errno is set. 
- */ -static int -flow_dv_encap_decap_resource_register - (struct rte_eth_dev *dev, +int +__flow_encap_decap_resource_register(struct rte_eth_dev *dev, struct mlx5_flow_dv_encap_decap_resource *resource, - struct mlx5_flow *dev_flow, + bool is_root, + struct mlx5_flow_dv_encap_decap_resource **encap_decap, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; @@ -4407,13 +4404,14 @@ flow_dv_encap_decap_resource_register { .ft_type = resource->ft_type, .refmt_type = resource->reformat_type, - .is_root = !!dev_flow->dv.group, + .is_root = is_root, .reserve = 0, } }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = resource, + .data2 = priv->dr_ctx, }; struct mlx5_hlist *encaps_decaps; uint64_t key64; @@ -4422,15 +4420,14 @@ flow_dv_encap_decap_resource_register "encaps_decaps", MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, true, true, sh, - flow_dv_encap_decap_create_cb, - flow_dv_encap_decap_match_cb, - flow_dv_encap_decap_remove_cb, - flow_dv_encap_decap_clone_cb, - flow_dv_encap_decap_clone_free_cb, + flow_encap_decap_create_cb, + flow_encap_decap_match_cb, + flow_encap_decap_remove_cb, + flow_encap_decap_clone_cb, + flow_encap_decap_clone_free_cb, error); if (unlikely(!encaps_decaps)) return -rte_errno; - resource->flags = dev_flow->dv.group ? 0 : 1; key64 = __rte_raw_cksum(&encap_decap_key.v32, sizeof(encap_decap_key.v32), 0); if (resource->reformat_type != @@ -4440,9 +4437,40 @@ flow_dv_encap_decap_resource_register entry = mlx5_hlist_register(encaps_decaps, key64, &ctx); if (!entry) return -rte_errno; - resource = container_of(entry, typeof(*resource), entry); - dev_flow->dv.encap_decap = resource; - dev_flow->handle->dvh.rix_encap_decap = resource->idx; + *encap_decap = container_of(entry, typeof(*resource), entry); + return 0; +} + +/** + * Find existing encap/decap resource or create and register a new one. + * + * @param[in, out] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] resource + * Pointer to encap/decap resource. + * @param[in, out] dev_flow + * Pointer to the dev_flow. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_encap_decap_resource *resource, + struct mlx5_flow *dev_flow, + struct rte_flow_error *error) +{ + int ret; + + resource->flags = dev_flow->dv.group ? 
0 : 1; + ret = __flow_encap_decap_resource_register(dev, resource, !!dev_flow->dv.group, + &dev_flow->dv.encap_decap, error); + if (ret) + return ret; + dev_flow->handle->dvh.rix_encap_decap = dev_flow->dv.encap_decap->idx; return 0; } @@ -6122,7 +6150,7 @@ flow_dv_validate_action_modify_ipv6_dscp(const uint64_t action_flags, } int -flow_dv_modify_match_cb(void *tool_ctx __rte_unused, +flow_modify_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -6178,7 +6206,7 @@ flow_dv_modify_ipool_get(struct mlx5_dev_ctx_shared *sh, uint8_t index) } struct mlx5_list_entry * -flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) +flow_modify_create_cb(void *tool_ctx, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -6187,11 +6215,13 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; struct mlx5_indexed_pool *ipool = flow_dv_modify_ipool_get(sh, ref->actions_num - 1); - int ret; + int ret = 0; uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]); uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); uint32_t idx; + struct mlx5_tbl_multi_pattern_ctx *mpctx; + typeof(mpctx->mh) *mh_dr_pattern = ref->mh_dr_pattern; if (unlikely(!ipool)) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -6205,18 +6235,30 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) "cannot allocate resource memory"); return NULL; } - rte_memcpy(RTE_PTR_ADD(entry, offsetof(typeof(*entry), ft_type)), - RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), - key_len + data_len); - if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - ns = sh->fdb_domain; - else if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) - ns = sh->tx_domain; - else - ns = sh->rx_domain; - ret = mlx5_flow_os_create_flow_action_modify_header - (sh->cdev->ctx, ns, entry, - data_len, &entry->action); + rte_memcpy(&entry->ft_type, + RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), + key_len + data_len); + if (sh->config.dv_flow_en == 2) { +#ifdef HAVE_MLX5_HWS_SUPPORT + entry->action = mlx5dr_action_create_modify_header(ctx->data2, + mh_dr_pattern->elements_num, + mh_dr_pattern->pattern, 0, ref->flags); + if (!entry->action) + ret = -1; +#else + ret = -1; +#endif + } else { + if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + ns = sh->fdb_domain; + else if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) + ns = sh->tx_domain; + else + ns = sh->rx_domain; + ret = mlx5_flow_os_create_flow_action_modify_header + (sh->cdev->ctx, ns, entry, + data_len, &entry->action); + } if (ret) { mlx5_ipool_free(sh->mdh_ipools[ref->actions_num - 1], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -6229,7 +6271,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) } struct mlx5_list_entry * -flow_dv_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, +flow_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; @@ -6253,7 +6295,7 @@ flow_dv_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, } void -flow_dv_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_modify_hdr_resource *res = @@ -6521,27 +6563,11 @@ flow_dv_validate_action_sample(uint64_t *action_flags, return 0; } -/** - * Find 
existing modify-header resource or create and register a new one. - * - * @param dev[in, out] - * Pointer to rte_eth_dev structure. - * @param[in, out] resource - * Pointer to modify-header resource. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. - * @param[out] error - * pointer to error structure. - * - * @return - * 0 on success otherwise -errno and errno is set. - */ -static int -flow_dv_modify_hdr_resource_register - (struct rte_eth_dev *dev, - struct mlx5_flow_dv_modify_hdr_resource *resource, - struct mlx5_flow *dev_flow, - struct rte_flow_error *error) +int +__flow_modify_hdr_resource_register(struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow_dv_modify_hdr_resource **modify, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; @@ -6552,6 +6578,7 @@ flow_dv_modify_hdr_resource_register struct mlx5_flow_cb_ctx ctx = { .error = error, .data = resource, + .data2 = priv->dr_ctx, }; struct mlx5_hlist *modify_cmds; uint64_t key64; @@ -6560,15 +6587,14 @@ flow_dv_modify_hdr_resource_register "hdr_modify", MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, true, false, sh, - flow_dv_modify_create_cb, - flow_dv_modify_match_cb, - flow_dv_modify_remove_cb, - flow_dv_modify_clone_cb, - flow_dv_modify_clone_free_cb, + flow_modify_create_cb, + flow_modify_match_cb, + flow_modify_remove_cb, + flow_modify_clone_cb, + flow_modify_clone_free_cb, error); if (unlikely(!modify_cmds)) return -rte_errno; - resource->root = !dev_flow->dv.group; if (resource->actions_num > flow_dv_modify_hdr_action_max(dev, resource->root)) return rte_flow_error_set(error, EOVERFLOW, @@ -6578,11 +6604,37 @@ flow_dv_modify_hdr_resource_register entry = mlx5_hlist_register(modify_cmds, key64, &ctx); if (!entry) return -rte_errno; - resource = container_of(entry, typeof(*resource), entry); - dev_flow->handle->dvh.modify_hdr = resource; + *modify = container_of(entry, typeof(*resource), entry); return 0; } +/** + * Find existing modify-header resource or create and register a new one. + * + * @param dev[in, out] + * Pointer to rte_eth_dev structure. + * @param[in, out] resource + * Pointer to modify-header resource. + * @param[in, out] dev_flow + * Pointer to the dev_flow. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow *dev_flow, + struct rte_flow_error *error) +{ + resource->root = !dev_flow->dv.group; + return __flow_modify_hdr_resource_register(dev, resource, + &dev_flow->handle->dvh.modify_hdr, error); +} + /** * Get DV flow counter by index. 
* @@ -12403,7 +12455,7 @@ flow_dv_sample_sub_actions_release(struct rte_eth_dev *dev, act_res->rix_hrxq = 0; } if (act_res->rix_encap_decap) { - flow_dv_encap_decap_resource_release(dev, + flow_encap_decap_resource_release(dev, act_res->rix_encap_decap); act_res->rix_encap_decap = 0; } @@ -15840,13 +15892,18 @@ flow_dv_matcher_release(struct rte_eth_dev *dev, } void -flow_dv_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_encap_decap_resource *res = container_of(entry, typeof(*res), entry); - claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); +#ifdef HAVE_MLX5_HWS_SUPPORT + if (sh->config.dv_flow_en == 2) + claim_zero(mlx5dr_action_destroy(res->action)); + else +#endif + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); } @@ -15861,8 +15918,8 @@ flow_dv_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) * @return * 1 while a reference on it exists, 0 when freed. */ -static int -flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, +int +flow_encap_decap_resource_release(struct rte_eth_dev *dev, uint32_t encap_decap_idx) { struct mlx5_priv *priv = dev->data->dev_private; @@ -15902,13 +15959,18 @@ flow_dv_jump_tbl_resource_release(struct rte_eth_dev *dev, } void -flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_modify_hdr_resource *res = container_of(entry, typeof(*res), entry); struct mlx5_dev_ctx_shared *sh = tool_ctx; - claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); +#ifdef HAVE_MLX5_HWS_SUPPORT + if (sh->config.dv_flow_en == 2) + claim_zero(mlx5dr_action_destroy(res->action)); + else +#endif + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); mlx5_ipool_free(sh->mdh_ipools[res->actions_num - 1], res->idx); } @@ -16277,7 +16339,7 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) if (dev_handle->dvh.rix_dest_array) flow_dv_dest_array_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_encap_decap) - flow_dv_encap_decap_resource_release(dev, + flow_encap_decap_resource_release(dev, dev_handle->dvh.rix_encap_decap); if (dev_handle->dvh.modify_hdr) flow_dv_modify_hdr_resource_release(dev, dev_handle); diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index d768968676..41f20ed222 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -9,6 +9,7 @@ #include #include "mlx5.h" +#include "mlx5_common.h" #include "mlx5_defs.h" #include "mlx5_flow.h" #include "mlx5_rx.h" @@ -224,6 +225,22 @@ mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) return -1; } +/* Include only supported reformat actions for BWC non template API. 
*/ +static __rte_always_inline int +mlx5_bwc_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) +{ + switch (type) { + case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: + return mlx5_multi_pattern_reformat_to_index(type); + default: + break; + } + return -1; +} + static __rte_always_inline enum mlx5dr_action_type mlx5_multi_pattern_reformat_index_to_type(uint32_t ix) { @@ -1317,11 +1334,13 @@ flow_hw_converted_mhdr_cmds_append(struct mlx5_hw_modify_header_action *mhdr, static __rte_always_inline void flow_hw_modify_field_init(struct mlx5_hw_modify_header_action *mhdr, - struct rte_flow_actions_template *at) + struct rte_flow_actions_template *at, + bool nt_mode) { memset(mhdr, 0, sizeof(*mhdr)); /* Modify header action without any commands is shared by default. */ - mhdr->shared = true; + if (!(nt_mode)) + mhdr->shared = true; mhdr->pos = at->mhdr_off; } @@ -2124,7 +2143,6 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, } else { typeof(mp_ctx->mh) *mh = &mp_ctx->mh; uint32_t idx = mh->elements_num; - mh->pattern[mh->elements_num++] = pattern; acts->mhdr->multi_pattern = 1; acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; @@ -2270,7 +2288,7 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev, uint32_t target_grp = 0; int table_type; - flow_hw_modify_field_init(&mhdr, at); + flow_hw_modify_field_init(&mhdr, at, nt_mode); if (attr->transfer) type = MLX5DR_TABLE_TYPE_FDB; else if (attr->egress) @@ -3226,6 +3244,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, flow->res_idx - mp_segment->head_index; rule_acts[pos].modify_header.data = (uint8_t *)ap->mhdr_cmd; + MLX5_ASSERT(hw_acts->mhdr->mhdr_cmds_num <= MLX5_MHDR_MAX_CMD); rte_memcpy(ap->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, sizeof(*ap->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); } @@ -12225,12 +12244,106 @@ static int flow_hw_prepare(struct rte_eth_dev *dev, /*TODO: consider if other allocation is needed for actions translate. */ return 0; } +#define FLOW_HW_SET_DV_FIELDS(flow_attr, root, flags) \ +{ \ + typeof(flow_attr) _flow_attr = (flow_attr); \ + if (_flow_attr->transfer) \ + dv_resource.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; \ + else \ + dv_resource.ft_type = _flow_attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : \ + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; \ + root = _flow_attr->group ? 0 : 1; \ + flags = mlx5_hw_act_flag[!!_flow_attr->group][get_mlx5dr_table_type(_flow_attr)]; \ +} + +static int +flow_hw_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct mlx5_hw_actions *hw_acts, + struct rte_flow_hw *dev_flow, + struct rte_flow_error *error) +{ + struct rte_flow_attr *attr = &table->cfg.attr.flow_attr; + struct mlx5_flow_dv_modify_hdr_resource *dv_resource_ptr = NULL; + struct mlx5_flow_dv_modify_hdr_resource dv_resource; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + int ret; + + if (hw_acts->mhdr) { + dv_resource.actions_num = hw_acts->mhdr->mhdr_cmds_num; + memcpy(dv_resource.actions, hw_acts->mhdr->mhdr_cmds, + sizeof(struct mlx5_modification_cmd) * dv_resource.actions_num); + } else { + return 0; + } + FLOW_HW_SET_DV_FIELDS(attr, dv_resource.root, dv_resource.flags); + /* Save a pointer to the pattern needed for DR layer created on actions translate. 
*/ + dv_resource.mh_dr_pattern = &table->mpctx.mh; + ret = __flow_modify_hdr_resource_register(dev, &dv_resource, + &dv_resource_ptr, error); + if (ret) + return ret; + MLX5_ASSERT(dv_resource_ptr); + dev_flow->nt2hws->modify_hdr = dv_resource_ptr; + /* keep action for the rule construction. */ + mpctx->segments[0].mhdr_action = dv_resource_ptr->action; + /* Bulk size is 1, so index is 1. */ + dev_flow->res_idx = 1; + return 0; +} + +static int +flow_hw_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct mlx5_hw_actions *hw_acts, + struct rte_flow_hw *dev_flow, + struct rte_flow_error *error) +{ + struct rte_flow_attr *attr = &table->cfg.attr.flow_attr; + struct mlx5_flow_dv_encap_decap_resource *dv_resource_ptr = NULL; + struct mlx5_flow_dv_encap_decap_resource dv_resource; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + int ret; + bool is_root; + int ix; + + if (hw_acts->encap_decap) + dv_resource.reformat_type = hw_acts->encap_decap->action_type; + else + return 0; + ix = mlx5_bwc_multi_pattern_reformat_to_index((enum mlx5dr_action_type) + dv_resource.reformat_type); + if (ix < 0) + return ix; + typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + ix; + if (!reformat->elements_num) + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "No reformat action exist in the table."); + dv_resource.size = reformat->reformat_hdr->sz; + FLOW_HW_SET_DV_FIELDS(attr, is_root, dv_resource.flags); + MLX5_ASSERT(dv_resource.size <= MLX5_ENCAP_MAX_LEN); + memcpy(dv_resource.buf, reformat->reformat_hdr->data, dv_resource.size); + ret = __flow_encap_decap_resource_register(dev, &dv_resource, is_root, + &dv_resource_ptr, error); + if (ret) + return ret; + MLX5_ASSERT(dv_resource_ptr); + dev_flow->nt2hws->rix_encap_decap = dv_resource_ptr->idx; + /* keep action for the rule construction. */ + mpctx->segments[0].reformat_action[ix] = dv_resource_ptr->action; + /* Bulk size is 1, so index is 1. */ + dev_flow->res_idx = 1; + return 0; +} static int flow_hw_translate_flow_actions(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct rte_flow_hw *flow, + struct mlx5_flow_hw_action_params *ap, struct mlx5_hw_actions *hw_acts, uint64_t item_flags, bool external, @@ -12249,12 +12362,28 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, .transfer = attr->transfer, }; struct rte_flow_action masks[MLX5_HW_MAX_ACTS]; - struct mlx5_flow_hw_action_params ap; + struct rte_flow_action_raw_encap encap_conf; + struct rte_flow_action_modify_field mh_conf[MLX5_HW_MAX_ACTS]; + memset(&masks, 0, sizeof(masks)); int i = -1; do { i++; masks[i].type = actions[i].type; + if (masks[i].type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + memset(&encap_conf, 0x00, sizeof(encap_conf)); + encap_conf.size = ((const struct rte_flow_action_raw_encap *) + (actions[i].conf))->size; + masks[i].conf = &encap_conf; + } + if (masks[i].type == RTE_FLOW_ACTION_TYPE_MODIFY_FIELD) { + const struct rte_flow_action_modify_field *conf = actions[i].conf; + memset(&mh_conf, 0xff, sizeof(mh_conf[i])); + mh_conf[i].operation = conf->operation; + mh_conf[i].dst.field = conf->dst.field; + mh_conf[i].src.field = conf->src.field; + masks[i].conf = &mh_conf[i]; + } } while (masks[i].type != RTE_FLOW_ACTION_TYPE_END); RTE_SET_USED(action_flags); /* The group in the attribute translation was done in advance. 
*/ @@ -12279,8 +12408,6 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, ret = -rte_errno; goto end; } - if (ret) - goto clean_up; grp.group_id = src_group; table->grp = &grp; table->type = table_type; @@ -12290,19 +12417,25 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, table->ats[0].action_template = at; ret = __flow_hw_translate_actions_template(dev, &table->cfg, hw_acts, at, &table->mpctx, true, error); + if (ret) + goto end; + /* handle bulk actions register. */ + ret = flow_hw_encap_decap_resource_register(dev, table, hw_acts, flow, error); + if (ret) + goto clean_up; + ret = flow_hw_modify_hdr_resource_register(dev, table, hw_acts, flow, error); if (ret) goto clean_up; table->ats[0].acts = *hw_acts; - ret = flow_hw_actions_construct(dev, flow, &ap, + ret = flow_hw_actions_construct(dev, flow, ap, &table->ats[0], item_flags, table, actions, hw_acts->rule_acts, 0, error); if (ret) goto clean_up; - goto end; clean_up: /* Make sure that there is no garbage in the actions. */ - __flow_hw_actions_release(dev, hw_acts); + __flow_hw_action_template_destroy(dev, hw_acts); end: if (table) mlx5_free(table); @@ -12508,6 +12641,7 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, { int ret; struct mlx5_hw_actions hw_act; + struct mlx5_flow_hw_action_params ap; struct mlx5_flow_dv_matcher matcher = { .mask = { .size = sizeof(matcher.mask.buf), @@ -12574,7 +12708,7 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, goto error; /* Note: the actions should be saved in the sub-flow rule itself for reference. */ - ret = flow_hw_translate_flow_actions(dev, attr, actions, *flow, &hw_act, + ret = flow_hw_translate_flow_actions(dev, attr, actions, *flow, &ap, &hw_act, item_flags, external, error); if (ret) goto error; @@ -12595,9 +12729,19 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, if (ret) goto error; } - return 0; - + ret = 0; error: + /* + * Release memory allocated. + * Cannot use __flow_hw_actions_release(dev, &hw_act); + * since it destroys the actions as well. + */ + if (hw_act.encap_decap) + mlx5_free(hw_act.encap_decap); + if (hw_act.push_remove) + mlx5_free(hw_act.push_remove); + if (hw_act.mhdr) + mlx5_free(hw_act.mhdr); return ret; } #endif @@ -12606,6 +12750,7 @@ static void flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow) { int ret; + struct mlx5_priv *priv = dev->data->dev_private; if (!flow || !flow->nt2hws) return; @@ -12630,6 +12775,19 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow) */ if (flow->nt2hws->flow_aux) mlx5_free(flow->nt2hws->flow_aux); + + if (flow->nt2hws->rix_encap_decap) { + ret = flow_encap_decap_resource_release(dev, flow->nt2hws->rix_encap_decap); + if (ret) + DRV_LOG(ERR, "failed to release encap decap."); + } + if (flow->nt2hws->modify_hdr) { + MLX5_ASSERT(flow->nt2hws->modify_hdr->action); + ret = mlx5_hlist_unregister(priv->sh->modify_cmds, + &flow->nt2hws->modify_hdr->entry); + if (ret) + DRV_LOG(ERR, "failed to release modify action."); + } } #ifdef HAVE_MLX5_HWS_SUPPORT -- 2.25.1
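
Note on the reuse scheme (illustrative only): the commit message describes saving
one action per bulk and reusing it based on the action data, with the same key
structure as SWS. Below is a minimal, hypothetical C sketch of that idea; the names
(example_action, action_register, action_release, REGISTRY_SIZE) are invented for
illustration and are not part of the driver. In the patch itself, the key is a
checksum over the per-type key area (for example encap_decap_key, or the
modify-header "Key area for hash list matching"), and entry reuse and release go
through mlx5_hlist_register()/mlx5_hlist_unregister() on the shared hash lists.

/*
 * Illustrative sketch only -- not mlx5 driver code. It models the idea
 * described in the commit message: hash the action data into a key,
 * reuse a previously created action when the key and data match, and
 * create a new one only on a miss. All names here are hypothetical.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define REGISTRY_SIZE 64
#define DATA_MAX 64

struct example_action {
	uint64_t key;           /* hash of the action data, used as lookup key */
	uint32_t refcnt;        /* number of flows sharing this action */
	size_t data_len;
	uint8_t data[DATA_MAX]; /* raw action data, e.g. a reformat header */
};

static struct example_action *registry[REGISTRY_SIZE];

/* FNV-1a hash standing in for the real key computation (__rte_raw_cksum). */
static uint64_t
action_key(const uint8_t *data, size_t len)
{
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= data[i];
		h *= 0x100000001b3ULL;
	}
	return h;
}

/* Return an existing action for identical data, or create and register one. */
static struct example_action *
action_register(const uint8_t *data, size_t len)
{
	uint64_t key = action_key(data, len);
	unsigned int slot = key % REGISTRY_SIZE;
	struct example_action *act = registry[slot];

	if (len > DATA_MAX)
		return NULL;
	if (act != NULL && act->key == key && act->data_len == len &&
	    memcmp(act->data, data, len) == 0) {
		act->refcnt++; /* reuse: one action serves the whole bulk */
		return act;
	}
	act = calloc(1, sizeof(*act));
	if (act == NULL)
		return NULL;
	act->key = key;
	act->refcnt = 1;
	act->data_len = len;
	memcpy(act->data, data, len);
	registry[slot] = act; /* a real table would resolve collisions */
	return act;
}

/* Drop one reference; destroy the action when the last user releases it. */
static void
action_release(struct example_action *act)
{
	unsigned int slot;

	if (act == NULL || --act->refcnt != 0)
		return;
	slot = act->key % REGISTRY_SIZE;
	if (registry[slot] == act)
		registry[slot] = NULL;
	free(act);
}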