From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani
To: dev@dpdk.org
CC: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v7 09/11] net/mlx5: support bulk actions in non template mode
Date: Sun, 9 Jun 2024 14:01:05 +0300
Message-ID: <20240609110107.92009-9-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240609110107.92009-1-mkashani@nvidia.com>
References: <20240609085600.87274-1-mkashani@nvidia.com> <20240609110107.92009-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Add support for encap/decap and modify header actions in the non-template API.
Save one action per bulk according to the action data, and reuse an existing
action when possible. Actions are stored the same way as for SWS today, using
the same key structure.

Signed-off-by: Maayan Kashani
Acked-by: Dariusz Sosnowski
---
Note: a short illustrative sketch of the intended action reuse is appended
after the patch.

 drivers/net/mlx5/mlx5_flow.h    |  44 ++++--
 drivers/net/mlx5/mlx5_flow_dv.c | 268 ++++++++++++++++++++------
 drivers/net/mlx5/mlx5_flow_hw.c | 184 ++++++++++++++++++++--
 3 files changed, 368 insertions(+), 128 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 73f4471e983..582d5754cd5 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -671,6 +671,10 @@ struct mlx5_flow_dv_modify_hdr_resource { struct mlx5_list_entry entry; void *action; /**< Modify header action object. */ uint32_t idx; +#ifdef HAVE_MLX5_HWS_SUPPORT + void *mh_dr_pattern; /**< Modify header DR pattern(HWS only). */ +#endif + uint64_t flags; /**< Flags for RDMA API(HWS only). */ /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ uint8_t actions_num; /**< Number of modification actions. */ @@ -1322,7 +1326,11 @@ struct rte_flow_nt2hws { struct mlx5_flow_dv_matcher *matcher; /**< Auxiliary data stored per flow. */ struct rte_flow_hw_aux *flow_aux; -} __rte_packed; + /** Modify header pointer. */ + struct mlx5_flow_dv_modify_hdr_resource *modify_hdr; + /** Encap/decap index. */ + uint32_t rix_encap_decap; +}; /** HWS flow struct.
*/ struct rte_flow_hw { @@ -3238,14 +3246,14 @@ struct mlx5_list_entry *flow_dv_tag_clone_cb(void *tool_ctx, void *cb_ctx); void flow_dv_tag_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -int flow_dv_modify_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, +int flow_modify_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_modify_create_cb(void *tool_ctx, void *ctx); -void flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_modify_clone_cb(void *tool_ctx, +struct mlx5_list_entry *flow_modify_create_cb(void *tool_ctx, void *ctx); +void flow_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *ctx); -void flow_dv_modify_clone_free_cb(void *tool_ctx, +void flow_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); struct mlx5_list_entry *flow_dv_mreg_create_cb(void *tool_ctx, void *ctx); @@ -3257,18 +3265,30 @@ struct mlx5_list_entry *flow_dv_mreg_clone_cb(void *tool_ctx, void *ctx); void flow_dv_mreg_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -int flow_dv_encap_decap_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, +int flow_encap_decap_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_encap_decap_create_cb(void *tool_ctx, +struct mlx5_list_entry *flow_encap_decap_create_cb(void *tool_ctx, void *cb_ctx); -void flow_dv_encap_decap_remove_cb(void *tool_ctx, +void flow_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_encap_decap_clone_cb(void *tool_ctx, +struct mlx5_list_entry *flow_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_encap_decap_clone_free_cb(void *tool_ctx, +void flow_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); - +int __flow_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_encap_decap_resource *resource, + bool is_root, + struct mlx5_flow_dv_encap_decap_resource **encap_decap, + struct rte_flow_error *error); +int __flow_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow_dv_modify_hdr_resource **modify, + struct rte_flow_error *error); +int flow_encap_decap_resource_release(struct rte_eth_dev *dev, + uint32_t encap_decap_idx); int flow_matcher_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *ctx); struct mlx5_list_entry *flow_matcher_create_cb(void *tool_ctx, void *ctx); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 345d6ee3b8f..9c2dfa95a16 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -97,10 +97,6 @@ union flow_dv_attr { uint32_t attr; }; -static int -flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, - uint32_t encap_decap_idx); - static int flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, uint32_t port_id); @@ -4287,7 +4283,7 @@ flow_dv_validate_item_aggr_affinity(struct rte_eth_dev *dev, } int -flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, +flow_encap_decap_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -4308,7 +4304,7 @@ flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, } struct mlx5_list_entry 
* -flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) +flow_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -4316,14 +4312,11 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; struct mlx5_flow_dv_encap_decap_resource *resource; uint32_t idx; - int ret; + int ret = 0; +#ifdef HAVE_MLX5_HWS_SUPPORT + struct mlx5dr_action_reformat_header hdr; +#endif - if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - domain = sh->fdb_domain; - else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) - domain = sh->rx_domain; - else - domain = sh->tx_domain; /* Register new encap/decap resource. */ resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], &idx); if (!resource) { @@ -4333,10 +4326,29 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) return NULL; } *resource = *ctx_resource; - resource->idx = idx; - ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->cdev->ctx, - domain, resource, - &resource->action); + if (sh->config.dv_flow_en == 2) { +#ifdef HAVE_MLX5_HWS_SUPPORT + hdr.sz = ctx_resource->size; + hdr.data = ctx_resource->buf; + resource->action = mlx5dr_action_create_reformat + (ctx->data2, (enum mlx5dr_action_type)ctx_resource->reformat_type, 1, + &hdr, 1, ctx_resource->flags); + if (!resource->action) + ret = -1; +#else + ret = -1; +#endif + } else { + if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + domain = sh->fdb_domain; + else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) + domain = sh->rx_domain; + else + domain = sh->tx_domain; + ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->cdev->ctx, + domain, resource, + &resource->action); + } if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -4344,12 +4356,12 @@ flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) NULL, "cannot create action"); return NULL; } - + resource->idx = idx; return &resource->entry; } struct mlx5_list_entry * -flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, +flow_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; @@ -4371,7 +4383,7 @@ flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, } void -flow_dv_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_encap_decap_resource *res = @@ -4380,26 +4392,11 @@ flow_dv_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); } -/** - * Find existing encap/decap resource or create and register a new one. - * - * @param[in, out] dev - * Pointer to rte_eth_dev structure. - * @param[in, out] resource - * Pointer to encap/decap resource. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. - * @param[out] error - * pointer to error structure. - * - * @return - * 0 on success otherwise -errno and errno is set. 
- */ -static int -flow_dv_encap_decap_resource_register - (struct rte_eth_dev *dev, +int +__flow_encap_decap_resource_register(struct rte_eth_dev *dev, struct mlx5_flow_dv_encap_decap_resource *resource, - struct mlx5_flow *dev_flow, + bool is_root, + struct mlx5_flow_dv_encap_decap_resource **encap_decap, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; @@ -4422,13 +4419,14 @@ flow_dv_encap_decap_resource_register { .ft_type = resource->ft_type, .refmt_type = resource->reformat_type, - .is_root = !!dev_flow->dv.group, + .is_root = is_root, .reserve = 0, } }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = resource, + .data2 = priv->dr_ctx, }; struct mlx5_hlist *encaps_decaps; uint64_t key64; @@ -4437,15 +4435,14 @@ flow_dv_encap_decap_resource_register "encaps_decaps", MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, true, true, sh, - flow_dv_encap_decap_create_cb, - flow_dv_encap_decap_match_cb, - flow_dv_encap_decap_remove_cb, - flow_dv_encap_decap_clone_cb, - flow_dv_encap_decap_clone_free_cb, + flow_encap_decap_create_cb, + flow_encap_decap_match_cb, + flow_encap_decap_remove_cb, + flow_encap_decap_clone_cb, + flow_encap_decap_clone_free_cb, error); if (unlikely(!encaps_decaps)) return -rte_errno; - resource->flags = dev_flow->dv.group ? 0 : 1; key64 = __rte_raw_cksum(&encap_decap_key.v32, sizeof(encap_decap_key.v32), 0); if (resource->reformat_type != @@ -4455,9 +4452,40 @@ flow_dv_encap_decap_resource_register entry = mlx5_hlist_register(encaps_decaps, key64, &ctx); if (!entry) return -rte_errno; - resource = container_of(entry, typeof(*resource), entry); - dev_flow->dv.encap_decap = resource; - dev_flow->handle->dvh.rix_encap_decap = resource->idx; + *encap_decap = container_of(entry, typeof(*resource), entry); + return 0; +} + +/** + * Find existing encap/decap resource or create and register a new one. + * + * @param[in, out] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] resource + * Pointer to encap/decap resource. + * @param[in, out] dev_flow + * Pointer to the dev_flow. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_encap_decap_resource *resource, + struct mlx5_flow *dev_flow, + struct rte_flow_error *error) +{ + int ret; + + resource->flags = dev_flow->dv.group ? 
0 : 1; + ret = __flow_encap_decap_resource_register(dev, resource, !!dev_flow->dv.group, + &dev_flow->dv.encap_decap, error); + if (ret) + return ret; + dev_flow->handle->dvh.rix_encap_decap = dev_flow->dv.encap_decap->idx; return 0; } @@ -6137,7 +6165,7 @@ flow_dv_validate_action_modify_ipv6_dscp(const uint64_t action_flags, } int -flow_dv_modify_match_cb(void *tool_ctx __rte_unused, +flow_modify_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -6193,7 +6221,7 @@ flow_dv_modify_ipool_get(struct mlx5_dev_ctx_shared *sh, uint8_t index) } struct mlx5_list_entry * -flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) +flow_modify_create_cb(void *tool_ctx, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -6202,11 +6230,13 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; struct mlx5_indexed_pool *ipool = flow_dv_modify_ipool_get(sh, ref->actions_num - 1); - int ret; + int ret = 0; uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]); uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); uint32_t idx; + struct mlx5_tbl_multi_pattern_ctx *mpctx; + typeof(mpctx->mh) *mh_dr_pattern = ref->mh_dr_pattern; if (unlikely(!ipool)) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -6220,18 +6250,30 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) "cannot allocate resource memory"); return NULL; } - rte_memcpy(RTE_PTR_ADD(entry, offsetof(typeof(*entry), ft_type)), - RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), - key_len + data_len); - if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) - ns = sh->fdb_domain; - else if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) - ns = sh->tx_domain; - else - ns = sh->rx_domain; - ret = mlx5_flow_os_create_flow_action_modify_header - (sh->cdev->ctx, ns, entry, - data_len, &entry->action); + rte_memcpy(&entry->ft_type, + RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), + key_len + data_len); + if (sh->config.dv_flow_en == 2) { +#ifdef HAVE_MLX5_HWS_SUPPORT + entry->action = mlx5dr_action_create_modify_header(ctx->data2, + mh_dr_pattern->elements_num, + mh_dr_pattern->pattern, 0, ref->flags); + if (!entry->action) + ret = -1; +#else + ret = -1; +#endif + } else { + if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + ns = sh->fdb_domain; + else if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) + ns = sh->tx_domain; + else + ns = sh->rx_domain; + ret = mlx5_flow_os_create_flow_action_modify_header + (sh->cdev->ctx, ns, entry, + data_len, &entry->action); + } if (ret) { mlx5_ipool_free(sh->mdh_ipools[ref->actions_num - 1], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -6244,7 +6286,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) } struct mlx5_list_entry * -flow_dv_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, +flow_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = tool_ctx; @@ -6268,7 +6310,7 @@ flow_dv_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, } void -flow_dv_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_modify_hdr_resource *res = @@ -6536,27 +6578,11 @@ flow_dv_validate_action_sample(uint64_t *action_flags, return 0; } -/** - * Find 
existing modify-header resource or create and register a new one. - * - * @param dev[in, out] - * Pointer to rte_eth_dev structure. - * @param[in, out] resource - * Pointer to modify-header resource. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. - * @param[out] error - * pointer to error structure. - * - * @return - * 0 on success otherwise -errno and errno is set. - */ -static int -flow_dv_modify_hdr_resource_register - (struct rte_eth_dev *dev, - struct mlx5_flow_dv_modify_hdr_resource *resource, - struct mlx5_flow *dev_flow, - struct rte_flow_error *error) +int +__flow_modify_hdr_resource_register(struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow_dv_modify_hdr_resource **modify, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; @@ -6567,6 +6593,7 @@ flow_dv_modify_hdr_resource_register struct mlx5_flow_cb_ctx ctx = { .error = error, .data = resource, + .data2 = priv->dr_ctx, }; struct mlx5_hlist *modify_cmds; uint64_t key64; @@ -6575,15 +6602,14 @@ flow_dv_modify_hdr_resource_register "hdr_modify", MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, true, false, sh, - flow_dv_modify_create_cb, - flow_dv_modify_match_cb, - flow_dv_modify_remove_cb, - flow_dv_modify_clone_cb, - flow_dv_modify_clone_free_cb, + flow_modify_create_cb, + flow_modify_match_cb, + flow_modify_remove_cb, + flow_modify_clone_cb, + flow_modify_clone_free_cb, error); if (unlikely(!modify_cmds)) return -rte_errno; - resource->root = !dev_flow->dv.group; if (resource->actions_num > flow_dv_modify_hdr_action_max(dev, resource->root)) return rte_flow_error_set(error, EOVERFLOW, @@ -6593,11 +6619,37 @@ flow_dv_modify_hdr_resource_register entry = mlx5_hlist_register(modify_cmds, key64, &ctx); if (!entry) return -rte_errno; - resource = container_of(entry, typeof(*resource), entry); - dev_flow->handle->dvh.modify_hdr = resource; + *modify = container_of(entry, typeof(*resource), entry); return 0; } +/** + * Find existing modify-header resource or create and register a new one. + * + * @param dev[in, out] + * Pointer to rte_eth_dev structure. + * @param[in, out] resource + * Pointer to modify-header resource. + * @param[in, out] dev_flow + * Pointer to the dev_flow. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct mlx5_flow_dv_modify_hdr_resource *resource, + struct mlx5_flow *dev_flow, + struct rte_flow_error *error) +{ + resource->root = !dev_flow->dv.group; + return __flow_modify_hdr_resource_register(dev, resource, + &dev_flow->handle->dvh.modify_hdr, error); +} + /** * Get DV flow counter by index. 
* @@ -12507,7 +12559,7 @@ flow_dv_sample_sub_actions_release(struct rte_eth_dev *dev, act_res->rix_hrxq = 0; } if (act_res->rix_encap_decap) { - flow_dv_encap_decap_resource_release(dev, + flow_encap_decap_resource_release(dev, act_res->rix_encap_decap); act_res->rix_encap_decap = 0; } @@ -15967,13 +16019,18 @@ flow_dv_matcher_release(struct rte_eth_dev *dev, } void -flow_dv_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_encap_decap_resource *res = container_of(entry, typeof(*res), entry); - claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); +#ifdef HAVE_MLX5_HWS_SUPPORT + if (sh->config.dv_flow_en == 2) + claim_zero(mlx5dr_action_destroy(res->action)); + else +#endif + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); } @@ -15988,8 +16045,8 @@ flow_dv_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) * @return * 1 while a reference on it exists, 0 when freed. */ -static int -flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, +int +flow_encap_decap_resource_release(struct rte_eth_dev *dev, uint32_t encap_decap_idx) { struct mlx5_priv *priv = dev->data->dev_private; @@ -16029,13 +16086,18 @@ flow_dv_jump_tbl_resource_release(struct rte_eth_dev *dev, } void -flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) +flow_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_modify_hdr_resource *res = container_of(entry, typeof(*res), entry); struct mlx5_dev_ctx_shared *sh = tool_ctx; - claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); +#ifdef HAVE_MLX5_HWS_SUPPORT + if (sh->config.dv_flow_en == 2) + claim_zero(mlx5dr_action_destroy(res->action)); + else +#endif + claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); mlx5_ipool_free(sh->mdh_ipools[res->actions_num - 1], res->idx); } @@ -16404,7 +16466,7 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) if (dev_handle->dvh.rix_dest_array) flow_dv_dest_array_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_encap_decap) - flow_dv_encap_decap_resource_release(dev, + flow_encap_decap_resource_release(dev, dev_handle->dvh.rix_encap_decap); if (dev_handle->dvh.modify_hdr) flow_dv_modify_hdr_resource_release(dev, dev_handle); diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index e914f4e1ae6..f326ca0a21c 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -9,6 +9,7 @@ #include #include "mlx5.h" +#include "mlx5_common.h" #include "mlx5_defs.h" #include "mlx5_flow.h" #include "mlx5_flow_os.h" @@ -239,6 +240,22 @@ mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) return -1; } +/* Include only supported reformat actions for BWC non template API. 
*/ +static __rte_always_inline int +mlx5_bwc_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) +{ + switch (type) { + case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3: + return mlx5_multi_pattern_reformat_to_index(type); + default: + break; + } + return -1; +} + static __rte_always_inline enum mlx5dr_action_type mlx5_multi_pattern_reformat_index_to_type(uint32_t ix) { @@ -1267,11 +1284,13 @@ flow_hw_converted_mhdr_cmds_append(struct mlx5_hw_modify_header_action *mhdr, static __rte_always_inline void flow_hw_modify_field_init(struct mlx5_hw_modify_header_action *mhdr, - struct rte_flow_actions_template *at) + struct rte_flow_actions_template *at, + bool nt_mode) { memset(mhdr, 0, sizeof(*mhdr)); /* Modify header action without any commands is shared by default. */ - mhdr->shared = true; + if (!(nt_mode)) + mhdr->shared = true; mhdr->pos = at->mhdr_off; } @@ -2074,7 +2093,6 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, } else { typeof(mp_ctx->mh) *mh = &mp_ctx->mh; uint32_t idx = mh->elements_num; - mh->pattern[mh->elements_num++] = pattern; acts->mhdr->multi_pattern = 1; acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; @@ -2220,7 +2238,7 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev, uint32_t target_grp = 0; int table_type; - flow_hw_modify_field_init(&mhdr, at); + flow_hw_modify_field_init(&mhdr, at, nt_mode); if (attr->transfer) type = MLX5DR_TABLE_TYPE_FDB; else if (attr->egress) @@ -3189,6 +3207,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, flow->res_idx - mp_segment->head_index; rule_acts[pos].modify_header.data = (uint8_t *)ap->mhdr_cmd; + MLX5_ASSERT(hw_acts->mhdr->mhdr_cmds_num <= MLX5_MHDR_MAX_CMD); rte_memcpy(ap->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, sizeof(*ap->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); } @@ -12817,12 +12836,106 @@ static int flow_hw_prepare(struct rte_eth_dev *dev, /*TODO: consider if other allocation is needed for actions translate. */ return 0; } +#define FLOW_HW_SET_DV_FIELDS(flow_attr, root, flags) \ +{ \ + typeof(flow_attr) _flow_attr = (flow_attr); \ + if (_flow_attr->transfer) \ + dv_resource.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; \ + else \ + dv_resource.ft_type = _flow_attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : \ + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; \ + root = _flow_attr->group ? 0 : 1; \ + flags = mlx5_hw_act_flag[!!_flow_attr->group][get_mlx5dr_table_type(_flow_attr)]; \ +} + +static int +flow_hw_modify_hdr_resource_register + (struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct mlx5_hw_actions *hw_acts, + struct rte_flow_hw *dev_flow, + struct rte_flow_error *error) +{ + struct rte_flow_attr *attr = &table->cfg.attr.flow_attr; + struct mlx5_flow_dv_modify_hdr_resource *dv_resource_ptr = NULL; + struct mlx5_flow_dv_modify_hdr_resource dv_resource; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + int ret; + + if (hw_acts->mhdr) { + dv_resource.actions_num = hw_acts->mhdr->mhdr_cmds_num; + memcpy(dv_resource.actions, hw_acts->mhdr->mhdr_cmds, + sizeof(struct mlx5_modification_cmd) * dv_resource.actions_num); + } else { + return 0; + } + FLOW_HW_SET_DV_FIELDS(attr, dv_resource.root, dv_resource.flags); + /* Save a pointer to the pattern needed for DR layer created on actions translate. 
*/ + dv_resource.mh_dr_pattern = &table->mpctx.mh; + ret = __flow_modify_hdr_resource_register(dev, &dv_resource, + &dv_resource_ptr, error); + if (ret) + return ret; + MLX5_ASSERT(dv_resource_ptr); + dev_flow->nt2hws->modify_hdr = dv_resource_ptr; + /* keep action for the rule construction. */ + mpctx->segments[0].mhdr_action = dv_resource_ptr->action; + /* Bulk size is 1, so index is 1. */ + dev_flow->res_idx = 1; + return 0; +} + +static int +flow_hw_encap_decap_resource_register + (struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct mlx5_hw_actions *hw_acts, + struct rte_flow_hw *dev_flow, + struct rte_flow_error *error) +{ + struct rte_flow_attr *attr = &table->cfg.attr.flow_attr; + struct mlx5_flow_dv_encap_decap_resource *dv_resource_ptr = NULL; + struct mlx5_flow_dv_encap_decap_resource dv_resource; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + int ret; + bool is_root; + int ix; + + if (hw_acts->encap_decap) + dv_resource.reformat_type = hw_acts->encap_decap->action_type; + else + return 0; + ix = mlx5_bwc_multi_pattern_reformat_to_index((enum mlx5dr_action_type) + dv_resource.reformat_type); + if (ix < 0) + return ix; + typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + ix; + if (!reformat->elements_num) + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "No reformat action exist in the table."); + dv_resource.size = reformat->reformat_hdr->sz; + FLOW_HW_SET_DV_FIELDS(attr, is_root, dv_resource.flags); + MLX5_ASSERT(dv_resource.size <= MLX5_ENCAP_MAX_LEN); + memcpy(dv_resource.buf, reformat->reformat_hdr->data, dv_resource.size); + ret = __flow_encap_decap_resource_register(dev, &dv_resource, is_root, + &dv_resource_ptr, error); + if (ret) + return ret; + MLX5_ASSERT(dv_resource_ptr); + dev_flow->nt2hws->rix_encap_decap = dv_resource_ptr->idx; + /* keep action for the rule construction. */ + mpctx->segments[0].reformat_action[ix] = dv_resource_ptr->action; + /* Bulk size is 1, so index is 1. */ + dev_flow->res_idx = 1; + return 0; +} static int flow_hw_translate_flow_actions(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_action actions[], struct rte_flow_hw *flow, + struct mlx5_flow_hw_action_params *ap, struct mlx5_hw_actions *hw_acts, uint64_t item_flags, bool external, @@ -12841,12 +12954,28 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, .transfer = attr->transfer, }; struct rte_flow_action masks[MLX5_HW_MAX_ACTS]; - struct mlx5_flow_hw_action_params ap; + struct rte_flow_action_raw_encap encap_conf; + struct rte_flow_action_modify_field mh_conf[MLX5_HW_MAX_ACTS]; + memset(&masks, 0, sizeof(masks)); int i = -1; do { i++; masks[i].type = actions[i].type; + if (masks[i].type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + memset(&encap_conf, 0x00, sizeof(encap_conf)); + encap_conf.size = ((const struct rte_flow_action_raw_encap *) + (actions[i].conf))->size; + masks[i].conf = &encap_conf; + } + if (masks[i].type == RTE_FLOW_ACTION_TYPE_MODIFY_FIELD) { + const struct rte_flow_action_modify_field *conf = actions[i].conf; + memset(&mh_conf, 0xff, sizeof(mh_conf[i])); + mh_conf[i].operation = conf->operation; + mh_conf[i].dst.field = conf->dst.field; + mh_conf[i].src.field = conf->src.field; + masks[i].conf = &mh_conf[i]; + } } while (masks[i].type != RTE_FLOW_ACTION_TYPE_END); RTE_SET_USED(action_flags); /* The group in the attribute translation was done in advance. 
*/ @@ -12871,8 +13000,6 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, ret = -rte_errno; goto end; } - if (ret) - goto clean_up; grp.group_id = src_group; table->grp = &grp; table->type = table_type; @@ -12882,19 +13009,25 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev, table->ats[0].action_template = at; ret = __flow_hw_translate_actions_template(dev, &table->cfg, hw_acts, at, &table->mpctx, true, error); + if (ret) + goto end; + /* handle bulk actions register. */ + ret = flow_hw_encap_decap_resource_register(dev, table, hw_acts, flow, error); + if (ret) + goto clean_up; + ret = flow_hw_modify_hdr_resource_register(dev, table, hw_acts, flow, error); if (ret) goto clean_up; table->ats[0].acts = *hw_acts; - ret = flow_hw_actions_construct(dev, flow, &ap, + ret = flow_hw_actions_construct(dev, flow, ap, &table->ats[0], item_flags, table, actions, hw_acts->rule_acts, 0, error); if (ret) goto clean_up; - goto end; clean_up: /* Make sure that there is no garbage in the actions. */ - __flow_hw_actions_release(dev, hw_acts); + __flow_hw_action_template_destroy(dev, hw_acts); end: if (table) mlx5_free(table); @@ -13100,6 +13233,7 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, { int ret; struct mlx5_hw_actions hw_act; + struct mlx5_flow_hw_action_params ap; struct mlx5_flow_dv_matcher matcher = { .mask = { .size = sizeof(matcher.mask.buf), @@ -13166,7 +13300,7 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, goto error; /* Note: the actions should be saved in the sub-flow rule itself for reference. */ - ret = flow_hw_translate_flow_actions(dev, attr, actions, *flow, &hw_act, + ret = flow_hw_translate_flow_actions(dev, attr, actions, *flow, &ap, &hw_act, item_flags, external, error); if (ret) goto error; @@ -13187,9 +13321,19 @@ static int flow_hw_create_flow(struct rte_eth_dev *dev, if (ret) goto error; } - return 0; - + ret = 0; error: + /* + * Release memory allocated. + * Cannot use __flow_hw_actions_release(dev, &hw_act); + * since it destroys the actions as well. + */ + if (hw_act.encap_decap) + mlx5_free(hw_act.encap_decap); + if (hw_act.push_remove) + mlx5_free(hw_act.push_remove); + if (hw_act.mhdr) + mlx5_free(hw_act.mhdr); return ret; } #endif @@ -13198,6 +13342,7 @@ static void flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow) { int ret; + struct mlx5_priv *priv = dev->data->dev_private; if (!flow || !flow->nt2hws) return; @@ -13222,6 +13367,19 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow) */ if (flow->nt2hws->flow_aux) mlx5_free(flow->nt2hws->flow_aux); + + if (flow->nt2hws->rix_encap_decap) { + ret = flow_encap_decap_resource_release(dev, flow->nt2hws->rix_encap_decap); + if (ret) + DRV_LOG(ERR, "failed to release encap decap."); + } + if (flow->nt2hws->modify_hdr) { + MLX5_ASSERT(flow->nt2hws->modify_hdr->action); + ret = mlx5_hlist_unregister(priv->sh->modify_cmds, + &flow->nt2hws->modify_hdr->entry); + if (ret) + DRV_LOG(ERR, "failed to release modify action."); + } } #ifdef HAVE_MLX5_HWS_SUPPORT -- 2.21.0
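
Illustrative sketch, not part of the patch: a minimal application-side view of
the reuse behaviour described in the commit message, assuming an mlx5 port
already started in non-template mode (dv_flow_en=2). The port id, encap header
bytes and flow attributes below are placeholders, and error handling is
trimmed.

/*
 * Two non-template flows created with identical RAW_ENCAP data. With this
 * patch the mlx5 PMD registers the reformat action once in its encaps_decaps
 * hash list (keyed on the reformat type and header data) and only takes a
 * second reference for the second flow instead of creating a new DR action.
 */
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_flow.h>

static int
create_two_encap_flows(uint16_t port_id)
{
	/* Placeholder pre-built tunnel header; contents are hypothetical. */
	static uint8_t encap_hdr[] = {
		0x00, 0x11, 0x22, 0x33, 0x44, 0x55,
		0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb,
		0x08, 0x00,
	};
	struct rte_flow_action_raw_encap encap = {
		.data = encap_hdr,
		.size = sizeof(encap_hdr),
	};
	struct rte_flow_attr attr = { .egress = 1, .group = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *f1, *f2;

	/* First create: the encap/decap resource is allocated and cached. */
	f1 = rte_flow_create(port_id, &attr, pattern, actions, &err);
	/* Second create: the cached resource is matched by key and reused. */
	f2 = rte_flow_create(port_id, &attr, pattern, actions, &err);
	if (f1 == NULL || f2 == NULL)
		return -1;
	/* Destroying the flows drops the reference count; the shared action
	 * is released only when the last user is gone. */
	rte_flow_destroy(port_id, f2, &err);
	rte_flow_destroy(port_id, f1, &err);
	return 0;
}

A MODIFY_FIELD action would follow the analogous path through
__flow_modify_hdr_resource_register(), where the modify-header command set
forms the hash key and a single bulk-size-1 action backs all matching flows.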