From: Rongwei Liu <rongweil@nvidia.com>
To: <dev@dpdk.org>, <matan@nvidia.com>, <viacheslavo@nvidia.com>,
 <orika@nvidia.com>, <suanmingm@nvidia.com>, <thomas@monjalon.net>
Subject: [PATCH v3 5/6] net/mlx5: implement IPv6 routing push remove
Date: Tue, 31 Oct 2023 12:51:30 +0200
Message-ID: <20231031105131.441078-6-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
References: <20231031094244.381557-1-rongweil@nvidia.com>
 <20231031105131.441078-1-rongweil@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

Reserve a push data buffer for each job; the maximum length is set to
128 bytes for now.

Only the IPPROTO_ROUTING type is supported when translating the
rte_flow action.

Remove actions must be shared globally and only support TCP or UDP as
the next layer.
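The size cap and the routing-type check can be modeled outside the driver. The sketch below is a minimal standalone illustration, not driver code: the struct layout follows the IPv6 Segment Routing Header (RFC 8754), `PUSH_MAX_LEN` stands in for the 128-byte `MLX5_PUSH_MAX_LEN` this patch introduces, and only `IPPROTO_ROUTING` is a real system constant.

```c
#include <assert.h>
#include <netinet/in.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for MLX5_PUSH_MAX_LEN (128 bytes, per this patch). */
#define PUSH_MAX_LEN 128

/* Minimal IPv6 Segment Routing Header base layout (RFC 8754), sizing only. */
struct srh {
	uint8_t next_hdr;     /* e.g. IPPROTO_TCP or IPPROTO_UDP */
	uint8_t hdr_ext_len;  /* in 8-byte units, excluding the first 8 bytes */
	uint8_t type;         /* 4 = Segment Routing */
	uint8_t segments_left;
	uint8_t last_entry;
	uint8_t flags;
	uint16_t tag;
	/* followed by n 128-bit segment addresses */
};

/* Total SRH length in bytes for n segments: 8-byte base + 16 bytes each. */
static size_t
srh_total_len(unsigned int nsegs)
{
	return sizeof(struct srh) + nsegs * 16;
}

/* Mirrors the validation done in this patch: non-empty data, routing
 * type only, and a hard 128-byte cap on the pushed header. */
static int
push_conf_ok(uint8_t type, const uint8_t *data, size_t size)
{
	if (!data || !size)
		return 0;
	if (type != IPPROTO_ROUTING || size > PUSH_MAX_LEN)
		return 0;
	return 1;
}
```

An SRH with up to 7 segments (8 + 7 * 16 = 120 bytes) fits inside the 128-byte budget; an 8-segment header does not.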

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini      |   2 +
 doc/guides/nics/mlx5.rst               |  11 +-
 doc/guides/rel_notes/release_23_11.rst |   2 +
 drivers/net/mlx5/mlx5.h                |   1 +
 drivers/net/mlx5/mlx5_flow.h           |  21 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 282 ++++++++++++++++++++++++-
 6 files changed, 309 insertions(+), 10 deletions(-)
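One design choice worth noting before the diff: in `mlx5_create_ipv6_ext_reformat()` below, a fully masked push action means every flow in the table pushes the same header, so a single shared DR action is created; otherwise a bulk of per-flow actions sized by the table's flow count is allocated. A standalone sketch of that decision follows; `log2_u32` is a local stand-in for `rte_log2_u32` (assumed to agree for the power-of-two table sizes used here), and the function names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Local floor(log2(v)) for v > 0; stands in for rte_log2_u32, which
 * this sketch assumes matches for power-of-two inputs. */
static uint32_t
log2_u32(uint32_t v)
{
	uint32_t r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Decide between a shared reformat action and a bulk (per-flow) one,
 * mirroring the mask test in mlx5_create_ipv6_ext_reformat(): a
 * present mask means the header is constant across flows, so one
 * shared action suffices and no bulk is needed. */
static uint32_t
push_action_bulk(int mask_present, uint32_t nb_flows, int *shared)
{
	if (mask_present) {
		*shared = 1;
		return 0; /* bulk size is unused for shared actions */
	}
	*shared = 0;
	return log2_u32(nb_flows);
}
```

The remove action takes the shared path unconditionally, which is why the validation below rejects unmasked remove actions.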

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0ed9a6aefc..0739fe9d63 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -108,6 +108,8 @@ flag                 = Y
 inc_tcp_ack          = Y
 inc_tcp_seq          = Y
 indirect_list        = Y
+ipv6_ext_push        = Y
+ipv6_ext_remove      = Y
 jump                 = Y
 mark                 = Y
 meter                = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index be5054e68a..955dedf3db 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
 - Matching on GTP extension header with raw encap/decap action.
 - Matching on Geneve TLV option header with raw encap/decap action.
 - Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
 - Matching on InfiniBand BTH.
+- Modify flex item field.
 - Modify IPv4/IPv6 ECN field.
 - RSS support in sample action.
 - E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.
 
 Limitations
 -----------
@@ -759,6 +761,13 @@ Limitations
   to the representor of the source virtual port (SF/VF), while if it is disabled, the
   traffic will be routed based on the steering rules in the ingress domain.
 
+- IPv6 routing extension push or remove:
+
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - Supported only in a non-zero group (no limit on the transfer domain if ``fdb_def_rule_en`` = 1, which is the default).
+  - Only TCP or UDP is supported as the next layer.
+  - The IPv6 routing header must be the only extension present.
+  - Not supported on a guest port.
 
 Statistics
 ----------
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 93999893bd..5ef309ea59 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -157,6 +157,8 @@ New Features
   * Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action.
   * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item.
   * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action.
 
 * **Updated Solarflare net driver.**
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f13a56ee9e..277bbbf407 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
 	};
 	void *user_data; /* Job user data. */
 	uint8_t *encap_data; /* Encap data. */
+	uint8_t *push_data; /* IPv6 routing push data. */
 	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43608e15d2..c7be1f3553 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -363,6 +363,8 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
 #define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1269,6 +1271,8 @@ typedef int
 			    const struct rte_flow_action *,
 			    struct mlx5dr_rule_action *);
 
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data {
 		struct {
 			cnt_id_t id;
 		} shared_counter;
+		struct {
+			/* IPv6 extension push data len. */
+			uint16_t len;
+		} ipv6_ext;
 		struct {
 			uint32_t id;
 			uint32_t conf_masked:1;
@@ -1359,6 +1367,7 @@ struct rte_flow_actions_template {
 	uint16_t *src_off; /* RTE action displacement from app. template */
 	uint16_t reformat_off; /* Offset of DR reformat action. */
 	uint16_t mhdr_off; /* Offset of DR modify header action. */
+	uint16_t recom_off;  /* Offset of DR IPv6 routing push remove action. */
 	uint32_t refcnt; /* Reference counter. */
 	uint8_t flex_item; /* flex item index. */
 };
@@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action {
 	uint8_t data[]; /* Action data. */
 };
 
-#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+/* Push remove action struct. */
+struct mlx5_hw_push_remove_action {
+	struct mlx5dr_action *action; /* Action object. */
+	/* Is push_remove action shared across flows in table. */
+	uint8_t shared;
+	size_t data_size; /* Action metadata size. */
+	uint8_t data[]; /* Action data. */
+};
 
 /* Modify field action struct. */
 struct mlx5_hw_modify_header_action {
@@ -1415,6 +1431,9 @@ struct mlx5_hw_actions {
 	/* Encap/Decap action. */
 	struct mlx5_hw_encap_decap_action *encap_decap;
 	uint16_t encap_decap_pos; /* Encap/Decap action position. */
+	/* Push/remove action. */
+	struct mlx5_hw_push_remove_action *push_remove;
+	uint16_t push_remove_pos; /* Push/remove action position. */
 	uint32_t mark:1; /* Indicate the mark action. */
 	cnt_id_t cnt_id; /* Counter id. */
 	uint32_t mtr_id; /* Meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 977751394e..592d436099 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
 		mlx5_free(acts->encap_decap);
 		acts->encap_decap = NULL;
 	}
+	if (acts->push_remove) {
+		if (acts->push_remove->action)
+			mlx5dr_action_destroy(acts->push_remove->action);
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
 	if (acts->mhdr) {
 		flow_hw_template_destroy_mhdr_action(acts->mhdr);
 		mlx5_free(acts->mhdr);
@@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+/**
+ * Append dynamic push action to the dynamic action list.
+ *
+ * @param[in] dev
+ *   Pointer to the port.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] len
+ *   Length of the data to be updated.
+ *
+ * @return
+ *    Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+			       struct mlx5_hw_actions *acts,
+			       enum rte_flow_action_type type,
+			       uint16_t action_src,
+			       uint16_t action_dst,
+			       uint16_t len)
+{
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return NULL;
+	act_data->ipv6_ext.len = len;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return act_data;
+}
+
 static __rte_always_inline int
 __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     struct mlx5_hw_actions *acts,
@@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+			      const struct mlx5_flow_template_table_cfg *cfg,
+			      struct mlx5_hw_actions *acts,
+			      struct rte_flow_actions_template *at,
+			      uint8_t *push_data, uint8_t *push_data_m,
+			      size_t push_size, uint16_t recom_src,
+			      enum mlx5dr_action_type recom_type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5dr_action_reformat_header hdr = {0};
+	uint32_t flag, bulk = 0;
+
+	flag = mlx5_hw_act_flag[!!attr->group][type];
+	acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+					sizeof(*acts->push_remove) + push_size,
+					0, SOCKET_ID_ANY);
+	if (!acts->push_remove)
+		return -ENOMEM;
+
+	switch (recom_type) {
+	case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+		if (!push_data || !push_size)
+			goto err1;
+		if (!push_data_m) {
+			bulk = rte_log2_u32(table_attr->nb_flows);
+		} else {
+			flag |= MLX5DR_ACTION_FLAG_SHARED;
+			acts->push_remove->shared = 1;
+		}
+		acts->push_remove->data_size = push_size;
+		memcpy(acts->push_remove->data, push_data, push_size);
+		hdr.data = push_data;
+		hdr.sz = push_size;
+		break;
+	case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+		flag |= MLX5DR_ACTION_FLAG_SHARED;
+		acts->push_remove->shared = 1;
+		break;
+	default:
+		break;
+	}
+
+	acts->push_remove->action =
+		mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+				recom_type, &hdr, bulk, flag);
+	if (!acts->push_remove->action)
+		goto err1;
+	acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+	acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+	acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+	acts->push_remove_pos = at->recom_off;
+	if (!acts->push_remove->shared) {
+		act_data = __flow_hw_act_data_push_append(dev, acts,
+				RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+				recom_src, at->recom_off, push_size);
+		if (!act_data)
+			goto err;
+	}
+	return 0;
+err:
+	if (acts->push_remove->action)
+		mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+	if (acts->push_remove) {
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate rte_flow actions to DR action.
  *
@@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
 	struct rte_flow_action *masks = at->masks;
 	enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data;
 	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
-	uint16_t reformat_src = 0;
+	uint16_t reformat_src = 0, recom_src = 0;
 	uint8_t *encap_data = NULL, *encap_data_m = NULL;
-	size_t data_size = 0;
+	uint8_t *push_data = NULL, *push_data_m = NULL;
+	size_t data_size = 0, push_size = 0;
 	struct mlx5_hw_modify_header_action mhdr = { 0 };
 	bool actions_end = false;
 	uint32_t type;
 	bool reformat_used = false;
+	bool recom_used = false;
 	unsigned int of_vlan_offset;
 	uint16_t jump_pos;
 	uint32_t ct_idx;
@@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			reformat_used = true;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			MLX5_ASSERT(!recom_used && !recom_type);
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)masks->conf;
+			if (ipv6_ext_data)
+				push_data_m = ipv6_ext_data->data;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)actions->conf;
+			if (ipv6_ext_data) {
+				push_data = ipv6_ext_data->data;
+				push_size = ipv6_ext_data->size;
+			}
+			recom_src = src_pos;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			break;
 		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 			flow_hw_translate_group(dev, cfg, attr->group,
 						&target_grp, error);
@@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		if (ret)
 			goto err;
 	}
+	if (recom_used) {
+		MLX5_ASSERT(at->recom_off != UINT16_MAX);
+		ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data,
+						    push_data_m, push_size, recom_src,
+						    recom_type);
+		if (ret)
+			goto err;
+	}
 	return 0;
 err:
 	err = rte_errno;
@@ -2719,11 +2882,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
 	const struct rte_flow_action *action;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_push;
 	const struct rte_flow_item *enc_item = NULL;
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
 	uint8_t *buf = job->encap_data;
+	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
 			.ingress = 1,
 	};
@@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(raw_encap_data->size ==
 				    act_data->encap.len);
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ipv6_push =
+				(const struct rte_flow_action_ipv6_ext_push *)action->conf;
+			rte_memcpy((void *)push_buf, ipv6_push->data,
+				   act_data->ipv6_ext.len);
+			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
+			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
@@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				job->flow->res_idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
 	}
+	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
+				job->flow->res_idx - 1;
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
 		job->flow->cnt_id = hw_acts->cnt_id;
 	return 0;
@@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Validate ipv6_ext_push action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] action
+ *   Pointer to the indirect action.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
+				      const struct rte_flow_action *action,
+				      struct rte_flow_error *error)
+{
+	const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf;
+
+	if (!raw_push_data || !raw_push_data->size || !raw_push_data->data)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "invalid ipv6_ext_push data");
+	if (raw_push_data->type != IPPROTO_ROUTING ||
+	    raw_push_data->size > MLX5_PUSH_MAX_LEN)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported ipv6_ext_push type or length");
+	return 0;
+}
+
 /**
  * Validate raw_encap action.
  *
@@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 #endif
 	uint16_t i;
 	int ret;
+	const struct rte_flow_action_ipv6_ext_remove *remove_data;
 
 	/* FDB actions are only valid to proxy port. */
 	if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
@@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_DECAP;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			remove_data = action->conf;
+			/* Remove action must be shared. */
+			if (remove_data->type != IPPROTO_ROUTING || !mask) {
+				DRV_LOG(ERR, "Only supports shared IPv6 routing remove");
+				return -EINVAL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE;
+			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_METER;
@@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN,
 	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN,
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 };
 
 static inline void
@@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 /**
  * Create DR action template based on a provided sequence of flow actions.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
  * @param[in] at
  *   Pointer to flow actions template to be updated.
  *
@@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
  *   NULL otherwise.
  */
 static struct mlx5dr_action_template *
-flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
+flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
+				   struct rte_flow_actions_template *at)
 {
 	struct mlx5dr_action_template *dr_template;
 	enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
@@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 	enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 	uint16_t reformat_off = UINT16_MAX;
 	uint16_t mhdr_off = UINT16_MAX;
+	uint16_t recom_off = UINT16_MAX;
 	uint16_t cnt_off = UINT16_MAX;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	int ret;
+
 	for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
 		const struct rte_flow_action_raw_encap *raw_encap_data;
 		size_t data_size;
@@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			reformat_off = curr_off++;
 			reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data = at->actions[i].conf;
 			data_size = raw_encap_data->size;
@@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 		at->reformat_off = reformat_off;
 		action_types[reformat_off] = reformat_act_type;
 	}
+	if (recom_off != UINT16_MAX) {
+		at->recom_off = recom_off;
+		action_types[recom_off] = recom_type;
+	}
 	dr_template = mlx5dr_action_template_create(action_types);
-	if (dr_template)
+	if (dr_template) {
 		at->dr_actions_num = curr_off;
-	else
+	} else {
 		DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
+		return NULL;
+	}
+	/* Create srh flex parser for remove anchor. */
+	if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT ||
+	     recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) &&
+	    mlx5_alloc_srh_flex_parser(dev)) {
+		DRV_LOG(ERR, "Failed to create srv6 flex parser");
+		claim_zero(mlx5dr_action_template_destroy(dr_template));
+		return NULL;
+	}
 	return dr_template;
 err_actions_num:
 	DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
@@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 			break;
 		}
 	}
-	at->tmpl = flow_hw_dr_actions_template_create(at);
+	at->tmpl = flow_hw_dr_actions_template_create(dev, at);
 	if (!at->tmpl)
 		goto error;
 	at->action_flags = action_flags;
@@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 				 struct rte_flow_actions_template *template,
 				 struct rte_flow_error *error __rte_unused)
 {
+	uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE |
+			MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Action template %p is still in use.",
 			(void *)template);
@@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 				   NULL,
 				   "action template in using");
 	}
+	if (template->action_flags & flag)
+		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	if (template->tmpl)
@@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			    sizeof(struct mlx5_hw_q_job) +
 			    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+			    sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
 			    sizeof(struct mlx5_modification_cmd) *
 			    MLX5_MHDR_MAX_CMD +
 			    sizeof(struct rte_flow_item) *
@@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	for (i = 0; i < nb_q_updated; i++) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		uint8_t *encap = NULL;
+		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;
@@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			   &job[_queue_attr[i]->size];
 		encap = (uint8_t *)
 			 &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		items = (struct rte_flow_item *)
+		push = (uint8_t *)
 			 &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+		items = (struct rte_flow_item *)
+			 &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
 		upd_flow = (struct rte_flow_hw *)
 			&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
 			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
 			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
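The hunk above carves the new per-job push buffers out of the same single allocation that already holds the jobs, modify-header commands, and encap buffers. A simplified standalone model of that carving pattern is below; the sizes and struct are illustrative stand-ins (only the 128-byte push length comes from this patch), and `jobs_alloc` is a hypothetical name.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sizes; the driver uses MLX5_ENCAP_MAX_LEN and the new
 * 128-byte MLX5_PUSH_MAX_LEN added by this patch. */
#define ENCAP_MAX_LEN 132
#define PUSH_MAX_LEN  128

struct job {
	uint8_t *encap_data;
	uint8_t *push_data;
};

/* Carve one contiguous allocation into per-job encap and push buffers,
 * the same layout trick flow_hw_configure() uses: all job structs
 * first, then all encap buffers, then all push buffers. Returns the
 * base pointer the caller must free(), or NULL on failure. */
static void *
jobs_alloc(struct job **jobs_out, unsigned int n)
{
	size_t sz = n * (sizeof(struct job) + ENCAP_MAX_LEN + PUSH_MAX_LEN);
	uint8_t *base = calloc(1, sz);
	struct job *job;
	uint8_t *encap, *push;
	unsigned int j;

	if (!base)
		return NULL;
	job = (struct job *)base;
	encap = (uint8_t *)&job[n];
	push = &encap[n * ENCAP_MAX_LEN];
	for (j = 0; j < n; j++) {
		job[j].encap_data = &encap[j * ENCAP_MAX_LEN];
		job[j].push_data = &push[j * PUSH_MAX_LEN];
	}
	*jobs_out = job;
	return base;
}
```

One allocation per queue keeps the job state cache-friendly and means a single `mlx5_free()` releases everything, which is why the patch only has to grow `mem_size` rather than add a new allocation site.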
-- 
2.27.0