DPDK patches and discussions
* [PATCH 0/8] support NAT64 action
@ 2023-12-27  9:07 Bing Zhao
  2023-12-27  9:07 ` [PATCH 1/8] ethdev: introduce " Bing Zhao
                   ` (11 more replies)
  0 siblings, 12 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

This patchset introduces the NAT64 action support for rte_flow.

Bing Zhao (7):
  ethdev: introduce NAT64 action
  app/testpmd: add support for NAT64 in the command line
  net/mlx5: fetch the available registers for NAT64
  common/mlx5: add new modify field definitions
  net/mlx5: create NAT64 actions during configuration
  net/mlx5: add NAT64 action support in rule creation
  net/mlx5: validate the actions combination with NAT64

Erez Shitrit (1):
  net/mlx5/hws: support NAT64 action

 app/test-pmd/cmdline_flow.c                 |  23 ++
 doc/guides/nics/features/default.ini        |   1 +
 doc/guides/nics/features/mlx5.ini           |   1 +
 doc/guides/nics/mlx5.rst                    |   9 +-
 doc/guides/prog_guide/rte_flow.rst          |   8 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   4 +
 drivers/common/mlx5/mlx5_prm.h              |   5 +
 drivers/net/mlx5/hws/mlx5dr.h               |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c        | 437 +++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h        |  35 ++
 drivers/net/mlx5/hws/mlx5dr_debug.c         |   1 +
 drivers/net/mlx5/mlx5.c                     |   9 +
 drivers/net/mlx5/mlx5.h                     |   8 +
 drivers/net/mlx5/mlx5_flow.h                |  12 +
 drivers/net/mlx5/mlx5_flow_dv.c             |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c             |  91 ++++
 lib/ethdev/rte_flow.c                       |   1 +
 lib/ethdev/rte_flow.h                       |  27 ++
 18 files changed, 702 insertions(+), 3 deletions(-)

-- 
2.25.1



* [PATCH 1/8] ethdev: introduce NAT64 action
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 2/8] app/testpmd: add support for NAT64 in the command line Bing Zhao
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

Different technologies are used to support the communication between
IPv4 and IPv6 nodes in a network, such as dual stacks, tunneling and
NAT64. For some IPv4-only clients, it is hard to deploy new software
and/or hardware to support the IPv6 protocol.

NAT64 is one choice, and it also reduces unnecessary traffic overhead
in the network. The NAT64 gateways take the responsibility of
translating the packet headers between the IPv6 clouds and the
IPv4-only clouds.

This commit introduces the NAT64 flow action to offload the software
involvement to the hardware.

This action supports the offloading of the IP header translation. The
following fields should be reset correctly in the translation:
  - Version
  - Traffic Class / TOS
  - Flow Label (0 in v4)
  - Payload Length / Total length
  - Next Header
  - Hop Limit / TTL

The PMD needs to support the basic conversion of the fields above,
and the well-known prefix is used when translating an IPv4 address to
an IPv6 address. Additional modify field actions can be used after
NAT64 to support other modes with a different prefix and offset.

The ICMP* and transport layer protocols are out of the scope of the
NAT64 rte_flow action.

Reference links:
  - https://datatracker.ietf.org/doc/html/rfc6146
  - https://datatracker.ietf.org/doc/html/rfc6052
  - https://datatracker.ietf.org/doc/html/rfc6145
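
For illustration only, here is a minimal sketch of how an application
could attach the new action to a flow rule. The port ID, pattern and
jump group are placeholders, and whether a PMD accepts the action via
the synchronous API or only via the template/async API is driver
specific:

  #include <rte_flow.h>

  /* Hedged sketch: offload the 6-to-4 header translation on ingress. */
  static struct rte_flow *
  create_nat64_rule(uint16_t port_id, struct rte_flow_error *error)
  {
          static const struct rte_flow_action_nat64 nat64_conf = {
                  .type = RTE_FLOW_NAT64_6TO4,
          };
          static const struct rte_flow_action_jump jump = { .group = 2 };
          const struct rte_flow_attr attr = { .ingress = 1 };
          const struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          const struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
                  { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }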

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/default.ini |  1 +
 doc/guides/prog_guide/rte_flow.rst   |  8 ++++++++
 lib/ethdev/rte_flow.c                |  1 +
 lib/ethdev/rte_flow.h                | 27 +++++++++++++++++++++++++++
 4 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 806cb033ff..f8a47a055a 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -170,6 +170,7 @@ mark                 =
 meter                =
 meter_mark           =
 modify_field         =
+nat64                =
 nvgre_decap          =
 nvgre_encap          =
 of_copy_ttl_in       =
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 627b845bfb..f87628e9dc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3506,6 +3506,14 @@ The packets will be received by the kernel driver sharing the same device
 as the DPDK port on which this action is configured.
 
 
+Action: ``NAT64``
+^^^^^^^^^^^^^^^^^
+
+This action does the header translation between IPv4 and IPv6. Besides
+converting the IP addresses, other fields in the IP header are handled as
+well. The ``type`` field should be provided as defined in the
+``rte_flow_action_nat64`` when creating the action.
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 549e329558..502df3bfd1 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -268,6 +268,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 		       sizeof(struct rte_flow_action_indirect_list)),
 	MK_FLOW_ACTION(PROG,
 		       sizeof(struct rte_flow_action_prog)),
+	MK_FLOW_ACTION(NAT64, sizeof(struct rte_flow_action_nat64)),
 };
 
 int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index affdc8121b..da2afaef83 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3022,6 +3022,13 @@ enum rte_flow_action_type {
 	 * @see struct rte_flow_action_prog.
 	 */
 	RTE_FLOW_ACTION_TYPE_PROG,
+
+	/**
+	 * Support the NAT64 translation.
+	 *
+	 * @see struct rte_flow_action_nat64
+	 */
+	RTE_FLOW_ACTION_TYPE_NAT64,
 };
 
 /**
@@ -4150,6 +4157,26 @@ rte_flow_dynf_metadata_set(struct rte_mbuf *m, uint32_t v)
 	*RTE_FLOW_DYNF_METADATA(m) = v;
 }
 
+/**
+ * NAT64 translation type for IP headers.
+ */
+enum rte_flow_nat64_type {
+	RTE_FLOW_NAT64_6TO4 = 0, /**< IPv6 to IPv4 headers translation. */
+	RTE_FLOW_NAT64_4TO6 = 1, /**< IPv4 to IPv6 headers translation. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_NAT64
+ *
+ * Specify the NAT64 translation type.
+ */
+struct rte_flow_action_nat64 {
+	enum rte_flow_nat64_type type;
+};
+
 /**
  * Definition of a single action.
  *
-- 
2.25.1



* [PATCH 2/8] app/testpmd: add support for NAT64 in the command line
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
  2023-12-27  9:07 ` [PATCH 1/8] ethdev: introduce " Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 3/8] net/mlx5: fetch the available registers for NAT64 Bing Zhao
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

The type of the NAT64 action will be parsed.

Usage example with template API:
  ...
  flow actions_template 0 create ingress actions_template_id 1 \
    template count / nat64 / jump / end mask count / nat64 / \
    jump / end
  flow template_table 0 create group 1 priority 0 ingress table_id \
    0x1 rules_number 8 pattern_template 0 actions_template 1
  flow queue 0 create 2 template_table 0x1 pattern_template 0 \
    actions_template 0 postpone no pattern eth / end actions count / \
    nat64 type 1 / jump group 2 / end
   ...

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 23 +++++++++++++++++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 ++++
 2 files changed, 27 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index ce71818705..6fb252d3d5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -728,6 +728,8 @@ enum index {
 	ACTION_IPV6_EXT_PUSH,
 	ACTION_IPV6_EXT_PUSH_INDEX,
 	ACTION_IPV6_EXT_PUSH_INDEX_VALUE,
+	ACTION_NAT64,
+	ACTION_NAT64_MODE,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -2193,6 +2195,7 @@ static const enum index next_action[] = {
 	ACTION_QUOTA_QU,
 	ACTION_IPV6_EXT_REMOVE,
 	ACTION_IPV6_EXT_PUSH,
+	ACTION_NAT64,
 	ZERO,
 };
 
@@ -2534,6 +2537,12 @@ static const enum index action_represented_port[] = {
 	ZERO,
 };
 
+static const enum index action_nat64[] = {
+	ACTION_NAT64_MODE,
+	ACTION_NEXT,
+	ZERO,
+};
+
 static int parse_set_raw_encap_decap(struct context *, const struct token *,
 				     const char *, unsigned int,
 				     void *, unsigned int);
@@ -7022,6 +7031,20 @@ static const struct token token_list[] = {
 		.call = parse_vc_action_ipv6_ext_push_index,
 		.comp = comp_set_ipv6_ext_index,
 	},
+	[ACTION_NAT64] = {
+		.name = "nat64",
+		.help = "NAT64 IP headers translation",
+		.priv = PRIV_ACTION(NAT64, sizeof(struct rte_flow_action_nat64)),
+		.next = NEXT(action_nat64),
+		.call = parse_vc,
+	},
+	[ACTION_NAT64_MODE] = {
+		.name = "type",
+		.help = "NAT64 translation type",
+		.next = NEXT(action_nat64, NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_nat64, type)),
+		.call = parse_vc_conf,
+	},
 	/* Top level command. */
 	[SET] = {
 		.name = "set",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 447e28e694..01044043d0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4151,6 +4151,10 @@ This section lists supported actions and their attributes, if any.
   - ``src_ptr``: pointer to source immediate value.
   - ``width``: number of bits to copy.
 
+- ``nat64``: NAT64 IP headers translation
+
+  - ``type {unsigned}``: NAT64 translation type
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.25.1



* [PATCH 3/8] net/mlx5: fetch the available registers for NAT64
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
  2023-12-27  9:07 ` [PATCH 1/8] ethdev: introduce " Bing Zhao
  2023-12-27  9:07 ` [PATCH 2/8] app/testpmd: add support for NAT64 in the command line Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 4/8] common/mlx5: add new modify field definitions Bing Zhao
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

REG_C_6 is used as the first register and, since it is reserved
internally by default, there is no impact.

The remaining 2 registers are fetched from the available TAGs array
from right to left. They are not masked out of the array because not
all rules will use the NAT64 action.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 9 +++++++++
 drivers/net/mlx5/mlx5.h | 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3a182de248..6f7b2aaa77 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1643,6 +1643,15 @@ mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
 		if (!!((1 << i) & masks))
 			reg->hw_avl_tags[j++] = mlx5_regc_value(i);
 	}
+	/*
+	 * Set the registers for NAT64 usage internally. REG_C_6 is always used.
+	 * The other 2 registers will be fetched from right to left, at least 2
+	 * tag registers should be available.
+	 */
+	MLX5_ASSERT(j >= (MLX5_FLOW_NAT64_REGS_MAX - 1));
+	reg->nat64_regs[0] = REG_C_6;
+	reg->nat64_regs[1] = reg->hw_avl_tags[j - 2];
+	reg->nat64_regs[2] = reg->hw_avl_tags[j - 1];
 }
 
 static void
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 263ebead7f..b73ab78870 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1407,10 +1407,12 @@ struct mlx5_hws_cnt_svc_mng {
 };
 
 #define MLX5_FLOW_HW_TAGS_MAX 12
+#define MLX5_FLOW_NAT64_REGS_MAX 3
 
 struct mlx5_dev_registers {
 	enum modify_reg aso_reg;
 	enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
+	enum modify_reg nat64_regs[MLX5_FLOW_NAT64_REGS_MAX];
 };
 
 #if defined(HAVE_MLX5DV_DR) && \
-- 
2.25.1



* [PATCH 4/8] common/mlx5: add new modify field definitions
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (2 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 3/8] net/mlx5: fetch the available registers for NAT64 Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 5/8] net/mlx5/hws: support NAT64 action Bing Zhao
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

This commit adds the TCP data offset, IPv4 total length, IPv4 IHL and
IPv6 payload length fields to the modify field operation.

It also redefines the outer protocol (next header) field for both IPv4
and IPv6.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 9e22dce6da..2f009f81ea 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -839,6 +839,7 @@ enum mlx5_modification_field {
 	MLX5_MODI_IN_MPLS_LABEL_2,
 	MLX5_MODI_IN_MPLS_LABEL_3,
 	MLX5_MODI_IN_MPLS_LABEL_4,
+	MLX5_MODI_OUT_IP_PROTOCOL = 0x4A,
 	MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A,
 	MLX5_MODI_META_REG_C_8 = 0x8F,
 	MLX5_MODI_META_REG_C_9 = 0x90,
@@ -848,6 +849,10 @@ enum mlx5_modification_field {
 	MLX5_MODI_META_REG_C_13 = 0x94,
 	MLX5_MODI_META_REG_C_14 = 0x95,
 	MLX5_MODI_META_REG_C_15 = 0x96,
+	MLX5_MODI_OUT_IPV4_TOTAL_LEN = 0x11D,
+	MLX5_MODI_OUT_IPV6_PAYLOAD_LEN = 0x11E,
+	MLX5_MODI_OUT_IPV4_IHL = 0x11F,
+	MLX5_MODI_OUT_TCP_DATA_OFFSET = 0x120,
 	MLX5_MODI_INVALID = INT_MAX,
 };
 
-- 
2.25.1



* [PATCH 5/8] net/mlx5/hws: support NAT64 action
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (3 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 4/8] common/mlx5: add new modify field definitions Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 6/8] net/mlx5: create NAT64 actions during configuration Bing Zhao
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland
  Cc: Erez Shitrit

From: Erez Shitrit <erezsh@nvidia.com>

Add support for the new action mlx5dr_action_create_nat64.
The new action allows modifying IP packets from one version to the
other, IPv6 to IPv4 and vice versa.
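
Below is a hedged sketch (not part of this patch) of how a caller
inside the mlx5 PMD might create a shared 6-to-4 action for the NIC
Rx domain. The context pointer and the register choice are
illustrative only; in the PMD the registers are taken from the free
TAG registers selected at configuration time:

  /* Assumes the mlx5dr and mlx5_prm definitions are in scope. */
  static struct mlx5dr_action *
  create_nat64_6to4_rx(struct mlx5dr_context *ctx)
  {
          /* 3 registers: control data plus src/dst address backup. */
          uint8_t regs[3] = {
                  MLX5_MODI_META_REG_C_6,  /* packet length / protocol / TTL */
                  MLX5_MODI_META_REG_C_10, /* original source address backup */
                  MLX5_MODI_META_REG_C_11, /* original destination address backup */
          };
          struct mlx5dr_action_nat64_attr attr = {
                  .num_of_registers = 3,
                  .registers = regs,
                  .flags = (enum mlx5dr_action_nat64_flags)
                           (MLX5DR_ACTION_NAT64_V6_TO_V4 |
                            MLX5DR_ACTION_NAT64_BACKUP_ADDR),
          };

          /* Must be created as SHARED and used on non-root tables only. */
          return mlx5dr_action_create_nat64(ctx, &attr,
                                            MLX5DR_ACTION_FLAG_HWS_RX |
                                            MLX5DR_ACTION_FLAG_SHARED);
  }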

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h        |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c | 437 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |  35 +++
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   1 +
 4 files changed, 501 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d88f73ab57..44fff5db25 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -51,6 +51,7 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
 	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_NAT64,
 	MLX5DR_ACTION_TYP_MAX,
 };
 
@@ -796,6 +797,34 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 				       uint32_t log_bulk_size,
 				       uint32_t flags);
 
+enum mlx5dr_action_nat64_flags {
+	MLX5DR_ACTION_NAT64_V4_TO_V6 = 1 << 0,
+	MLX5DR_ACTION_NAT64_V6_TO_V4 = 1 << 1,
+	/* Indicates if to backup ipv4 addresses in last two registers */
+	MLX5DR_ACTION_NAT64_BACKUP_ADDR = 1 << 2,
+};
+
+struct mlx5dr_action_nat64_attr {
+	uint8_t num_of_registers;
+	uint8_t *registers;
+	enum mlx5dr_action_nat64_flags flags;
+};
+
+/* Create direct rule nat64 action.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] attr
+ *	The relevant attributes of the NAT64 action.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..4193d8e767 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -31,6 +31,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -52,6 +53,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -75,6 +77,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -246,6 +249,311 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action,
 		mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB);
 }
 
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_state(struct mlx5dr_context *ctx,
+				      struct mlx5dr_action_nat64_attr *attr,
+				      uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* | 8 bit - 8 bit     - 16 bit     |
+	 * | ttl   - protocol  - packet-len |
+	 */
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);/* 16 bits in the lsb */
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* set sip and dip to 0, in order to have new csum */
+	if (is_v4_to_v6) {
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_SIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_DIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = (action_ptr - (uint8_t *) modify_action_data);
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create copy for NAT64: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_repalce_state(struct mlx5dr_context *ctx,
+					 struct mlx5dr_action_nat64_attr *attr,
+					 uint32_t flags)
+{
+	uint32_t address_prefix[MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE] = {0};
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	static struct mlx5dr_action *action;
+	uint8_t header_size_in_dw;
+	uint8_t *action_ptr;
+	uint32_t eth_type;
+	bool is_v4_to_v6;
+	uint32_t ip_ver;
+	int i;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		uint32_t nat64_well_known_pref[] = {0x0,
+						    0x9bff6400, 0x0, 0x0, 0x0,
+						    0x9bff6400, 0x0, 0x0, 0x0};
+
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV6_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV6_VER;
+		eth_type = RTE_ETHER_TYPE_IPV6;
+		memcpy(address_prefix, nat64_well_known_pref,
+		       MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE * sizeof(uint32_t));
+	} else {
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV4_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV4_VER;
+		eth_type = RTE_ETHER_TYPE_IPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_ETHERTYPE);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	MLX5_SET(set_action_in, action_ptr, data, eth_type);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* push empty header with ipv6 as version */
+	MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_INSERT);
+	MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument, ip_ver);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	for (i = 0; i < header_size_in_dw - 1; i++) {
+		MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+				MLX5_MODIFICATION_TYPE_INSERT);
+		MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+				MLX5_HEADER_ANCHOR_IPV6_IPV4);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument,
+			 htobe32(address_prefix[i]));
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Remove orig src/dst addr (8 bytes, 4 words) */
+	MLX5_SET(stc_ste_param_remove, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_REMOVE);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_start_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_end_anchor,
+		 MLX5_HEADER_ANCHOR_TCP_UDP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *) modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_back_state(struct mlx5dr_context *ctx,
+					   struct mlx5dr_action_nat64_attr *attr,
+					   uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint32_t packet_len_add;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		/* 2's complement of 20, to get -20 in the add operation */
+		packet_len_add = MLX5DR_ACTION_NAT64_DEC_20;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		/* ipv4 len is including 20 bytes of the header, so add 20 over ipv6 len */
+		packet_len_add = MLX5DR_ACTION_NAT64_ADD_20;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 32);
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* if required Copy original addresses */
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* take/add off 20 bytes ipv4/6 from/to the total size */
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_ADD);
+	MLX5_SET(set_action_in, action_ptr, field, packet_len_field);
+	MLX5_SET(set_action_in, action_ptr, data, packet_len_add);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *) modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
 static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions)
 {
 	DR_LOG(ERR, "Invalid action_type sequence");
@@ -2526,6 +2834,94 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+static bool
+mlx5dr_action_nat64_validate_param(struct mlx5dr_action_nat64_attr *attr,
+				   uint32_t flags)
+{
+	if (mlx5dr_action_is_root_flags(flags)) {
+		DR_LOG(ERR, "Nat64 action not supported for root");
+		rte_errno = ENOTSUP;
+		return false;
+	}
+
+	if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) {
+		DR_LOG(ERR, "Nat64 action must be with SHARED flag");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->num_of_registers > MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 action doesn't support more than %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR &&
+	    attr->num_of_registers != MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 backup addr requires %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (!(attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6 ||
+	      attr->flags & MLX5DR_ACTION_NAT64_V6_TO_V4)) {
+		DR_LOG(ERR, "Nat64 action requires at least one mode");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	return true;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags)
+{
+	struct mlx5dr_action *action;
+
+	if (!mlx5dr_action_nat64_validate_param(attr, flags))
+		return NULL;
+
+	action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_NAT64);
+	if (!action)
+		return NULL;
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY] =
+		mlx5dr_action_create_nat64_copy_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]) {
+		DR_LOG(ERR, "Nat64 failed creating copy state");
+		goto free_action;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE] =
+		mlx5dr_action_create_nat64_repalce_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]) {
+		DR_LOG(ERR, "Nat64 failed creating replace state");
+		goto free_copy;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK] =
+		mlx5dr_action_create_nat64_copy_back_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK]) {
+		DR_LOG(ERR, "Nat64 failed creating copyback state");
+		goto free_replace;
+	}
+
+	return action;
+
+
+free_replace:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]);
+free_copy:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]);
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 {
 	struct mlx5dr_devx_obj *obj = NULL;
@@ -2600,6 +2996,10 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 			if (action->ipv6_route_ext.action[i])
 				mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
 		break;
+	case MLX5DR_ACTION_TYP_NAT64:
+		for (i = 0; i < MLX5DR_ACTION_NAT64_STAGES; i++)
+			mlx5dr_action_destroy(action->nat64.stages[i]);
+		break;
 	}
 }
 
@@ -2874,6 +3274,28 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply,
 	}
 }
 
+static void
+mlx5dr_action_setter_nat64(struct mlx5dr_actions_apply_data *apply,
+			   struct mlx5dr_actions_wqe_setter *setter)
+{
+	struct mlx5dr_rule_action *rule_action;
+	struct mlx5dr_action *cur_stage_action;
+	struct mlx5dr_action *action;
+	uint32_t stc_idx;
+
+	rule_action = &apply->rule_action[setter->idx_double];
+	action = rule_action->action;
+	cur_stage_action = action->nat64.stages[setter->stage_idx];
+
+	stc_idx = htobe32(cur_stage_action->stc[apply->tbl_type].offset);
+
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = stc_idx;
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
 static void
 mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply,
 				struct mlx5dr_actions_wqe_setter *setter)
@@ -3174,7 +3596,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 	struct mlx5dr_actions_wqe_setter *setter = at->setters;
 	struct mlx5dr_actions_wqe_setter *pop_setter = NULL;
 	struct mlx5dr_actions_wqe_setter *last_setter;
-	int i;
+	int i, j;
 
 	/* Note: Given action combination must be valid */
 
@@ -3361,6 +3783,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 			setter->idx_ctr = i;
 			break;
 
+		case MLX5DR_ACTION_TYP_NAT64:
+			/* NAT64 requires 3 setters, each of them does specific modify header */
+			for (j = 0; j < MLX5DR_ACTION_NAT64_STAGES; j++) {
+				setter = mlx5dr_action_setter_find_first(last_setter,
+									 ASF_DOUBLE | ASF_REMOVE);
+				setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+				setter->set_double = &mlx5dr_action_setter_nat64;
+				setter->idx_double = i;
+				/* The stage indicates which modify-header to push */
+				setter->stage_idx = j;
+			}
+			break;
+
 		default:
 			DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index fad35a845b..49c2a9bc6b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -11,6 +11,9 @@
 /* Max number of internal subactions of ipv6_ext */
 #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
 
+/* Number of MH in NAT64 */
+#define MLX5DR_ACTION_NAT64_STAGES 3
+
 enum mlx5dr_action_stc_idx {
 	MLX5DR_ACTION_STC_IDX_CTRL = 0,
 	MLX5DR_ACTION_STC_IDX_HIT = 1,
@@ -68,6 +71,34 @@ enum mlx5dr_action_stc_reparse {
 	MLX5DR_ACTION_STC_REPARSE_OFF,
 };
 
+/* 2's complement of 20, to get -20 in the add operation */
+#define MLX5DR_ACTION_NAT64_DEC_20 0xffffffec
+
+enum {
+	MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS = 20,
+	MLX5DR_ACTION_NAT64_ADD_20 = 20,
+	MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE = 9,
+	MLX5DR_ACTION_NAT64_IPV6_HEADER = 10,
+	MLX5DR_ACTION_NAT64_IPV4_HEADER = 5,
+	MLX5DR_ACTION_NAT64_IPV6_VER = 0x60000000,
+	MLX5DR_ACTION_NAT64_IPV4_VER = 0x45000000,
+};
+
+/* 3 stages for the nat64 action */
+enum mlx5dr_action_nat64_stages {
+	MLX5DR_ACTION_NAT64_STAGE_COPY = 0,
+	MLX5DR_ACTION_NAT64_STAGE_REPLACE = 1,
+	MLX5DR_ACTION_NAT64_STAGE_COPYBACK = 2,
+};
+
+/* Registers for keeping data from stage to stage */
+enum {
+	MLX5DR_ACTION_NAT64_REG_CONTROL = 0,
+	MLX5DR_ACTION_NAT64_REG_SRC_IP = 1,
+	MLX5DR_ACTION_NAT64_REG_DST_IP = 2,
+	MLX5DR_ACTION_NAT64_REG_MAX = 3,
+};
+
 struct mlx5dr_action_default_stc {
 	struct mlx5dr_pool_chunk nop_ctr;
 	struct mlx5dr_pool_chunk nop_dw5;
@@ -109,6 +140,7 @@ struct mlx5dr_actions_wqe_setter {
 	uint8_t idx_double;
 	uint8_t idx_ctr;
 	uint8_t idx_hit;
+	uint8_t stage_idx;
 	uint8_t flags;
 	uint8_t extra_data;
 };
@@ -182,6 +214,9 @@ struct mlx5dr_action {
 					uint8_t num_of_words;
 					bool decap;
 				} remove_header;
+				struct {
+					struct mlx5dr_action *stages[MLX5DR_ACTION_NAT64_STAGES];
+				} nat64;
 			};
 		};
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 11557bcab8..39e168d556 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -28,6 +28,7 @@ const char *mlx5dr_debug_action_type_str[] = {
 	[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
 	[MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT",
 	[MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT",
+	[MLX5DR_ACTION_TYP_NAT64] = "NAT64",
 };
 
 static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
-- 
2.25.1



* [PATCH 6/8] net/mlx5: create NAT64 actions during configuration
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (4 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 5/8] net/mlx5/hws: support NAT64 action Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 7/8] net/mlx5: add NAT64 action support in rule creation Bing Zhao
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

The NAT64 DR actions can be shared among the tables. All these
actions are created when configuring the flow queues and saved for
future usage.

Even though the actions can be shared, the actual hardware resources
are unique per flow rule.
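
From the application point of view, these shared actions come into
existence when the flow queues are configured. A minimal sketch,
assuming HW Steering (dv_flow_en=2) is enabled; the counter number
and queue size below are placeholders:

  #include <rte_flow.h>

  static int
  configure_flow_queues(uint16_t port_id, struct rte_flow_error *error)
  {
          const struct rte_flow_port_attr port_attr = { .nb_counters = 512 };
          const struct rte_flow_queue_attr queue_attr = { .size = 64 };
          const struct rte_flow_queue_attr *queue_attr_list[] = { &queue_attr };

          /* flow_hw_configure() runs underneath and, with this patch, also
           * calls flow_hw_create_nat64_actions() for every enabled domain.
           */
          return rte_flow_configure(port_id, &port_attr, 1, queue_attr_list,
                                    error);
  }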

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini |  1 +
 doc/guides/nics/mlx5.rst          |  9 ++++-
 drivers/net/mlx5/mlx5.h           |  6 +++
 drivers/net/mlx5/mlx5_flow.h      | 11 ++++++
 drivers/net/mlx5/mlx5_flow_dv.c   |  4 +-
 drivers/net/mlx5/mlx5_flow_hw.c   | 65 +++++++++++++++++++++++++++++++
 6 files changed, 94 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0739fe9d63..f074ff20db 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -115,6 +115,7 @@ mark                 = Y
 meter                = Y
 meter_mark           = Y
 modify_field         = Y
+nat64                = Y
 nvgre_decap          = Y
 nvgre_encap          = Y
 of_pop_vlan          = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 6b52fb93c5..920cd1e62f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -167,7 +167,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
-
+- NAT64.
 
 Limitations
 -----------
@@ -779,6 +779,13 @@ Limitations
   if preceding active application rules are still present and vice versa.
 
 
+- NAT64 action:
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - Supported only on non-root tables.
+  - The action order limitations are the same as for the modify field action.
+  - The last 2 TAG registers will be used implicitly in address backup mode.
+  - Even if the action can be shared, new steering entries will be created per flow rule. It is recommended to share a single rule with the NAT64 action to reduce the duplication of entries. The default address and other field conversions will be handled by the NAT64 action. To support other addresses, new rule(s) with modify field actions on the IP addresses should be created.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b73ab78870..860c77a4dd 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1967,6 +1967,12 @@ struct mlx5_priv {
 	struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. */
 	struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
 	/**< HW steering templates used to create control flow rules. */
+	/*
+	 * The NAT64 action can be shared among matchers per domain.
+	 * [0]: RTE_FLOW_NAT64_6TO4, [1]: RTE_FLOW_NAT64_4TO6
+	 * TODO: consider adding a *_MAX macro.
+	 */
+	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	uint16_t shared_refcnt; /* HW steering host reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 6dde9de688..81026632ed 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -159,6 +159,17 @@ struct mlx5_rte_flow_item_sq {
 	uint32_t queue; /* DevX SQ number */
 };
 
+/* Map from registers to modify fields. */
+extern enum mlx5_modification_field reg_to_field[];
+extern const size_t mlx5_mod_reg_size;
+
+static __rte_always_inline enum mlx5_modification_field
+mlx5_covert_reg_to_field(enum modify_reg reg)
+{
+	MLX5_ASSERT((size_t)reg < mlx5_mod_reg_size);
+	return reg_to_field[reg];
+}
+
 /* Feature name to allocate metadata register. */
 enum mlx5_feature_name {
 	MLX5_HAIRPIN_RX,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 115d730317..97915a54ef 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -958,7 +958,7 @@ flow_dv_convert_action_modify_tcp_ack
 					     MLX5_MODIFICATION_TYPE_ADD, error);
 }
 
-static enum mlx5_modification_field reg_to_field[] = {
+enum mlx5_modification_field reg_to_field[] = {
 	[REG_NON] = MLX5_MODI_OUT_NONE,
 	[REG_A] = MLX5_MODI_META_DATA_REG_A,
 	[REG_B] = MLX5_MODI_META_DATA_REG_B,
@@ -976,6 +976,8 @@ static enum mlx5_modification_field reg_to_field[] = {
 	[REG_C_11] = MLX5_MODI_META_REG_C_11,
 };
 
+const size_t mlx5_mod_reg_size = RTE_DIM(reg_to_field);
+
 /**
  * Convert register set to DV specification.
  *
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..9b9ad8de2d 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7413,6 +7413,66 @@ flow_hw_destroy_send_to_kernel_action(struct mlx5_priv *priv)
 	}
 }
 
+static void
+flow_hw_destroy_nat64_actions(struct mlx5_priv *priv)
+{
+	uint32_t i;
+
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (priv->action_nat64[i][0]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][0]);
+			priv->action_nat64[i][0] = NULL;
+		}
+		if (priv->action_nat64[i][1]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][1]);
+			priv->action_nat64[i][1] = NULL;
+		}
+	}
+}
+
+static int
+flow_hw_create_nat64_actions(struct mlx5_priv *priv, struct rte_flow_error *error)
+{
+	struct mlx5dr_action_nat64_attr attr;
+	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
+	uint32_t i;
+	const uint32_t flags[MLX5DR_TABLE_TYPE_MAX] = {
+		MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_TX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED,
+	};
+	struct mlx5dr_action *act;
+
+	attr.registers = regs;
+	/* Try to use 3 registers by default. */
+	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
+	for (i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++) {
+		MLX5_ASSERT(priv->sh->registers.nat64_regs[i] != REG_NON);
+		regs[i] = mlx5_covert_reg_to_field(priv->sh->registers.nat64_regs[i]);
+	}
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (i == MLX5DR_TABLE_TYPE_FDB && !priv->sh->config.dv_esw_en)
+			continue;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v6 to v4 action.");
+		priv->action_nat64[i][0] = act;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v4 to v6 action.");
+		priv->action_nat64[i][1] = act;
+	}
+	return 0;
+}
+
 /**
  * Create an egress pattern template matching on source SQ.
  *
@@ -9539,6 +9599,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				   NULL, "Failed to VLAN actions.");
 		goto err;
 	}
+	ret = flow_hw_create_nat64_actions(priv, error);
+	if (ret)
+		goto err;
 	if (_queue_attr)
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
@@ -9570,6 +9633,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
@@ -9649,6 +9713,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
-- 
2.25.1



* [PATCH 7/8] net/mlx5: add NAT64 action support in rule creation
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (5 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 6/8] net/mlx5: create NAT64 actions during configuration Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2023-12-27  9:07 ` [PATCH 8/8] net/mlx5: validate the actions combination with NAT64 Bing Zhao
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

The action will handle the IPv4 and IPv6 header translation. It will
add / remove the IPv6 address prefix by default.

To use a user-specific address, another rule to modify the addresses
of the IP header is needed.
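
A hedged sketch of the "another rule" mentioned above: a modify field
action, placed in a follow-up rule (e.g. in the group jumped to after
NAT64), overwriting the default prefix with a user-specific IPv6
source prefix. The prefix value and the width are illustrative only:

  #include <rte_flow.h>

  /* Rewrite the upper 96 bits of the IPv6 source address with an
   * example 2001:db8::/96 prefix, keeping the embedded IPv4 address
   * in the lowest 32 bits.
   */
  static const struct rte_flow_action_modify_field set_src_prefix = {
          .operation = RTE_FLOW_MODIFY_SET,
          .dst = { .field = RTE_FLOW_FIELD_IPV6_SRC },
          .src = { .field = RTE_FLOW_FIELD_VALUE,
                   .value = { 0x20, 0x01, 0x0d, 0xb8 } },
          .width = 96,
  };

  static const struct rte_flow_action follow_up_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &set_src_prefix },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };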

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9b9ad8de2d..9b60233549 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2479,6 +2479,19 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			if (masks->conf &&
+			    ((const struct rte_flow_action_nat64 *)masks->conf)->type) {
+				const struct rte_flow_action_nat64 *nat64_c =
+					(const struct rte_flow_action_nat64 *)actions->conf;
+
+				acts->rule_acts[dr_pos].action =
+					priv->action_nat64[type][nat64_c->type];
+			} else if (__flow_hw_act_data_general_append(priv, acts,
+								     actions->type,
+								     src_pos, dr_pos))
+				goto err;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -2912,6 +2925,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
+	const struct rte_flow_action_nat64 *nat64_c = NULL;
 	uint8_t *buf = job->encap_data;
 	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
@@ -3179,6 +3193,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (ret != 0)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			nat64_c = action->conf;
+			if (!priv->action_nat64[table->type][nat64_c->type])
+				return -1;
+			rule_acts[act_data->action_dst].action =
+				priv->action_nat64[table->type][nat64_c->type];
+			break;
 		default:
 			break;
 		}
@@ -5872,6 +5893,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_NAT64] = MLX5DR_ACTION_TYP_NAT64,
 };
 
 static inline void
-- 
2.25.1



* [PATCH 8/8] net/mlx5: validate the actions combination with NAT64
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (6 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 7/8] net/mlx5: add NAT64 action support in rule creation Bing Zhao
@ 2023-12-27  9:07 ` Bing Zhao
  2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2023-12-27  9:07 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

NAT64 is treated as a modify header action. The action order and
limitations should be the same as those of the modify header action
in each domain.

Since the last 2 TAG registers will be used implicitly in the address
backup mode, the values in these registers are no longer valid after
the NAT64 action. The application should not try to match these TAGs
after a rule that contains the NAT64 action.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 81026632ed..6bdd350aef 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -376,6 +376,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
+#define MLX5_FLOW_ACTION_NAT64 (1ull << 50)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9b60233549..09ae49faa4 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5841,6 +5841,10 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				MLX5_HW_VLAN_PUSH_VID_IDX;
 			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			/* TODO: Validation logic */
+			action_flags |= MLX5_FLOW_ACTION_NAT64;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
-- 
2.25.1



* [PATCH v2 0/2] support NAT64 action
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (7 preceding siblings ...)
  2023-12-27  9:07 ` [PATCH 8/8] net/mlx5: validate the actions combination with NAT64 Bing Zhao
@ 2024-01-31  9:38 ` Bing Zhao
  2024-01-31  9:38   ` [PATCH v2 1/2] ethdev: introduce " Bing Zhao
                     ` (2 more replies)
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
                   ` (2 subsequent siblings)
  11 siblings, 3 replies; 36+ messages in thread
From: Bing Zhao @ 2024-01-31  9:38 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

This patchset introduces the NAT64 action support for rte_flow.

---
v2: split the common part and PMD part.
---

Bing Zhao (2):
  ethdev: introduce NAT64 action
  app/testpmd: add support for NAT64 in the command line

 app/test-pmd/cmdline_flow.c                 | 23 ++++++++++++++++++
 doc/guides/nics/features/default.ini        |  1 +
 doc/guides/prog_guide/rte_flow.rst          |  8 ++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 27 +++++++++++++++++++++
 6 files changed, 64 insertions(+)

-- 
2.34.1



* [PATCH v2 1/2] ethdev: introduce NAT64 action
  2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
@ 2024-01-31  9:38   ` Bing Zhao
  2024-02-01  8:38     ` Ori Kam
  2024-01-31  9:38   ` [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line Bing Zhao
  2024-02-01 16:00   ` [PATCH v2 0/2] support NAT64 action Ferruh Yigit
  2 siblings, 1 reply; 36+ messages in thread
From: Bing Zhao @ 2024-01-31  9:38 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

Different technologies are used to support the communication between
IPv4 and IPv6 nodes in a network, such as dual stacks, tunneling and
NAT64. For some IPv4-only clients, it is hard to deploy new software
and/or hardware to support the IPv6 protocol.

NAT64 is one choice, and it also reduces unnecessary traffic overhead
in the network. The NAT64 gateways take the responsibility of
translating the packet headers between the IPv6 clouds and the
IPv4-only clouds.

This commit introduces the NAT64 flow action to offload the software
involvement to the hardware.

This action supports the offloading of the IP header translation. The
following fields should be reset correctly in the translation:
  - Version
  - Traffic Class / TOS
  - Flow Label (0 in v4)
  - Payload Length / Total length
  - Next Header
  - Hop Limit / TTL

The PMD needs to support the basic conversion of the fields above,
and the well-known prefix is used when translating an IPv4 address to
an IPv6 address. Additional modify field actions can be used after
NAT64 to support other modes with a different prefix and offset.

The ICMP* and transport layer protocols are out of the scope of the
NAT64 rte_flow action.

Reference links:
  - https://datatracker.ietf.org/doc/html/rfc6146
  - https://datatracker.ietf.org/doc/html/rfc6052
  - https://datatracker.ietf.org/doc/html/rfc6145

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/default.ini |  1 +
 doc/guides/prog_guide/rte_flow.rst   |  8 ++++++++
 lib/ethdev/rte_flow.c                |  1 +
 lib/ethdev/rte_flow.h                | 27 +++++++++++++++++++++++++++
 4 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 6d50236292..4db7e41193 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -171,6 +171,7 @@ mark                 =
 meter                =
 meter_mark           =
 modify_field         =
+nat64                =
 nvgre_decap          =
 nvgre_encap          =
 of_copy_ttl_in       =
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 900fdaefb6..7af329bd93 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3520,6 +3520,14 @@ The packets will be received by the kernel driver sharing the same device
 as the DPDK port on which this action is configured.
 
 
+Action: ``NAT64``
+^^^^^^^^^^^^^^^^^
+
+This action does the header translation between IPv4 and IPv6. Besides
+converting the IP addresses, other fields in the IP header are handled as
+well. The ``type`` field should be provided as defined in the
+``rte_flow_action_nat64`` when creating the action.
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 3f58d792f9..156545454c 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -271,6 +271,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 		       sizeof(struct rte_flow_action_indirect_list)),
 	MK_FLOW_ACTION(PROG,
 		       sizeof(struct rte_flow_action_prog)),
+	MK_FLOW_ACTION(NAT64, sizeof(struct rte_flow_action_nat64)),
 };
 
 int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1267c146e5..1dded812ec 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3051,6 +3051,13 @@ enum rte_flow_action_type {
 	 * @see struct rte_flow_action_prog.
 	 */
 	RTE_FLOW_ACTION_TYPE_PROG,
+
+	/**
+	 * Support the NAT64 translation.
+	 *
+	 * @see struct rte_flow_action_nat64
+	 */
+	RTE_FLOW_ACTION_TYPE_NAT64,
 };
 
 /**
@@ -4180,6 +4187,26 @@ rte_flow_dynf_metadata_set(struct rte_mbuf *m, uint32_t v)
 	*RTE_FLOW_DYNF_METADATA(m) = v;
 }
 
+/**
+ * NAT64 translation type for IP headers.
+ */
+enum rte_flow_nat64_type {
+	RTE_FLOW_NAT64_6TO4 = 0, /**< IPv6 to IPv4 headers translation. */
+	RTE_FLOW_NAT64_4TO6 = 1, /**< IPv4 to IPv6 headers translation. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_NAT64
+ *
+ * Specify the NAT64 translation type.
+ */
+struct rte_flow_action_nat64 {
+	enum rte_flow_nat64_type type;
+};
+
 /**
  * Definition of a single action.
  *
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line
  2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
  2024-01-31  9:38   ` [PATCH v2 1/2] ethdev: introduce " Bing Zhao
@ 2024-01-31  9:38   ` Bing Zhao
  2024-02-01  8:38     ` Ori Kam
  2024-02-01 16:00   ` [PATCH v2 0/2] support NAT64 action Ferruh Yigit
  2 siblings, 1 reply; 36+ messages in thread
From: Bing Zhao @ 2024-01-31  9:38 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, ferruh.yigit, andrew.rybchenko, dev,
	rasland

The NAT64 action and its translation type will be parsed in the flow
command line.

Usage example with template API:
  ...
  flow actions_template 0 create ingress actions_template_id 1 \
    template count / nat64 / jump / end mask count / nat64 / \
    jump / end
  flow template_table 0 create group 1 priority 0 ingress table_id \
    0x1 rules_number 8 pattern_template 0 actions_template 1
  flow queue 0 create 2 template_table 0x1 pattern_template 0 \
    actions_template 0 postpone no pattern eth / end actions count / \
    nat64 type 1 / jump group 2 / end
   ...
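
For reference, a rough C equivalent of the actions template step above,
using the asynchronous template API. The port id and attributes are
placeholders and error handling is omitted; this is a sketch, not part of
the patch:

  /* Template "count / nat64 / jump"; the NAT64 conf is left out of the
   * mask so the translation type can be chosen per rule, as done with
   * "nat64 type 1" in the flow queue command above.
   */
  const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
  const struct rte_flow_action tmpl_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_COUNT },
          { .type = RTE_FLOW_ACTION_TYPE_NAT64 },
          { .type = RTE_FLOW_ACTION_TYPE_JUMP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  const struct rte_flow_action tmpl_masks[] = {
          { .type = RTE_FLOW_ACTION_TYPE_COUNT },
          { .type = RTE_FLOW_ACTION_TYPE_NAT64 }, /* conf == NULL: per-rule type */
          { .type = RTE_FLOW_ACTION_TYPE_JUMP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  struct rte_flow_error err;
  struct rte_flow_actions_template *at =
          rte_flow_actions_template_create(port_id, &at_attr,
                                           tmpl_actions, tmpl_masks, &err);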

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 23 +++++++++++++++++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 ++++
 2 files changed, 27 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4062879552..d26986a9ab 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -733,6 +733,8 @@ enum index {
 	ACTION_IPV6_EXT_PUSH,
 	ACTION_IPV6_EXT_PUSH_INDEX,
 	ACTION_IPV6_EXT_PUSH_INDEX_VALUE,
+	ACTION_NAT64,
+	ACTION_NAT64_MODE,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -2209,6 +2211,7 @@ static const enum index next_action[] = {
 	ACTION_QUOTA_QU,
 	ACTION_IPV6_EXT_REMOVE,
 	ACTION_IPV6_EXT_PUSH,
+	ACTION_NAT64,
 	ZERO,
 };
 
@@ -2550,6 +2553,12 @@ static const enum index action_represented_port[] = {
 	ZERO,
 };
 
+static const enum index action_nat64[] = {
+	ACTION_NAT64_MODE,
+	ACTION_NEXT,
+	ZERO,
+};
+
 static int parse_set_raw_encap_decap(struct context *, const struct token *,
 				     const char *, unsigned int,
 				     void *, unsigned int);
@@ -7077,6 +7086,20 @@ static const struct token token_list[] = {
 		.call = parse_vc_action_ipv6_ext_push_index,
 		.comp = comp_set_ipv6_ext_index,
 	},
+	[ACTION_NAT64] = {
+		.name = "nat64",
+		.help = "NAT64 IP headers translation",
+		.priv = PRIV_ACTION(NAT64, sizeof(struct rte_flow_action_nat64)),
+		.next = NEXT(action_nat64),
+		.call = parse_vc,
+	},
+	[ACTION_NAT64_MODE] = {
+		.name = "type",
+		.help = "NAT64 translation type",
+		.next = NEXT(action_nat64, NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_nat64, type)),
+		.call = parse_vc_conf,
+	},
 	/* Top level command. */
 	[SET] = {
 		.name = "set",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 38ab421547..d1801c1b26 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4169,6 +4169,10 @@ This section lists supported actions and their attributes, if any.
   - ``src_ptr``: pointer to source immediate value.
   - ``width``: number of bits to copy.
 
+- ``nat64``: NAT64 IP headers translation
+
+  - ``type {unsigned}``: NAT64 translation type
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v2 1/2] ethdev: introduce NAT64 action
  2024-01-31  9:38   ` [PATCH v2 1/2] ethdev: introduce " Bing Zhao
@ 2024-02-01  8:38     ` Ori Kam
  0 siblings, 0 replies; 36+ messages in thread
From: Ori Kam @ 2024-02-01  8:38 UTC (permalink / raw)
  To: Bing Zhao, aman.deep.singh, yuying.zhang, Dariusz Sosnowski,
	Slava Ovsiienko, Suanming Mou, Matan Azrad,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	ferruh.yigit, andrew.rybchenko, dev, Raslan Darawsheh

Hi Bing

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Wednesday, January 31, 2024 11:38 AM
> 
> In order to support the communication between IPv4 and IPv6 nodes in
> the network, different technologies are used, like dual-stacks,
> tunneling and NAT64. In some IPv4-only clients, it is hard to deploy
> new software and(or) hardware to support IPv6 protocol.
> 
> NAT64 is a choice and it will also reduce the unnecessary overhead of
> the traffic in the network. The NAT64 gateways take the
> responsibility of the packet headers translation between the IPv6
> clouds and IPv4-only clouds.
> 
> The commit introduce the NAT64 flow action to offload the software
> involvement to the hardware.
> 
> This action should support the offloading of the IP headers'
> translation. The following fields should be reset correctly in the
> translation.
>   - Version
>   - Traffic Class / TOS
>   - Flow Label (0 in v4)
>   - Payload Length / Total length
>   - Next Header
>   - Hop Limit / TTL
> 
> The PMD needs to support the basic conversion of the fields above,
> and the well-known prefix will be used when translating IPv4 address
> to IPv6 address. Another modify fields can be used after the NAT64 to
> support other modes with different prefix and offset.
> 
> The ICMP* and transport layers protocol is out of the scope of NAT64
> rte_flow action.
> 
> Reference links:
>   - https://datatracker.ietf.org/doc/html/rfc6146
>   - https://datatracker.ietf.org/doc/html/rfc6052
>   - https://datatracker.ietf.org/doc/html/rfc6145
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line
  2024-01-31  9:38   ` [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line Bing Zhao
@ 2024-02-01  8:38     ` Ori Kam
  0 siblings, 0 replies; 36+ messages in thread
From: Ori Kam @ 2024-02-01  8:38 UTC (permalink / raw)
  To: Bing Zhao, aman.deep.singh, yuying.zhang, Dariusz Sosnowski,
	Slava Ovsiienko, Suanming Mou, Matan Azrad,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	ferruh.yigit, andrew.rybchenko, dev, Raslan Darawsheh



> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Wednesday, January 31, 2024 11:38 AM
> 
> The type of NAT64 action will be parsed.
> 
> Usage example with template API:
>   ...
>   flow actions_template 0 create ingress actions_template_id 1 \
>     template count / nat64 / jump / end mask count / nat64 / \
>     jump / end
>   flow template_table 0 create group 1 priority 0 ingress table_id \
>     0x1 rules_number 8 pattern_template 0 actions_template 1
>   flow queue 0 create 2 template_table 0x1 pattern_template 0 \
>     actions_template 0 postpone no pattern eth / end actions count / \
>     nat64 type 1 / jump group 2 / end
>    ...
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
>  app/test-pmd/cmdline_flow.c                 | 23 +++++++++++++++++++++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 ++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 4062879552..d26986a9ab 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -733,6 +733,8 @@ enum index {
>  	ACTION_IPV6_EXT_PUSH,
>  	ACTION_IPV6_EXT_PUSH_INDEX,
>  	ACTION_IPV6_EXT_PUSH_INDEX_VALUE,
> +	ACTION_NAT64,
> +	ACTION_NAT64_MODE,
>  };
> 
>  /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -2209,6 +2211,7 @@ static const enum index next_action[] = {
>  	ACTION_QUOTA_QU,
>  	ACTION_IPV6_EXT_REMOVE,
>  	ACTION_IPV6_EXT_PUSH,
> +	ACTION_NAT64,
>  	ZERO,
>  };
> 
> @@ -2550,6 +2553,12 @@ static const enum index
> action_represented_port[] = {
>  	ZERO,
>  };
> 
> +static const enum index action_nat64[] = {
> +	ACTION_NAT64_MODE,
> +	ACTION_NEXT,
> +	ZERO,
> +};
> +
>  static int parse_set_raw_encap_decap(struct context *, const struct token *,
>  				     const char *, unsigned int,
>  				     void *, unsigned int);
> @@ -7077,6 +7086,20 @@ static const struct token token_list[] = {
>  		.call = parse_vc_action_ipv6_ext_push_index,
>  		.comp = comp_set_ipv6_ext_index,
>  	},
> +	[ACTION_NAT64] = {
> +		.name = "nat64",
> +		.help = "NAT64 IP headers translation",
> +		.priv = PRIV_ACTION(NAT64, sizeof(struct
> rte_flow_action_nat64)),
> +		.next = NEXT(action_nat64),
> +		.call = parse_vc,
> +	},
> +	[ACTION_NAT64_MODE] = {
> +		.name = "type",
> +		.help = "NAT64 translation type",
> +		.next = NEXT(action_nat64,
> NEXT_ENTRY(COMMON_UNSIGNED)),
> +		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_nat64,
> type)),
> +		.call = parse_vc_conf,
> +	},
>  	/* Top level command. */
>  	[SET] = {
>  		.name = "set",
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 38ab421547..d1801c1b26 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -4169,6 +4169,10 @@ This section lists supported actions and their
> attributes, if any.
>    - ``src_ptr``: pointer to source immediate value.
>    - ``width``: number of bits to copy.
> 
> +- ``nat64``: NAT64 IP headers translation
> +
> +  - ``type {unsigned}``: NAT64 translation type
> +
>  Destroying flow rules
>  ~~~~~~~~~~~~~~~~~~~~~
> 
> --
> 2.34.1

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 0/2] support NAT64 action
  2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
  2024-01-31  9:38   ` [PATCH v2 1/2] ethdev: introduce " Bing Zhao
  2024-01-31  9:38   ` [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line Bing Zhao
@ 2024-02-01 16:00   ` Ferruh Yigit
  2024-02-01 16:05     ` Ferruh Yigit
  2 siblings, 1 reply; 36+ messages in thread
From: Ferruh Yigit @ 2024-02-01 16:00 UTC (permalink / raw)
  To: orika, aman.deep.singh, yuying.zhang, dsosnowski, viacheslavo,
	suanmingm, matan, thomas, andrew.rybchenko, dev, rasland,
	Bing Zhao


On Wed, 31 Jan 2024 11:38:02 +0200, Bing Zhao wrote:
> This patchset introduce the NAT64 action support for rte_flow.
> 

Applied, thanks!

[1/2] ethdev: introduce NAT64 action
      commit: 8d06f5a2da9991ebd514869a72f5136e0ee1eaf1
[2/2] app/testpmd: add support for NAT64 in the command line
      commit: 1d14e0581427004de88ac95e25529761f4492621

Best regards,
-- 
Ferruh Yigit <ferruh.yigit@amd.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 0/2] support NAT64 action
  2024-02-01 16:00   ` [PATCH v2 0/2] support NAT64 action Ferruh Yigit
@ 2024-02-01 16:05     ` Ferruh Yigit
  0 siblings, 0 replies; 36+ messages in thread
From: Ferruh Yigit @ 2024-02-01 16:05 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand, Akhil Goyal,
	Jerin Jacob Kollanukkaran, Maxime Coquelin
  Cc: Bing Zhao, dev

On 2/1/2024 4:00 PM, Ferruh Yigit wrote:
> 
> On Wed, 31 Jan 2024 11:38:02 +0200, Bing Zhao wrote:
>> This patchset introduce the NAT64 action support for rte_flow.
>>
> 
> Applied, thanks!
> 
> [1/2] ethdev: introduce NAT64 action
>       commit: 8d06f5a2da9991ebd514869a72f5136e0ee1eaf1
> [2/2] app/testpmd: add support for NAT64 in the command line
>       commit: 1d14e0581427004de88ac95e25529761f4492621
> 
> Best regards,
>

Hi sub-tree maintainers,

I sent the applied message above from the console using the "b4 ty"
tool [1], in case it helps you, FYI.


[1]
https://www.mankier.com/5/b4#Subcommand_Options-b4_ty

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/5] NAT64 support in mlx5 PMD
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (8 preceding siblings ...)
  2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
@ 2024-02-20 14:10 ` Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
                     ` (4 more replies)
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
  11 siblings, 5 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

This patch set contains the mlx5 PMD implementation for NAT64.

Updates in v2:
  1. Separated the common rte_flow and testpmd part from the PMD part.
  2. Reordered the commits.
  3. Bug fixes, code polishing and documentation updates.

Bing Zhao (4):
  net/mlx5: fetch the available registers for NAT64
  net/mlx5: create NAT64 actions during configuration
  net/mlx5: add NAT64 action support in rule creation
  net/mlx5: validate the actions combination with NAT64

Erez Shitrit (1):
  net/mlx5/hws: support NAT64 action

 doc/guides/nics/features/mlx5.ini      |   1 +
 doc/guides/nics/mlx5.rst               |  10 +
 doc/guides/rel_notes/release_24_03.rst |   7 +
 drivers/net/mlx5/hws/mlx5dr.h          |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c   | 437 ++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h   |  35 ++
 drivers/net/mlx5/hws/mlx5dr_debug.c    |   1 +
 drivers/net/mlx5/mlx5.c                |   9 +
 drivers/net/mlx5/mlx5.h                |   8 +
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_dv.c        |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 136 ++++++++
 12 files changed, 687 insertions(+), 2 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 1/5] net/mlx5/hws: support NAT64 action
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
@ 2024-02-20 14:10   ` Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland
  Cc: Erez Shitrit

From: Erez Shitrit <erezsh@nvidia.com>

Add support for the new action mlx5dr_action_create_nat64.
The new action allows translating IP packets from one version to the
other, IPv6 to IPv4 and vice versa.
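
For illustration, a sketch of how the new API could be used, assuming an
already created mlx5dr context "ctx"; the REG_C fields chosen below are
placeholders, the real ones depend on the registers available on the
device:

  /* Control register plus two registers backing up the original addresses. */
  uint8_t regs[3] = {
          MLX5_MODI_META_REG_C_6,
          MLX5_MODI_META_REG_C_10,
          MLX5_MODI_META_REG_C_11,
  };
  struct mlx5dr_action_nat64_attr attr = {
          .num_of_registers = 3,
          .registers = regs,
          .flags = (enum mlx5dr_action_nat64_flags)
                   (MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR),
  };
  struct mlx5dr_action *nat64_6to4 =
          mlx5dr_action_create_nat64(ctx, &attr,
                                     MLX5DR_ACTION_FLAG_HWS_RX |
                                     MLX5DR_ACTION_FLAG_SHARED);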

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h        |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c | 437 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |  35 +++
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   1 +
 4 files changed, 501 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 9c5b068c93..9ee6503439 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -51,6 +51,7 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
 	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_NAT64,
 	MLX5DR_ACTION_TYP_MAX,
 };
 
@@ -817,6 +818,34 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 				       uint32_t log_bulk_size,
 				       uint32_t flags);
 
+enum mlx5dr_action_nat64_flags {
+	MLX5DR_ACTION_NAT64_V4_TO_V6 = 1 << 0,
+	MLX5DR_ACTION_NAT64_V6_TO_V4 = 1 << 1,
+	/* Indicates whether to back up the IPv4 addresses in the last two registers */
+	MLX5DR_ACTION_NAT64_BACKUP_ADDR = 1 << 2,
+};
+
+struct mlx5dr_action_nat64_attr {
+	uint8_t num_of_registers;
+	uint8_t *registers;
+	enum mlx5dr_action_nat64_flags flags;
+};
+
+/* Create direct rule nat64 action.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] attr
+ *	The relevant attribute of the NAT64 action.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success, NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..06cbf5930b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -31,6 +31,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -52,6 +53,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -75,6 +77,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -246,6 +249,311 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action,
 		mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB);
 }
 
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_state(struct mlx5dr_context *ctx,
+				      struct mlx5dr_action_nat64_attr *attr,
+				      uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* | 8 bit - 8 bit     - 16 bit     |
+	 * | ttl   - protocol  - packet-len |
+	 */
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);/* 16 bits in the lsb */
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* set sip and dip to 0, in order to have new csum */
+	if (is_v4_to_v6) {
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_SIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_DIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = (action_ptr - (uint8_t *) modify_action_data);
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create copy for NAT64: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_replace_state(struct mlx5dr_context *ctx,
+					 struct mlx5dr_action_nat64_attr *attr,
+					 uint32_t flags)
+{
+	uint32_t address_prefix[MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE] = {0};
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint8_t header_size_in_dw;
+	uint8_t *action_ptr;
+	uint32_t eth_type;
+	bool is_v4_to_v6;
+	uint32_t ip_ver;
+	int i;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		uint32_t nat64_well_known_pref[] = {0x00010000,
+						    0x9bff6400, 0x0, 0x0, 0x0,
+						    0x9bff6400, 0x0, 0x0, 0x0};
+
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV6_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV6_VER;
+		eth_type = RTE_ETHER_TYPE_IPV6;
+		memcpy(address_prefix, nat64_well_known_pref,
+		       MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE * sizeof(uint32_t));
+	} else {
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV4_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV4_VER;
+		eth_type = RTE_ETHER_TYPE_IPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_ETHERTYPE);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	MLX5_SET(set_action_in, action_ptr, data, eth_type);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* Push an empty header with the relevant IP version. */
+	MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_INSERT);
+	MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument, ip_ver);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	for (i = 0; i < header_size_in_dw - 1; i++) {
+		MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+				MLX5_MODIFICATION_TYPE_INSERT);
+		MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+				MLX5_HEADER_ANCHOR_IPV6_IPV4);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument,
+			 htobe32(address_prefix[i]));
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Remove orig src/dst addr (8 bytes, 4 words) */
+	MLX5_SET(stc_ste_param_remove, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_REMOVE);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_start_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_end_anchor,
+		 MLX5_HEADER_ANCHOR_TCP_UDP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *) modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_back_state(struct mlx5dr_context *ctx,
+					   struct mlx5dr_action_nat64_attr *attr,
+					   uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint32_t packet_len_add;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		/* 2's complement of 20, to get -20 in the add operation */
+		packet_len_add = MLX5DR_ACTION_NAT64_DEC_20;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		/* The IPv4 length includes the 20-byte header, so add 20 over the IPv6 length */
+		packet_len_add = MLX5DR_ACTION_NAT64_ADD_20;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *) modify_action_data;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 32);
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* If required, copy back the original addresses */
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Subtract/add 20 header bytes to get the final IPv6/IPv4 length */
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_ADD);
+	MLX5_SET(set_action_in, action_ptr, field, packet_len_field);
+	MLX5_SET(set_action_in, action_ptr, data, packet_len_add);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *) modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
 static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions)
 {
 	DR_LOG(ERR, "Invalid action_type sequence");
@@ -2526,6 +2834,94 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+static bool
+mlx5dr_action_nat64_validate_param(struct mlx5dr_action_nat64_attr *attr,
+				   uint32_t flags)
+{
+	if (mlx5dr_action_is_root_flags(flags)) {
+		DR_LOG(ERR, "Nat64 action not supported for root");
+		rte_errno = ENOTSUP;
+		return false;
+	}
+
+	if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) {
+		DR_LOG(ERR, "Nat64 action must be with SHARED flag");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->num_of_registers > MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 action doesn't support more than %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR &&
+	    attr->num_of_registers != MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 backup addr requires %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (!(attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6 ||
+	      attr->flags & MLX5DR_ACTION_NAT64_V6_TO_V4)) {
+		DR_LOG(ERR, "Nat64 backup addr requires one mode at least");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	return true;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags)
+{
+	struct mlx5dr_action *action;
+
+	if (!mlx5dr_action_nat64_validate_param(attr, flags))
+		return NULL;
+
+	action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_NAT64);
+	if (!action)
+		return NULL;
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY] =
+		mlx5dr_action_create_nat64_copy_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]) {
+		DR_LOG(ERR, "Nat64 failed creating copy state");
+		goto free_action;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE] =
+		mlx5dr_action_create_nat64_replace_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]) {
+		DR_LOG(ERR, "Nat64 failed creating replace state");
+		goto free_copy;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK] =
+		mlx5dr_action_create_nat64_copy_back_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK]) {
+		DR_LOG(ERR, "Nat64 failed creating copyback state");
+		goto free_replace;
+	}
+
+	return action;
+
+
+free_replace:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]);
+free_copy:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]);
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 {
 	struct mlx5dr_devx_obj *obj = NULL;
@@ -2600,6 +2996,10 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 			if (action->ipv6_route_ext.action[i])
 				mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
 		break;
+	case MLX5DR_ACTION_TYP_NAT64:
+		for (i = 0; i < MLX5DR_ACTION_NAT64_STAGES; i++)
+			mlx5dr_action_destroy(action->nat64.stages[i]);
+		break;
 	}
 }
 
@@ -2874,6 +3274,28 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply,
 	}
 }
 
+static void
+mlx5dr_action_setter_nat64(struct mlx5dr_actions_apply_data *apply,
+			   struct mlx5dr_actions_wqe_setter *setter)
+{
+	struct mlx5dr_rule_action *rule_action;
+	struct mlx5dr_action *cur_stage_action;
+	struct mlx5dr_action *action;
+	uint32_t stc_idx;
+
+	rule_action = &apply->rule_action[setter->idx_double];
+	action = rule_action->action;
+	cur_stage_action = action->nat64.stages[setter->stage_idx];
+
+	stc_idx = htobe32(cur_stage_action->stc[apply->tbl_type].offset);
+
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = stc_idx;
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
 static void
 mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply,
 				struct mlx5dr_actions_wqe_setter *setter)
@@ -3174,7 +3596,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 	struct mlx5dr_actions_wqe_setter *setter = at->setters;
 	struct mlx5dr_actions_wqe_setter *pop_setter = NULL;
 	struct mlx5dr_actions_wqe_setter *last_setter;
-	int i;
+	int i, j;
 
 	/* Note: Given action combination must be valid */
 
@@ -3361,6 +3783,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 			setter->idx_ctr = i;
 			break;
 
+		case MLX5DR_ACTION_TYP_NAT64:
+			/* NAT64 requires 3 setters, each of them does specific modify header */
+			for (j = 0; j < MLX5DR_ACTION_NAT64_STAGES; j++) {
+				setter = mlx5dr_action_setter_find_first(last_setter,
+									 ASF_DOUBLE | ASF_REMOVE);
+				setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+				setter->set_double = &mlx5dr_action_setter_nat64;
+				setter->idx_double = i;
+				/* The stage indicates which modify-header to push */
+				setter->stage_idx = j;
+			}
+			break;
+
 		default:
 			DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index fad35a845b..49c2a9bc6b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -11,6 +11,9 @@
 /* Max number of internal subactions of ipv6_ext */
 #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
 
+/* Number of MH in NAT64 */
+#define MLX5DR_ACTION_NAT64_STAGES 3
+
 enum mlx5dr_action_stc_idx {
 	MLX5DR_ACTION_STC_IDX_CTRL = 0,
 	MLX5DR_ACTION_STC_IDX_HIT = 1,
@@ -68,6 +71,34 @@ enum mlx5dr_action_stc_reparse {
 	MLX5DR_ACTION_STC_REPARSE_OFF,
 };
 
+ /* 2' comp to 20, to get -20 in add operation */
+#define MLX5DR_ACTION_NAT64_DEC_20 0xffffffec
+
+enum {
+	MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS = 20,
+	MLX5DR_ACTION_NAT64_ADD_20 = 20,
+	MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE = 9,
+	MLX5DR_ACTION_NAT64_IPV6_HEADER = 10,
+	MLX5DR_ACTION_NAT64_IPV4_HEADER = 5,
+	MLX5DR_ACTION_NAT64_IPV6_VER = 0x60000000,
+	MLX5DR_ACTION_NAT64_IPV4_VER = 0x45000000,
+};
+
+/* 3 stages for the nat64 action */
+enum mlx5dr_action_nat64_stages {
+	MLX5DR_ACTION_NAT64_STAGE_COPY = 0,
+	MLX5DR_ACTION_NAT64_STAGE_REPLACE = 1,
+	MLX5DR_ACTION_NAT64_STAGE_COPYBACK = 2,
+};
+
+/* Registers for keeping data from stage to stage */
+enum {
+	MLX5DR_ACTION_NAT64_REG_CONTROL = 0,
+	MLX5DR_ACTION_NAT64_REG_SRC_IP = 1,
+	MLX5DR_ACTION_NAT64_REG_DST_IP = 2,
+	MLX5DR_ACTION_NAT64_REG_MAX = 3,
+};
+
 struct mlx5dr_action_default_stc {
 	struct mlx5dr_pool_chunk nop_ctr;
 	struct mlx5dr_pool_chunk nop_dw5;
@@ -109,6 +140,7 @@ struct mlx5dr_actions_wqe_setter {
 	uint8_t idx_double;
 	uint8_t idx_ctr;
 	uint8_t idx_hit;
+	uint8_t stage_idx;
 	uint8_t flags;
 	uint8_t extra_data;
 };
@@ -182,6 +214,9 @@ struct mlx5dr_action {
 					uint8_t num_of_words;
 					bool decap;
 				} remove_header;
+				struct {
+					struct mlx5dr_action *stages[MLX5DR_ACTION_NAT64_STAGES];
+				} nat64;
 			};
 		};
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 11557bcab8..39e168d556 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -28,6 +28,7 @@ const char *mlx5dr_debug_action_type_str[] = {
 	[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
 	[MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT",
 	[MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT",
+	[MLX5DR_ACTION_TYP_NAT64] = "NAT64",
 };
 
 static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/5] net/mlx5: fetch the available registers for NAT64
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
@ 2024-02-20 14:10   ` Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

REG_C_6 is used as the 1st register and, since it is reserved
internally by default, there is no impact.

The remaining 2 registers will be fetched from the available TAGs
array from right to left. They will not be masked out in the array
because not all the rules will use the NAT64 action.
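
As a small worked example of the selection (the count below is purely
illustrative): if 10 TAG registers end up in the available array after
scanning the REG_C mask, the assignment in
mlx5_init_hws_flow_tags_registers() would result in:

  /* j == 10 after collecting the available TAG registers. */
  reg->nat64_regs[0] = REG_C_6;              /* always the control register */
  reg->nat64_regs[1] = reg->hw_avl_tags[8];  /* second to last available TAG */
  reg->nat64_regs[2] = reg->hw_avl_tags[9];  /* last available TAG */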

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 9 +++++++++
 drivers/net/mlx5/mlx5.h | 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 881c42a97a..9c3b9946e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1644,6 +1644,15 @@ mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
 		if (!!((1 << i) & masks))
 			reg->hw_avl_tags[j++] = mlx5_regc_value(i);
 	}
+	/*
+	 * Set the registers for NAT64 usage internally. REG_C_6 is always used.
+	 * The other 2 registers will be fetched from right to left, at least 2
+	 * tag registers should be available.
+	 */
+	MLX5_ASSERT(j >= (MLX5_FLOW_NAT64_REGS_MAX - 1));
+	reg->nat64_regs[0] = REG_C_6;
+	reg->nat64_regs[1] = reg->hw_avl_tags[j - 2];
+	reg->nat64_regs[2] = reg->hw_avl_tags[j - 1];
 }
 
 static void
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5265d1aa1f..544cf35069 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1407,10 +1407,12 @@ struct mlx5_hws_cnt_svc_mng {
 };
 
 #define MLX5_FLOW_HW_TAGS_MAX 12
+#define MLX5_FLOW_NAT64_REGS_MAX 3
 
 struct mlx5_dev_registers {
 	enum modify_reg aso_reg;
 	enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
+	enum modify_reg nat64_regs[MLX5_FLOW_NAT64_REGS_MAX];
 };
 
 #if defined(HAVE_MLX5DV_DR) && \
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 3/5] net/mlx5: create NAT64 actions during configuration
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
@ 2024-02-20 14:10   ` Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
  4 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

The NAT64 DR actions can be shared among the tables. All these
actions can be created while configuring the flow queues and saved
for future usage.

Even though the actions can be shared now, the actual hardware
resources used inside each flow rule are unique.
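
From the application's point of view nothing NAT64-specific is needed for
this step: the shared actions are created inside the PMD while handling
the standard rte_flow_configure() call. A hedged sketch with placeholder
values:

  const struct rte_flow_port_attr port_attr = { .nb_counters = 1024 };
  const struct rte_flow_queue_attr queue_attr = { .size = 256 };
  const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };
  struct rte_flow_error err;

  /* The PMD pre-creates the shared NAT64 DR actions during this call. */
  int ret = rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &err);
  if (ret != 0)
          printf("flow configure failed: %s\n", err.message ? err.message : "");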

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/nics/mlx5.rst               | 10 ++++
 doc/guides/rel_notes/release_24_03.rst |  7 +++
 drivers/net/mlx5/mlx5.h                |  6 +++
 drivers/net/mlx5/mlx5_flow.h           | 11 +++++
 drivers/net/mlx5/mlx5_flow_dv.c        |  4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 65 ++++++++++++++++++++++++++
 7 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 30027f2ba1..81a7067cc3 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -117,6 +117,7 @@ mark                 = Y
 meter                = Y
 meter_mark           = Y
 modify_field         = Y
+nat64                = Y
 nvgre_decap          = Y
 nvgre_encap          = Y
 of_pop_vlan          = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index fa013b03bb..248e4e41fa 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -168,6 +168,7 @@ Features
 - Matching on represented port.
 - Matching on aggregated affinity.
 - Matching on random value.
+- NAT64.
 
 
 Limitations
@@ -824,6 +825,15 @@ Limitations
   - Only match with compare result between packet fields is supported.
 
 
+- NAT64 action:
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - FW version: at least ``XX.39.1002``.
+  - Supported only on non-root table.
+  - The action ordering limitations follow those of the modify fields action.
+  - The last 2 TAG registers will be used implicitly in address backup mode.
+  - Even if the action can be shared, new steering entries will be created per flow rule. It is recommended to share a single rule with NAT64 to reduce the duplication of entries. The default address and other field conversions will be handled by the NAT64 action. To support other addresses, new rule(s) with modify field actions on the IP addresses should be created.
+  - TOS / Traffic Class is currently not supported.
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 619459baae..492c77ff4f 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -102,6 +102,11 @@ New Features
   * ``rte_flow_template_table_resize_complete()``.
     Complete table resize.
 
+* **Added a flow action type for NAT64.**
+
+  Added ``RTE_FLOW_ACTION_TYPE_NAT64`` to support offloading of header conversion
+  between IPv4 and IPv6.
+
 * **Updated Atomic Rules' Arkville PMD.**
 
   * Added support for Atomic Rules' TK242 packet-capture family of devices
@@ -133,6 +138,8 @@ New Features
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_SEQ_NUM`` flow action.
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_PROTO`` flow action.
 
+  * Added support for ``RTE_FLOW_ACTION_TYPE_NAT64`` flow action in HW Steering flow engine.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 544cf35069..1ad40e38e1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1986,6 +1986,12 @@ struct mlx5_priv {
 	struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. */
 	struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
 	/**< HW steering templates used to create control flow rules. */
+	/*
+	 * The NAT64 action can be shared among matchers per domain.
+	 * [0]: RTE_FLOW_NAT64_6TO4, [1]: RTE_FLOW_NAT64_4TO6
+	 * Todo: consider adding a *_MAX macro.
+	 */
+	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	uint16_t shared_refcnt; /* HW steering host reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..da13f1f210 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -159,6 +159,17 @@ struct mlx5_rte_flow_item_sq {
 	uint32_t queue; /* DevX SQ number */
 };
 
+/* Map from registers to modify fields. */
+extern enum mlx5_modification_field reg_to_field[];
+extern const size_t mlx5_mod_reg_size;
+
+static __rte_always_inline enum mlx5_modification_field
+mlx5_convert_reg_to_field(enum modify_reg reg)
+{
+	MLX5_ASSERT((size_t)reg < mlx5_mod_reg_size);
+	return reg_to_field[reg];
+}
+
 /* Feature name to allocate metadata register. */
 enum mlx5_feature_name {
 	MLX5_HAIRPIN_RX,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6fded15d91..17c405508d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -968,7 +968,7 @@ flow_dv_convert_action_modify_tcp_ack
 					     MLX5_MODIFICATION_TYPE_ADD, error);
 }
 
-static enum mlx5_modification_field reg_to_field[] = {
+enum mlx5_modification_field reg_to_field[] = {
 	[REG_NON] = MLX5_MODI_OUT_NONE,
 	[REG_A] = MLX5_MODI_META_DATA_REG_A,
 	[REG_B] = MLX5_MODI_META_DATA_REG_B,
@@ -986,6 +986,8 @@ static enum mlx5_modification_field reg_to_field[] = {
 	[REG_C_11] = MLX5_MODI_META_REG_C_11,
 };
 
+const size_t mlx5_mod_reg_size = RTE_DIM(reg_to_field);
+
 /**
  * Convert register set to DV specification.
  *
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3bb3a9a178..f53df40041 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7606,6 +7606,66 @@ flow_hw_destroy_send_to_kernel_action(struct mlx5_priv *priv)
 	}
 }
 
+static void
+flow_hw_destroy_nat64_actions(struct mlx5_priv *priv)
+{
+	uint32_t i;
+
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = NULL;
+		}
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = NULL;
+		}
+	}
+}
+
+static int
+flow_hw_create_nat64_actions(struct mlx5_priv *priv, struct rte_flow_error *error)
+{
+	struct mlx5dr_action_nat64_attr attr;
+	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
+	uint32_t i;
+	const uint32_t flags[MLX5DR_TABLE_TYPE_MAX] = {
+		MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_TX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED,
+	};
+	struct mlx5dr_action *act;
+
+	attr.registers = regs;
+	/* Try to use 3 registers by default. */
+	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
+	for (i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++) {
+		MLX5_ASSERT(priv->sh->registers.nat64_regs[i] != REG_NON);
+		regs[i] = mlx5_convert_reg_to_field(priv->sh->registers.nat64_regs[i]);
+	}
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (i == MLX5DR_TABLE_TYPE_FDB && !priv->sh->config.dv_esw_en)
+			continue;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v6 to v4 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = act;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v4 to v6 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = act;
+	}
+	return 0;
+}
+
 /**
  * Create an egress pattern template matching on source SQ.
  *
@@ -9732,6 +9792,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				   NULL, "Failed to VLAN actions.");
 		goto err;
 	}
+	if (flow_hw_create_nat64_actions(priv, error))
+		DRV_LOG(WARNING, "Cannot create NAT64 action on port %u, "
+			"please check the FW version", dev->data->port_id);
 	if (_queue_attr)
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
@@ -9764,6 +9827,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
@@ -9844,6 +9908,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 4/5] net/mlx5: add NAT64 action support in rule creation
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
                     ` (2 preceding siblings ...)
  2024-02-20 14:10   ` [PATCH v2 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
@ 2024-02-20 14:10   ` Bing Zhao
  2024-02-20 14:10   ` [PATCH v2 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
  4 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

The action will handle the IPv4 and IPv6 header translation. It will
add / remove the IPv6 address prefix by default.

To use a user-specific address, another rule modifying the IP header
addresses is needed.
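
A hedged sketch of the rule-level usage through the asynchronous API (the
template table, queue id and template indexes are placeholders). A
follow-up rule (or additional modify field actions) setting the IP
addresses, not shown here, would be needed when a prefix other than the
well-known one is required:

  struct rte_flow_action_nat64 nat64_conf = { .type = RTE_FLOW_NAT64_6TO4 };
  struct rte_flow_action_jump jump_conf = { .group = 2 };
  struct rte_flow_action rule_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_COUNT },
          { .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
          { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  struct rte_flow_item rule_pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  const struct rte_flow_op_attr op_attr = { .postpone = 0 };
  struct rte_flow *flow =
          rte_flow_async_create(port_id, 0 /* queue */, &op_attr, table,
                                rule_pattern, 0 /* pattern template index */,
                                rule_actions, 0 /* actions template index */,
                                NULL /* user data */, &err);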

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f53df40041..abe7159ad1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2492,6 +2492,19 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			if (masks->conf &&
+			    ((const struct rte_flow_action_nat64 *)masks->conf)->type) {
+				const struct rte_flow_action_nat64 *nat64_c =
+					(const struct rte_flow_action_nat64 *)actions->conf;
+
+				acts->rule_acts[dr_pos].action =
+					priv->action_nat64[type][nat64_c->type];
+			} else if (__flow_hw_act_data_general_append(priv, acts,
+								     actions->type,
+								     src_pos, dr_pos))
+				goto err;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -2934,6 +2947,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
+	const struct rte_flow_action_nat64 *nat64_c = NULL;
 	uint8_t *buf = job->encap_data;
 	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
@@ -3201,6 +3215,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (ret != 0)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			nat64_c = action->conf;
+			rule_acts[act_data->action_dst].action =
+				priv->action_nat64[table->type][nat64_c->type];
+			break;
 		default:
 			break;
 		}
@@ -5959,6 +5978,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_NAT64] = MLX5DR_ACTION_TYP_NAT64,
 };
 
 static inline void
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 5/5] net/mlx5: validate the actions combination with NAT64
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
                     ` (3 preceding siblings ...)
  2024-02-20 14:10   ` [PATCH v2 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
@ 2024-02-20 14:10   ` Bing Zhao
  4 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:10 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

NAT64 is treated as a modify header action. The action ordering and
limitations should be the same as those of the modify header action
in each domain.

Since the last 2 TAG registers will be used implicitly in the
address backup mode, the values in these registers are no longer
valid after the NAT64 action. The application should not try to
match these TAGs after a rule that contains the NAT64 action.
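
A hedged illustration of the restriction (the TAG index is only an
example): matching a TAG item in a group reached after the NAT64 rule may
observe a value overwritten by the PMD in address backup mode, so such a
match should be avoided:

  /* Group reached via "jump" from a rule with the NAT64 action. If TAG
   * index 7 happens to map to one of the last two TAG registers, the
   * value read here is no longer the one set before the translation.
   */
  struct rte_flow_item_tag tag_spec = { .index = 7, .data = 0xcafe };
  struct rte_flow_item_tag tag_mask = { .index = 0xff, .data = 0xffffffff };
  struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_TAG,
            .spec = &tag_spec, .mask = &tag_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };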

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 51 +++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index da13f1f210..c3e053d730 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -382,6 +382,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
+#define MLX5_FLOW_ACTION_NAT64 (1ull << 50)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index abe7159ad1..4d2b271210 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5725,6 +5725,50 @@ flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_validate_action_nat64(struct rte_eth_dev *dev,
+			      const struct rte_flow_actions_template_attr *attr,
+			      const struct rte_flow_action *action,
+			      const struct rte_flow_action *mask,
+			      uint64_t action_flags,
+			      struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_nat64 *nat64_c;
+	enum rte_flow_nat64_type cov_type;
+
+	RTE_SET_USED(action_flags);
+	if (mask->conf && ((const struct rte_flow_action_nat64 *)mask->conf)->type) {
+		nat64_c = (const struct rte_flow_action_nat64 *)action->conf;
+		cov_type = nat64_c->type;
+		if ((attr->ingress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][cov_type]) ||
+		    (attr->egress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][cov_type]) ||
+		    (attr->transfer && !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][cov_type]))
+			goto err_out;
+	} else {
+		/*
+		 * Usually, the actions will be used on both directions. For non-masked actions,
+		 * both directions' actions will be checked.
+		 */
+		if (attr->ingress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->egress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->transfer)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+	}
+	return 0;
+err_out:
+	return rte_flow_error_set(error, EOPNOTSUPP, RTE_FLOW_ERROR_TYPE_ACTION,
+				  NULL, "NAT64 action is not supported.");
+}
+
 static int
 mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			      const struct rte_flow_actions_template_attr *attr,
@@ -5926,6 +5970,13 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				MLX5_HW_VLAN_PUSH_VID_IDX;
 			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			ret = flow_hw_validate_action_nat64(dev, attr, action, mask,
+							    action_flags, error);
+			if (ret != 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_NAT64;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 0/5] NAT64 support in mlx5 PMD
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (9 preceding siblings ...)
  2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
@ 2024-02-20 14:37 ` Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
                     ` (5 more replies)
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
  11 siblings, 6 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

This patch set contains the mlx5 PMD implementation for NAT64.

Update in v3:
  1. code style and typo.

Update in v2:
  1. separate from the RTE and testpmd common part.
  2. reorder the commits.
  3. bug fix, code polishing and document update.

Bing Zhao (4):
  net/mlx5: fetch the available registers for NAT64
  net/mlx5: create NAT64 actions during configuration
  net/mlx5: add NAT64 action support in rule creation
  net/mlx5: validate the actions combination with NAT64

Erez Shitrit (1):
  net/mlx5/hws: support NAT64 action

 doc/guides/nics/features/mlx5.ini      |   1 +
 doc/guides/nics/mlx5.rst               |  10 +
 doc/guides/rel_notes/release_24_03.rst |   7 +
 drivers/net/mlx5/hws/mlx5dr.h          |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c   | 436 ++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h   |  35 ++
 drivers/net/mlx5/hws/mlx5dr_debug.c    |   1 +
 drivers/net/mlx5/mlx5.c                |   9 +
 drivers/net/mlx5/mlx5.h                |   8 +
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_dv.c        |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 136 ++++++++
 12 files changed, 686 insertions(+), 2 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 1/5] net/mlx5/hws: support NAT64 action
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
@ 2024-02-20 14:37   ` Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland
  Cc: Erez Shitrit

From: Erez Shitrit <erezsh@nvidia.com>

Add support for the new action mlx5dr_action_create_nat64.
The new action allows modifying IP packets from one version to the other,
IPv6 to IPv4 and vice versa.
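
As a reader's aid (a minimal sketch, not part of the patch), the new API could
be used roughly as below, assuming an existing struct mlx5dr_context *ctx; the
register values are placeholders picked for illustration, the mlx5 PMD derives
the real ones from the device's available TAG registers.

        uint8_t regs[MLX5DR_ACTION_NAT64_REG_MAX] = {
                MLX5_MODI_META_REG_C_6,  /* control: packet len / protocol / TTL */
                MLX5_MODI_META_REG_C_10, /* backup of the original source IP */
                MLX5_MODI_META_REG_C_11, /* backup of the original destination IP */
        };
        struct mlx5dr_action_nat64_attr attr = {
                .num_of_registers = MLX5DR_ACTION_NAT64_REG_MAX,
                .registers = regs,
                .flags = (enum mlx5dr_action_nat64_flags)
                         (MLX5DR_ACTION_NAT64_V6_TO_V4 |
                          MLX5DR_ACTION_NAT64_BACKUP_ADDR),
        };
        struct mlx5dr_action *act;

        /* Shared action, usable by HWS RX rules on non-root tables. */
        act = mlx5dr_action_create_nat64(ctx, &attr,
                                         MLX5DR_ACTION_FLAG_HWS_RX |
                                         MLX5DR_ACTION_FLAG_SHARED);
        if (act == NULL)
                return -rte_errno; /* rte_errno is set by the creation path */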

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h        |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c | 436 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |  35 +++
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   1 +
 4 files changed, 500 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 9c5b068c93..557fc1eef5 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -51,6 +51,7 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
 	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_NAT64,
 	MLX5DR_ACTION_TYP_MAX,
 };
 
@@ -817,6 +818,34 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 				       uint32_t log_bulk_size,
 				       uint32_t flags);
 
+enum mlx5dr_action_nat64_flags {
+	MLX5DR_ACTION_NAT64_V4_TO_V6 = 1 << 0,
+	MLX5DR_ACTION_NAT64_V6_TO_V4 = 1 << 1,
+	/* Indicates if to backup ipv4 addresses in last two registers */
+	MLX5DR_ACTION_NAT64_BACKUP_ADDR = 1 << 2,
+};
+
+struct mlx5dr_action_nat64_attr {
+	uint8_t num_of_registers;
+	uint8_t *registers;
+	enum mlx5dr_action_nat64_flags flags;
+};
+
+/* Create direct rule nat64 action.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] attr
+ *	The relevant attribute of the NAT action.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..d9091b9f72 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -31,6 +31,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -52,6 +53,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -75,6 +77,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -246,6 +249,310 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action,
 		mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB);
 }
 
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_state(struct mlx5dr_context *ctx,
+				      struct mlx5dr_action_nat64_attr *attr,
+				      uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* | 8 bit - 8 bit     - 16 bit     |
+	 * | ttl   - protocol  - packet-len |
+	 */
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);/* 16 bits in the lsb */
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* set sip and dip to 0, in order to have new csum */
+	if (is_v4_to_v6) {
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_SIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_DIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = (action_ptr - (uint8_t *)modify_action_data);
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create copy for NAT64: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_repalce_state(struct mlx5dr_context *ctx,
+					 struct mlx5dr_action_nat64_attr *attr,
+					 uint32_t flags)
+{
+	uint32_t address_prefix[MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE] = {0};
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	static struct mlx5dr_action *action;
+	uint8_t header_size_in_dw;
+	uint8_t *action_ptr;
+	uint32_t eth_type;
+	bool is_v4_to_v6;
+	uint32_t ip_ver;
+	int i;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		uint32_t nat64_well_known_pref[] = {0x00010000,
+						    0x9bff6400, 0x0, 0x0, 0x0,
+						    0x9bff6400, 0x0, 0x0, 0x0};
+
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV6_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV6_VER;
+		eth_type = RTE_ETHER_TYPE_IPV6;
+		memcpy(address_prefix, nat64_well_known_pref,
+		       MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE * sizeof(uint32_t));
+	} else {
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV4_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV4_VER;
+		eth_type = RTE_ETHER_TYPE_IPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_ETHERTYPE);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	MLX5_SET(set_action_in, action_ptr, data, eth_type);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* Push an empty header with the target IP version (v6 or v4). */
+	MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_INSERT);
+	MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument, ip_ver);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	for (i = 0; i < header_size_in_dw - 1; i++) {
+		MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+				MLX5_MODIFICATION_TYPE_INSERT);
+		MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+				MLX5_HEADER_ANCHOR_IPV6_IPV4);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument,
+			 htobe32(address_prefix[i]));
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Remove orig src/dst addr (8 bytes, 4 words) */
+	MLX5_SET(stc_ste_param_remove, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_REMOVE);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_start_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_end_anchor,
+		 MLX5_HEADER_ANCHOR_TCP_UDP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *)modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_back_state(struct mlx5dr_context *ctx,
+					   struct mlx5dr_action_nat64_attr *attr,
+					   uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint32_t packet_len_add;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		 /* 2' comp to 20, to get -20 in add operation */
+		packet_len_add = MLX5DR_ACTION_NAT64_DEC_20;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		/* ipv4 len is including 20 bytes of the header, so add 20 over ipv6 len */
+		packet_len_add = MLX5DR_ACTION_NAT64_ADD_20;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 32);
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* if required Copy original addresses */
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* take/add off 20 bytes ipv4/6 from/to the total size */
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_ADD);
+	MLX5_SET(set_action_in, action_ptr, field, packet_len_field);
+	MLX5_SET(set_action_in, action_ptr, data, packet_len_add);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *)modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
 static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions)
 {
 	DR_LOG(ERR, "Invalid action_type sequence");
@@ -2526,6 +2833,94 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+static bool
+mlx5dr_action_nat64_validate_param(struct mlx5dr_action_nat64_attr *attr,
+				   uint32_t flags)
+{
+	if (mlx5dr_action_is_root_flags(flags)) {
+		DR_LOG(ERR, "Nat64 action not supported for root");
+		rte_errno = ENOTSUP;
+		return false;
+	}
+
+	if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) {
+		DR_LOG(ERR, "Nat64 action must be with SHARED flag");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->num_of_registers > MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 action doesn't support more than %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR &&
+	    attr->num_of_registers != MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 backup addr requires %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (!(attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6 ||
+	      attr->flags & MLX5DR_ACTION_NAT64_V6_TO_V4)) {
+		DR_LOG(ERR, "Nat64 requires at least one translation mode");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	return true;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags)
+{
+	struct mlx5dr_action *action;
+
+	if (!mlx5dr_action_nat64_validate_param(attr, flags))
+		return NULL;
+
+	action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_NAT64);
+	if (!action)
+		return NULL;
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY] =
+		mlx5dr_action_create_nat64_copy_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]) {
+		DR_LOG(ERR, "Nat64 failed creating copy state");
+		goto free_action;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE] =
+		mlx5dr_action_create_nat64_repalce_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]) {
+		DR_LOG(ERR, "Nat64 failed creating replace state");
+		goto free_copy;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK] =
+		mlx5dr_action_create_nat64_copy_back_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK]) {
+		DR_LOG(ERR, "Nat64 failed creating copyback state");
+		goto free_replace;
+	}
+
+	return action;
+
+
+free_replace:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]);
+free_copy:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]);
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 {
 	struct mlx5dr_devx_obj *obj = NULL;
@@ -2600,6 +2995,10 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 			if (action->ipv6_route_ext.action[i])
 				mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
 		break;
+	case MLX5DR_ACTION_TYP_NAT64:
+		for (i = 0; i < MLX5DR_ACTION_NAT64_STAGES; i++)
+			mlx5dr_action_destroy(action->nat64.stages[i]);
+		break;
 	}
 }
 
@@ -2874,6 +3273,28 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply,
 	}
 }
 
+static void
+mlx5dr_action_setter_nat64(struct mlx5dr_actions_apply_data *apply,
+			   struct mlx5dr_actions_wqe_setter *setter)
+{
+	struct mlx5dr_rule_action *rule_action;
+	struct mlx5dr_action *cur_stage_action;
+	struct mlx5dr_action *action;
+	uint32_t stc_idx;
+
+	rule_action = &apply->rule_action[setter->idx_double];
+	action = rule_action->action;
+	cur_stage_action = action->nat64.stages[setter->stage_idx];
+
+	stc_idx = htobe32(cur_stage_action->stc[apply->tbl_type].offset);
+
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = stc_idx;
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
 static void
 mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply,
 				struct mlx5dr_actions_wqe_setter *setter)
@@ -3174,7 +3595,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 	struct mlx5dr_actions_wqe_setter *setter = at->setters;
 	struct mlx5dr_actions_wqe_setter *pop_setter = NULL;
 	struct mlx5dr_actions_wqe_setter *last_setter;
-	int i;
+	int i, j;
 
 	/* Note: Given action combination must be valid */
 
@@ -3361,6 +3782,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 			setter->idx_ctr = i;
 			break;
 
+		case MLX5DR_ACTION_TYP_NAT64:
+			/* NAT64 requires 3 setters, each of them does specific modify header */
+			for (j = 0; j < MLX5DR_ACTION_NAT64_STAGES; j++) {
+				setter = mlx5dr_action_setter_find_first(last_setter,
+									 ASF_DOUBLE | ASF_REMOVE);
+				setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+				setter->set_double = &mlx5dr_action_setter_nat64;
+				setter->idx_double = i;
+				/* The stage indicates which modify-header to push */
+				setter->stage_idx = j;
+			}
+			break;
+
 		default:
 			DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index fad35a845b..49c2a9bc6b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -11,6 +11,9 @@
 /* Max number of internal subactions of ipv6_ext */
 #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
 
+/* Number of MH in NAT64 */
+#define MLX5DR_ACTION_NAT64_STAGES 3
+
 enum mlx5dr_action_stc_idx {
 	MLX5DR_ACTION_STC_IDX_CTRL = 0,
 	MLX5DR_ACTION_STC_IDX_HIT = 1,
@@ -68,6 +71,34 @@ enum mlx5dr_action_stc_reparse {
 	MLX5DR_ACTION_STC_REPARSE_OFF,
 };
 
+ /* 2' comp to 20, to get -20 in add operation */
+#define MLX5DR_ACTION_NAT64_DEC_20 0xffffffec
+
+enum {
+	MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS = 20,
+	MLX5DR_ACTION_NAT64_ADD_20 = 20,
+	MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE = 9,
+	MLX5DR_ACTION_NAT64_IPV6_HEADER = 10,
+	MLX5DR_ACTION_NAT64_IPV4_HEADER = 5,
+	MLX5DR_ACTION_NAT64_IPV6_VER = 0x60000000,
+	MLX5DR_ACTION_NAT64_IPV4_VER = 0x45000000,
+};
+
+/* 3 stages for the nat64 action */
+enum mlx5dr_action_nat64_stages {
+	MLX5DR_ACTION_NAT64_STAGE_COPY = 0,
+	MLX5DR_ACTION_NAT64_STAGE_REPLACE = 1,
+	MLX5DR_ACTION_NAT64_STAGE_COPYBACK = 2,
+};
+
+/* Registers for keeping data from stage to stage */
+enum {
+	MLX5DR_ACTION_NAT64_REG_CONTROL = 0,
+	MLX5DR_ACTION_NAT64_REG_SRC_IP = 1,
+	MLX5DR_ACTION_NAT64_REG_DST_IP = 2,
+	MLX5DR_ACTION_NAT64_REG_MAX = 3,
+};
+
 struct mlx5dr_action_default_stc {
 	struct mlx5dr_pool_chunk nop_ctr;
 	struct mlx5dr_pool_chunk nop_dw5;
@@ -109,6 +140,7 @@ struct mlx5dr_actions_wqe_setter {
 	uint8_t idx_double;
 	uint8_t idx_ctr;
 	uint8_t idx_hit;
+	uint8_t stage_idx;
 	uint8_t flags;
 	uint8_t extra_data;
 };
@@ -182,6 +214,9 @@ struct mlx5dr_action {
 					uint8_t num_of_words;
 					bool decap;
 				} remove_header;
+				struct {
+					struct mlx5dr_action *stages[MLX5DR_ACTION_NAT64_STAGES];
+				} nat64;
 			};
 		};
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 11557bcab8..39e168d556 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -28,6 +28,7 @@ const char *mlx5dr_debug_action_type_str[] = {
 	[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
 	[MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT",
 	[MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT",
+	[MLX5DR_ACTION_TYP_NAT64] = "NAT64",
 };
 
 static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 2/5] net/mlx5: fetch the available registers for NAT64
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
@ 2024-02-20 14:37   ` Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

REG_C_6 is used as the 1st register and, since it is reserved
internally by default, there is no impact.

The remaining 2 registers will be fetched from the available TAGs
array from right to left (for example, if the array ends with REG_C_10
and REG_C_11, the selected set becomes REG_C_6, REG_C_10 and REG_C_11).
They will not be masked in the array, because not all the rules will
use the NAT64 action.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 9 +++++++++
 drivers/net/mlx5/mlx5.h | 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 881c42a97a..9c3b9946e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1644,6 +1644,15 @@ mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
 		if (!!((1 << i) & masks))
 			reg->hw_avl_tags[j++] = mlx5_regc_value(i);
 	}
+	/*
+	 * Set the registers for NAT64 usage internally. REG_C_6 is always used.
+	 * The other 2 registers will be fetched from right to left, at least 2
+	 * tag registers should be available.
+	 */
+	MLX5_ASSERT(j >= (MLX5_FLOW_NAT64_REGS_MAX - 1));
+	reg->nat64_regs[0] = REG_C_6;
+	reg->nat64_regs[1] = reg->hw_avl_tags[j - 2];
+	reg->nat64_regs[2] = reg->hw_avl_tags[j - 1];
 }
 
 static void
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5265d1aa1f..544cf35069 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1407,10 +1407,12 @@ struct mlx5_hws_cnt_svc_mng {
 };
 
 #define MLX5_FLOW_HW_TAGS_MAX 12
+#define MLX5_FLOW_NAT64_REGS_MAX 3
 
 struct mlx5_dev_registers {
 	enum modify_reg aso_reg;
 	enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
+	enum modify_reg nat64_regs[MLX5_FLOW_NAT64_REGS_MAX];
 };
 
 #if defined(HAVE_MLX5DV_DR) && \
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 3/5] net/mlx5: create NAT64 actions during configuration
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
@ 2024-02-20 14:37   ` Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

The NAT64 DR actions can be shared among the tables. All these
actions can be created while configuring the flow queues and saved
for future usage.

Even though the actions can be shared now, the actual hardware
resources are unique per each flow rule.
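
For context, an application-side sketch (illustrative only, not part of the
patch): the shared NAT64 actions are instantiated inside the port
configuration step of the asynchronous flow API, so a call like the one below
has to succeed before any template or rule using NAT64 is created; with mlx5
the dv_flow_en=2 devarg is also required. port_id is assumed to be a valid
port and the attribute values are arbitrary examples.

        struct rte_flow_port_attr port_attr = { 0 };
        struct rte_flow_queue_attr queue_attr = { .size = 64 };
        const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };
        struct rte_flow_error error;

        /* The PMD creates its shared NAT64 DR actions inside this call. */
        if (rte_flow_configure(port_id, &port_attr, RTE_DIM(queue_attrs),
                               queue_attrs, &error) != 0)
                rte_exit(EXIT_FAILURE, "rte_flow_configure: %s\n",
                         error.message ? error.message : "unknown");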

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/nics/mlx5.rst               | 10 ++++
 doc/guides/rel_notes/release_24_03.rst |  7 +++
 drivers/net/mlx5/mlx5.h                |  6 +++
 drivers/net/mlx5/mlx5_flow.h           | 11 +++++
 drivers/net/mlx5/mlx5_flow_dv.c        |  4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 65 ++++++++++++++++++++++++++
 7 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 30027f2ba1..81a7067cc3 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -117,6 +117,7 @@ mark                 = Y
 meter                = Y
 meter_mark           = Y
 modify_field         = Y
+nat64                = Y
 nvgre_decap          = Y
 nvgre_encap          = Y
 of_pop_vlan          = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index fa013b03bb..248e4e41fa 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -168,6 +168,7 @@ Features
 - Matching on represented port.
 - Matching on aggregated affinity.
 - Matching on random value.
+- NAT64.
 
 
 Limitations
@@ -824,6 +825,15 @@ Limitations
   - Only match with compare result between packet fields is supported.
 
 
+- NAT64 action:
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - FW version: at least ``XX.39.1002``.
+  - Supported only on non-root table.
+  - The action order limitation is the same as that of the modify fields action.
+  - The last 2 TAG registers will be used implicitly in address backup mode.
+  - Even though the action can be shared, new steering entries will be created per flow rule. It is recommended to share a single rule with NAT64 to reduce the duplication of entries. The default address and other fields conversion will be handled by the NAT64 action. To support another address, new rule(s) with modify fields on the IP addresses should be created.
+  - TOS / Traffic Class is not supported now.
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 619459baae..492c77ff4f 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -102,6 +102,11 @@ New Features
   * ``rte_flow_template_table_resize_complete()``.
     Complete table resize.
 
+* **Added a flow action type for NAT64.**
+
+  Added ``RTE_FLOW_ACTION_TYPE_NAT64`` to support offloading of header conversion
+  between IPv4 and IPv6.
+
 * **Updated Atomic Rules' Arkville PMD.**
 
   * Added support for Atomic Rules' TK242 packet-capture family of devices
@@ -133,6 +138,8 @@ New Features
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_SEQ_NUM`` flow action.
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_PROTO`` flow action.
 
+  * Added support for ``RTE_FLOW_ACTION_TYPE_NAT64`` flow action in HW Steering flow engine.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 544cf35069..1ad40e38e1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1986,6 +1986,12 @@ struct mlx5_priv {
 	struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. */
 	struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
 	/**< HW steering templates used to create control flow rules. */
+	/*
+	 * The NAT64 action can be shared among matchers per domain.
+	 * [0]: RTE_FLOW_NAT64_6TO4, [1]: RTE_FLOW_NAT64_4TO6
+	 * Todo: consider to add *_MAX macro.
+	 */
+	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	uint16_t shared_refcnt; /* HW steering host reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..af41fd2112 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -159,6 +159,17 @@ struct mlx5_rte_flow_item_sq {
 	uint32_t queue; /* DevX SQ number */
 };
 
+/* Map from registers to modify fields. */
+extern enum mlx5_modification_field reg_to_field[];
+extern const size_t mlx5_mod_reg_size;
+
+static __rte_always_inline enum mlx5_modification_field
+mlx5_convert_reg_to_field(enum modify_reg reg)
+{
+	MLX5_ASSERT((size_t)reg < mlx5_mod_reg_size);
+	return reg_to_field[reg];
+}
+
 /* Feature name to allocate metadata register. */
 enum mlx5_feature_name {
 	MLX5_HAIRPIN_RX,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6fded15d91..17c405508d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -968,7 +968,7 @@ flow_dv_convert_action_modify_tcp_ack
 					     MLX5_MODIFICATION_TYPE_ADD, error);
 }
 
-static enum mlx5_modification_field reg_to_field[] = {
+enum mlx5_modification_field reg_to_field[] = {
 	[REG_NON] = MLX5_MODI_OUT_NONE,
 	[REG_A] = MLX5_MODI_META_DATA_REG_A,
 	[REG_B] = MLX5_MODI_META_DATA_REG_B,
@@ -986,6 +986,8 @@ static enum mlx5_modification_field reg_to_field[] = {
 	[REG_C_11] = MLX5_MODI_META_REG_C_11,
 };
 
+const size_t mlx5_mod_reg_size = RTE_DIM(reg_to_field);
+
 /**
  * Convert register set to DV specification.
  *
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3bb3a9a178..386f6d1ae1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7606,6 +7606,66 @@ flow_hw_destroy_send_to_kernel_action(struct mlx5_priv *priv)
 	}
 }
 
+static void
+flow_hw_destroy_nat64_actions(struct mlx5_priv *priv)
+{
+	uint32_t i;
+
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = NULL;
+		}
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = NULL;
+		}
+	}
+}
+
+static int
+flow_hw_create_nat64_actions(struct mlx5_priv *priv, struct rte_flow_error *error)
+{
+	struct mlx5dr_action_nat64_attr attr;
+	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
+	uint32_t i;
+	const uint32_t flags[MLX5DR_TABLE_TYPE_MAX] = {
+		MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_TX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED,
+	};
+	struct mlx5dr_action *act;
+
+	attr.registers = regs;
+	/* Try to use 3 registers by default. */
+	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
+	for (i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++) {
+		MLX5_ASSERT(priv->sh->registers.nat64_regs[i] != REG_NON);
+		regs[i] = mlx5_convert_reg_to_field(priv->sh->registers.nat64_regs[i]);
+	}
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (i == MLX5DR_TABLE_TYPE_FDB && !priv->sh->config.dv_esw_en)
+			continue;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v6 to v4 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = act;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v4 to v6 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = act;
+	}
+	return 0;
+}
+
 /**
  * Create an egress pattern template matching on source SQ.
  *
@@ -9732,6 +9792,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				   NULL, "Failed to VLAN actions.");
 		goto err;
 	}
+	if (flow_hw_create_nat64_actions(priv, error))
+		DRV_LOG(WARNING, "Cannot create NAT64 action on port %u, "
+			"please check the FW version", dev->data->port_id);
 	if (_queue_attr)
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
@@ -9764,6 +9827,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
@@ -9844,6 +9908,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 4/5] net/mlx5: add NAT64 action support in rule creation
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
                     ` (2 preceding siblings ...)
  2024-02-20 14:37   ` [PATCH v3 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
@ 2024-02-20 14:37   ` Bing Zhao
  2024-02-20 14:37   ` [PATCH v3 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
  2024-02-21 13:14   ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Ori Kam
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

The action will handle the translation between the IPv4 and IPv6
headers. It will add / remove the IPv6 address prefix by default.

To use a user-specific address, another rule that modifies the
addresses of the IP header is needed.
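
For illustration (a hedged sketch, not taken from the patch), a template-based
application could fix the translation direction per actions template roughly as
below, letting the PMD resolve the shared per-domain action when the template
is translated; port_id, at_attr, jump_conf and error are assumed to be set up
elsewhere, and user-specific addresses would come from an additional rule or
table applying modify_field on the IP addresses.

        static const struct rte_flow_action_nat64 nat64_conf = {
                .type = RTE_FLOW_NAT64_6TO4,
        };
        /* A non-zero type in the mask marks the direction as fixed. */
        static const struct rte_flow_action_nat64 nat64_mask = {
                .type = (enum rte_flow_nat64_type)0xff,
        };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        const struct rte_flow_action masks[] = {
                { .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_mask },
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_actions_template *at;

        at = rte_flow_actions_template_create(port_id, &at_attr,
                                              actions, masks, &error);
        if (at == NULL)
                return -rte_errno;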

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 386f6d1ae1..a2e2c6769a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2492,6 +2492,19 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			if (masks->conf &&
+			    ((const struct rte_flow_action_nat64 *)masks->conf)->type) {
+				const struct rte_flow_action_nat64 *nat64_c =
+					(const struct rte_flow_action_nat64 *)actions->conf;
+
+				acts->rule_acts[dr_pos].action =
+					priv->action_nat64[type][nat64_c->type];
+			} else if (__flow_hw_act_data_general_append(priv, acts,
+								     actions->type,
+								     src_pos, dr_pos))
+				goto err;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -2934,6 +2947,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
+	const struct rte_flow_action_nat64 *nat64_c = NULL;
 	uint8_t *buf = job->encap_data;
 	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
@@ -3201,6 +3215,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (ret != 0)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			nat64_c = action->conf;
+			rule_acts[act_data->action_dst].action =
+				priv->action_nat64[table->type][nat64_c->type];
+			break;
 		default:
 			break;
 		}
@@ -5959,6 +5978,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_NAT64] = MLX5DR_ACTION_TYP_NAT64,
 };
 
 static inline void
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 5/5] net/mlx5: validate the actions combination with NAT64
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
                     ` (3 preceding siblings ...)
  2024-02-20 14:37   ` [PATCH v3 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
@ 2024-02-20 14:37   ` Bing Zhao
  2024-02-21 13:14   ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Ori Kam
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-20 14:37 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, andrew.rybchenko, dev, rasland

NAT64 is treated as a modify header action. The action order and
limitation should be the same as that of modify header in each
domain.

Since the last 2 TAG registers will be used implicitly in the
address backup mode, the values in these registers are no longer
valid after the NAT64 action. The application should not try to
match these TAGs after the rule that contains NAT64 action.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 51 +++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index af41fd2112..52994fa3ee 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -382,6 +382,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
+#define MLX5_FLOW_ACTION_NAT64 (1ull << 50)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a2e2c6769a..2057528c84 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5725,6 +5725,50 @@ flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_validate_action_nat64(struct rte_eth_dev *dev,
+			      const struct rte_flow_actions_template_attr *attr,
+			      const struct rte_flow_action *action,
+			      const struct rte_flow_action *mask,
+			      uint64_t action_flags,
+			      struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_nat64 *nat64_c;
+	enum rte_flow_nat64_type cov_type;
+
+	RTE_SET_USED(action_flags);
+	if (mask->conf && ((const struct rte_flow_action_nat64 *)mask->conf)->type) {
+		nat64_c = (const struct rte_flow_action_nat64 *)action->conf;
+		cov_type = nat64_c->type;
+		if ((attr->ingress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][cov_type]) ||
+		    (attr->egress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][cov_type]) ||
+		    (attr->transfer && !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][cov_type]))
+			goto err_out;
+	} else {
+		/*
+		 * Usually, the actions will be used on both directions. For non-masked actions,
+		 * both directions' actions will be checked.
+		 */
+		if (attr->ingress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->egress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->transfer)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+	}
+	return 0;
+err_out:
+	return rte_flow_error_set(error, EOPNOTSUPP, RTE_FLOW_ERROR_TYPE_ACTION,
+				  NULL, "NAT64 action is not supported.");
+}
+
 static int
 mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			      const struct rte_flow_actions_template_attr *attr,
@@ -5926,6 +5970,13 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				MLX5_HW_VLAN_PUSH_VID_IDX;
 			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			ret = flow_hw_validate_action_nat64(dev, attr, action, mask,
+							    action_flags, error);
+			if (ret != 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_NAT64;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v3 0/5] NAT64 support in mlx5 PMD
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
                     ` (4 preceding siblings ...)
  2024-02-20 14:37   ` [PATCH v3 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
@ 2024-02-21 13:14   ` Ori Kam
  5 siblings, 0 replies; 36+ messages in thread
From: Ori Kam @ 2024-02-21 13:14 UTC (permalink / raw)
  To: Bing Zhao, aman.deep.singh, Dariusz Sosnowski, Slava Ovsiienko,
	Suanming Mou, Matan Azrad, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ferruh.yigit, andrew.rybchenko, dev, Raslan Darawsheh



> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Tuesday, February 20, 2024 4:37 PM
> 
> This patch set contains the mlx5 PMD implementation for NAT64.
> 
> Update in v3:
>   1. code style and typo.
> 
> Update in v2:
>   1. separate from the RTE and testpmd common part.
>   2. reorder the commits.
>   3. bug fix, code polishing and document update.
> 
> Bing Zhao (4):
>   net/mlx5: fetch the available registers for NAT64
>   net/mlx5: create NAT64 actions during configuration
>   net/mlx5: add NAT64 action support in rule creation
>   net/mlx5: validate the actions combination with NAT64
> 
> Erez Shitrit (1):
>   net/mlx5/hws: support NAT64 action
> 
>  doc/guides/nics/features/mlx5.ini      |   1 +
>  doc/guides/nics/mlx5.rst               |  10 +
>  doc/guides/rel_notes/release_24_03.rst |   7 +
>  drivers/net/mlx5/hws/mlx5dr.h          |  29 ++
>  drivers/net/mlx5/hws/mlx5dr_action.c   | 436 ++++++++++++++++++++++++-
>  drivers/net/mlx5/hws/mlx5dr_action.h   |  35 ++
>  drivers/net/mlx5/hws/mlx5dr_debug.c    |   1 +
>  drivers/net/mlx5/mlx5.c                |   9 +
>  drivers/net/mlx5/mlx5.h                |   8 +
>  drivers/net/mlx5/mlx5_flow.h           |  12 +
>  drivers/net/mlx5/mlx5_flow_dv.c        |   4 +-
>  drivers/net/mlx5/mlx5_flow_hw.c        | 136 ++++++++
>  12 files changed, 686 insertions(+), 2 deletions(-)
> 
> --
> 2.34.1

Series-acked-by:  Ori Kam <orika@nvidia.com>
Best,
Ori


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 0/5] NAT64 support in mlx5 PMD
  2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
                   ` (10 preceding siblings ...)
  2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
@ 2024-02-28 15:09 ` Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
                     ` (5 more replies)
  11 siblings, 6 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko

This patch set contains the mlx5 PMD implementation for NAT64.

Series-acked-by: Ori Kam <orika@nvidia.com>

Update in v4:
  1. rebase to solve the conflicts.
  2. fix the old NIC startup issue in a separate patch:
     https://patches.dpdk.org/project/dpdk/patch/20240227152627.25749-1-bingz@nvidia.com/

Update in v3:
  1. code style and typo.

Update in v2:
  1. separate from the RTE and testpmd common part.
  2. reorder the commits.
  3. bug fix, code polishing and document update.

Bing Zhao (4):
  net/mlx5: fetch the available registers for NAT64
  net/mlx5: create NAT64 actions during configuration
  net/mlx5: add NAT64 action support in rule creation
  net/mlx5: validate the actions combination with NAT64

Erez Shitrit (1):
  net/mlx5/hws: support NAT64 action

 doc/guides/nics/features/mlx5.ini      |   1 +
 doc/guides/nics/mlx5.rst               |  10 +
 doc/guides/rel_notes/release_24_03.rst |   7 +
 drivers/net/mlx5/hws/mlx5dr.h          |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c   | 436 ++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h   |  35 ++
 drivers/net/mlx5/hws/mlx5dr_debug.c    |   1 +
 drivers/net/mlx5/mlx5.c                |   9 +
 drivers/net/mlx5/mlx5.h                |  11 +
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_dv.c        |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 136 ++++++++
 12 files changed, 689 insertions(+), 2 deletions(-)

-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 1/5] net/mlx5/hws: support NAT64 action
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
@ 2024-02-28 15:09   ` Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko, Erez Shitrit

From: Erez Shitrit <erezsh@nvidia.com>

Add support for the new action mlx5dr_action_create_nat64.
The new action allows modifying IP packets from one version to the other,
IPv6 to IPv4 and vice versa.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h        |  29 ++
 drivers/net/mlx5/hws/mlx5dr_action.c | 436 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |  35 +++
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   1 +
 4 files changed, 500 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d612f300c6..8441ae97e9 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -51,6 +51,7 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
 	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_NAT64,
 	MLX5DR_ACTION_TYP_MAX,
 };
 
@@ -868,6 +869,34 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 				       uint32_t log_bulk_size,
 				       uint32_t flags);
 
+enum mlx5dr_action_nat64_flags {
+	MLX5DR_ACTION_NAT64_V4_TO_V6 = 1 << 0,
+	MLX5DR_ACTION_NAT64_V6_TO_V4 = 1 << 1,
+	/* Indicates if to backup ipv4 addresses in last two registers */
+	MLX5DR_ACTION_NAT64_BACKUP_ADDR = 1 << 2,
+};
+
+struct mlx5dr_action_nat64_attr {
+	uint8_t num_of_registers;
+	uint8_t *registers;
+	enum mlx5dr_action_nat64_flags flags;
+};
+
+/* Create direct rule nat64 action.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] attr
+ *	The relevant attribute of the NAT action.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 631763dee0..96cad553aa 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -31,6 +31,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -52,6 +53,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -75,6 +77,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_ASO_CT),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
+		BIT(MLX5DR_ACTION_TYP_NAT64),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
 		BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
@@ -246,6 +249,310 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action,
 		mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB);
 }
 
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_state(struct mlx5dr_context *ctx,
+				      struct mlx5dr_action_nat64_attr *attr,
+				      uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, dst_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* | 8 bit - 8 bit     - 16 bit     |
+	 * | ttl   - protocol  - packet-len |
+	 */
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0); /* 16 bits in the LSB */
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* set sip and dip to 0, in order to have new csum */
+	if (is_v4_to_v6) {
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_SIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+		MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_DIPV4);
+		MLX5_SET(set_action_in, action_ptr, data, 0);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = (action_ptr - (uint8_t *)modify_action_data);
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create copy for NAT64: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_replace_state(struct mlx5dr_context *ctx,
+					 struct mlx5dr_action_nat64_attr *attr,
+					 uint32_t flags)
+{
+	uint32_t address_prefix[MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE] = {0};
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint8_t header_size_in_dw;
+	uint8_t *action_ptr;
+	uint32_t eth_type;
+	bool is_v4_to_v6;
+	uint32_t ip_ver;
+	int i;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		uint32_t nat64_well_known_pref[] = {0x00010000,
+						    0x9bff6400, 0x0, 0x0, 0x0,
+						    0x9bff6400, 0x0, 0x0, 0x0};
+
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV6_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV6_VER;
+		eth_type = RTE_ETHER_TYPE_IPV6;
+		memcpy(address_prefix, nat64_well_known_pref,
+		       MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE * sizeof(uint32_t));
+	} else {
+		header_size_in_dw = MLX5DR_ACTION_NAT64_IPV4_HEADER;
+		ip_ver = MLX5DR_ACTION_NAT64_IPV4_VER;
+		eth_type = RTE_ETHER_TYPE_IPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, action_ptr, field, MLX5_MODI_OUT_ETHERTYPE);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	MLX5_SET(set_action_in, action_ptr, data, eth_type);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* Push an empty header with the right IP version */
+	MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_INSERT);
+	MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+	MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument, ip_ver);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	for (i = 0; i < header_size_in_dw - 1; i++) {
+		MLX5_SET(stc_ste_param_insert, action_ptr, action_type,
+				MLX5_MODIFICATION_TYPE_INSERT);
+		MLX5_SET(stc_ste_param_insert, action_ptr, inline_data, 0x1);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_anchor,
+				MLX5_HEADER_ANCHOR_IPV6_IPV4);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_size, 2);
+		MLX5_SET(stc_ste_param_insert, action_ptr, insert_argument,
+			 htobe32(address_prefix[i]));
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Remove orig src/dst addr (8 bytes, 4 words) */
+	MLX5_SET(stc_ste_param_remove, action_ptr, action_type,
+		 MLX5_MODIFICATION_TYPE_REMOVE);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_start_anchor,
+		 MLX5_HEADER_ANCHOR_IPV6_IPV4);
+	MLX5_SET(stc_ste_param_remove, action_ptr, remove_end_anchor,
+		 MLX5_HEADER_ANCHOR_TCP_UDP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *)modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_nat64_copy_back_state(struct mlx5dr_context *ctx,
+					   struct mlx5dr_action_nat64_attr *attr,
+					   uint32_t flags)
+{
+	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	struct mlx5dr_action_mh_pattern pat[2];
+	struct mlx5dr_action *action;
+	uint32_t packet_len_field;
+	uint32_t packet_len_add;
+	uint8_t *action_ptr;
+	uint32_t ttl_field;
+	uint32_t src_addr;
+	uint32_t dst_addr;
+	bool is_v4_to_v6;
+
+	is_v4_to_v6 = attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6;
+
+	if (is_v4_to_v6) {
+		packet_len_field = MLX5_MODI_OUT_IPV6_PAYLOAD_LEN;
+		/* 2's complement of 20, to get -20 in the add operation */
+		packet_len_add = MLX5DR_ACTION_NAT64_DEC_20;
+		ttl_field = MLX5_MODI_OUT_IPV6_HOPLIMIT;
+		src_addr = MLX5_MODI_OUT_SIPV6_31_0;
+		dst_addr = MLX5_MODI_OUT_DIPV6_31_0;
+	} else {
+		packet_len_field = MLX5_MODI_OUT_IPV4_TOTAL_LEN;
+		/* IPv4 total length includes the 20-byte header, so add 20 to the IPv6 length */
+		packet_len_add = MLX5DR_ACTION_NAT64_ADD_20;
+		ttl_field = MLX5_MODI_OUT_IPV4_TTL;
+		src_addr = MLX5_MODI_OUT_SIPV4;
+		dst_addr = MLX5_MODI_OUT_DIPV4;
+	}
+
+	memset(modify_action_data, 0, sizeof(modify_action_data));
+	action_ptr = (uint8_t *)modify_action_data;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 packet_len_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 32);
+	MLX5_SET(copy_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field,
+		 MLX5_MODI_OUT_IP_PROTOCOL);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 16);
+	MLX5_SET(copy_action_in, action_ptr, dst_offset, 0);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action_ptr, src_field,
+		 attr->registers[MLX5DR_ACTION_NAT64_REG_CONTROL]);
+	MLX5_SET(copy_action_in, action_ptr, dst_field, ttl_field);
+	MLX5_SET(copy_action_in, action_ptr, src_offset, 24);
+	MLX5_SET(copy_action_in, action_ptr, length, 8);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_NOP);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	/* If required, copy the original addresses back */
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR) {
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_SRC_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, src_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+		MLX5_SET(copy_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, action_ptr, src_field,
+			 attr->registers[MLX5DR_ACTION_NAT64_REG_DST_IP]);
+		MLX5_SET(copy_action_in, action_ptr, dst_field, dst_addr);
+		MLX5_SET(copy_action_in, action_ptr, src_offset, 0);
+		MLX5_SET(copy_action_in, action_ptr, length, 32);
+		action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+	}
+
+	/* Subtract or add 20 bytes (IPv4 header size) from/to the total size */
+	MLX5_SET(set_action_in, action_ptr, action_type, MLX5_MODIFICATION_TYPE_ADD);
+	MLX5_SET(set_action_in, action_ptr, field, packet_len_field);
+	MLX5_SET(set_action_in, action_ptr, data, packet_len_add);
+	MLX5_SET(set_action_in, action_ptr, length, 16);
+	action_ptr += MLX5DR_ACTION_DOUBLE_SIZE;
+
+	pat[0].data = modify_action_data;
+	pat[0].sz = action_ptr - (uint8_t *)modify_action_data;
+
+	action = mlx5dr_action_create_modify_header(ctx, 1, pat, 0, flags);
+	if (!action) {
+		DR_LOG(ERR, "Failed to create action: action_sz: %zu, flags: 0x%x\n",
+		       pat[0].sz, flags);
+		return NULL;
+	}
+
+	return action;
+}
+
 static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions)
 {
 	DR_LOG(ERR, "Invalid action_type sequence");
@@ -2530,6 +2837,94 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+static bool
+mlx5dr_action_nat64_validate_param(struct mlx5dr_action_nat64_attr *attr,
+				   uint32_t flags)
+{
+	if (mlx5dr_action_is_root_flags(flags)) {
+		DR_LOG(ERR, "Nat64 action not supported for root");
+		rte_errno = ENOTSUP;
+		return false;
+	}
+
+	if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) {
+		DR_LOG(ERR, "Nat64 action must be with SHARED flag");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->num_of_registers > MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 action doesn't support more than %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (attr->flags & MLX5DR_ACTION_NAT64_BACKUP_ADDR &&
+	    attr->num_of_registers != MLX5DR_ACTION_NAT64_REG_MAX) {
+		DR_LOG(ERR, "Nat64 backup addr requires %d registers",
+		       MLX5DR_ACTION_NAT64_REG_MAX);
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	if (!(attr->flags & MLX5DR_ACTION_NAT64_V4_TO_V6 ||
+	      attr->flags & MLX5DR_ACTION_NAT64_V6_TO_V4)) {
+		DR_LOG(ERR, "Nat64 action requires at least one translation mode");
+		rte_errno = EINVAL;
+		return false;
+	}
+
+	return true;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_nat64(struct mlx5dr_context *ctx,
+			   struct mlx5dr_action_nat64_attr *attr,
+			   uint32_t flags)
+{
+	struct mlx5dr_action *action;
+
+	if (!mlx5dr_action_nat64_validate_param(attr, flags))
+		return NULL;
+
+	action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_NAT64);
+	if (!action)
+		return NULL;
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY] =
+		mlx5dr_action_create_nat64_copy_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]) {
+		DR_LOG(ERR, "Nat64 failed creating copy state");
+		goto free_action;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE] =
+		mlx5dr_action_create_nat64_replace_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]) {
+		DR_LOG(ERR, "Nat64 failed creating replace state");
+		goto free_copy;
+	}
+
+	action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK] =
+		mlx5dr_action_create_nat64_copy_back_state(ctx, attr, flags);
+	if (!action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPYBACK]) {
+		DR_LOG(ERR, "Nat64 failed creating copyback state");
+		goto free_replace;
+	}
+
+	return action;
+
+
+free_replace:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_REPLACE]);
+free_copy:
+	mlx5dr_action_destroy(action->nat64.stages[MLX5DR_ACTION_NAT64_STAGE_COPY]);
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 {
 	struct mlx5dr_devx_obj *obj = NULL;
@@ -2604,6 +2999,10 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 			if (action->ipv6_route_ext.action[i])
 				mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
 		break;
+	case MLX5DR_ACTION_TYP_NAT64:
+		for (i = 0; i < MLX5DR_ACTION_NAT64_STAGES; i++)
+			mlx5dr_action_destroy(action->nat64.stages[i]);
+		break;
 	}
 }
 
@@ -2878,6 +3277,28 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply,
 	}
 }
 
+static void
+mlx5dr_action_setter_nat64(struct mlx5dr_actions_apply_data *apply,
+			   struct mlx5dr_actions_wqe_setter *setter)
+{
+	struct mlx5dr_rule_action *rule_action;
+	struct mlx5dr_action *cur_stage_action;
+	struct mlx5dr_action *action;
+	uint32_t stc_idx;
+
+	rule_action = &apply->rule_action[setter->idx_double];
+	action = rule_action->action;
+	cur_stage_action = action->nat64.stages[setter->stage_idx];
+
+	stc_idx = htobe32(cur_stage_action->stc[apply->tbl_type].offset);
+
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = stc_idx;
+	apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+	apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
 static void
 mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply,
 				struct mlx5dr_actions_wqe_setter *setter)
@@ -3178,7 +3599,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 	struct mlx5dr_actions_wqe_setter *setter = at->setters;
 	struct mlx5dr_actions_wqe_setter *pop_setter = NULL;
 	struct mlx5dr_actions_wqe_setter *last_setter;
-	int i;
+	int i, j;
 
 	/* Note: Given action combination must be valid */
 
@@ -3366,6 +3787,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 			setter->idx_ctr = i;
 			break;
 
+		case MLX5DR_ACTION_TYP_NAT64:
+			/* NAT64 requires 3 setters, each performing a specific modify-header stage */
+			for (j = 0; j < MLX5DR_ACTION_NAT64_STAGES; j++) {
+				setter = mlx5dr_action_setter_find_first(last_setter,
+									 ASF_DOUBLE | ASF_REMOVE);
+				setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+				setter->set_double = &mlx5dr_action_setter_nat64;
+				setter->idx_double = i;
+				/* The stage indicates which modify-header to push */
+				setter->stage_idx = j;
+			}
+			break;
+
 		default:
 			DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 0c8e4bbb5a..064c18a90c 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -11,6 +11,9 @@
 /* Max number of internal subactions of ipv6_ext */
 #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
 
+/* Number of modify-header (MH) stages in NAT64 */
+#define MLX5DR_ACTION_NAT64_STAGES 3
+
 enum mlx5dr_action_stc_idx {
 	MLX5DR_ACTION_STC_IDX_CTRL = 0,
 	MLX5DR_ACTION_STC_IDX_HIT = 1,
@@ -68,6 +71,34 @@ enum mlx5dr_action_stc_reparse {
 	MLX5DR_ACTION_STC_REPARSE_OFF,
 };
 
+/* 2's complement of 20, to get -20 in the add operation */
+#define MLX5DR_ACTION_NAT64_DEC_20 0xffffffec
+
+enum {
+	MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS = 20,
+	MLX5DR_ACTION_NAT64_ADD_20 = 20,
+	MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE = 9,
+	MLX5DR_ACTION_NAT64_IPV6_HEADER = 10,
+	MLX5DR_ACTION_NAT64_IPV4_HEADER = 5,
+	MLX5DR_ACTION_NAT64_IPV6_VER = 0x60000000,
+	MLX5DR_ACTION_NAT64_IPV4_VER = 0x45000000,
+};
+
+/* 3 stages for the nat64 action */
+enum mlx5dr_action_nat64_stages {
+	MLX5DR_ACTION_NAT64_STAGE_COPY = 0,
+	MLX5DR_ACTION_NAT64_STAGE_REPLACE = 1,
+	MLX5DR_ACTION_NAT64_STAGE_COPYBACK = 2,
+};
+
+/* Registers for keeping data from stage to stage */
+enum {
+	MLX5DR_ACTION_NAT64_REG_CONTROL = 0,
+	MLX5DR_ACTION_NAT64_REG_SRC_IP = 1,
+	MLX5DR_ACTION_NAT64_REG_DST_IP = 2,
+	MLX5DR_ACTION_NAT64_REG_MAX = 3,
+};
+
 struct mlx5dr_action_default_stc {
 	struct mlx5dr_pool_chunk nop_ctr;
 	struct mlx5dr_pool_chunk nop_dw5;
@@ -109,6 +140,7 @@ struct mlx5dr_actions_wqe_setter {
 	uint8_t idx_double;
 	uint8_t idx_ctr;
 	uint8_t idx_hit;
+	uint8_t stage_idx;
 	uint8_t flags;
 	uint8_t extra_data;
 };
@@ -184,6 +216,9 @@ struct mlx5dr_action {
 					uint8_t num_of_words;
 					bool decap;
 				} remove_header;
+				struct {
+					struct mlx5dr_action *stages[MLX5DR_ACTION_NAT64_STAGES];
+				} nat64;
 			};
 		};
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index a9094cd35b..8f07c7fd66 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -28,6 +28,7 @@ const char *mlx5dr_debug_action_type_str[] = {
 	[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
 	[MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT",
 	[MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT",
+	[MLX5DR_ACTION_TYP_NAT64] = "NAT64",
 };
 
 static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 2/5] net/mlx5: fetch the available registers for NAT64
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
@ 2024-02-28 15:09   ` Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko

REG_C_6 is used as the 1st register and, since it is already reserved
internally by default, there is no impact.

The remaining 2 registers are fetched from the array of available TAG
registers, from right to left. They are not masked out of the array
because not all rules will use the NAT64 action.
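
For illustration, a hedged example of the resulting selection, assuming the
available TAG registers reported for the device are REG_C_8..REG_C_11 (the
actual set depends on the device and FW configuration):

	/* hw_avl_tags[] = { REG_C_8, REG_C_9, REG_C_10, REG_C_11 }, j == 4 */
	reg->nat64_regs[0] = REG_C_6;	/* always used, reserved internally */
	reg->nat64_regs[1] = REG_C_10;	/* hw_avl_tags[j - 2], 2nd to last TAG */
	reg->nat64_regs[2] = REG_C_11;	/* hw_avl_tags[j - 1], last TAG */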

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 9 +++++++++
 drivers/net/mlx5/mlx5.h | 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f2ca0ae4c2..cc7cd6adf5 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1644,6 +1644,15 @@ mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
 		if (!!((1 << i) & masks))
 			reg->hw_avl_tags[j++] = mlx5_regc_value(i);
 	}
+	/*
+	 * Set the registers for NAT64 usage internally. REG_C_6 is always used.
+	 * The other 2 registers will be fetched from right to left; at least 2
+	 * TAG registers should be available.
+	 */
+	MLX5_ASSERT(j >= (MLX5_FLOW_NAT64_REGS_MAX - 1));
+	reg->nat64_regs[0] = REG_C_6;
+	reg->nat64_regs[1] = reg->hw_avl_tags[j - 2];
+	reg->nat64_regs[2] = reg->hw_avl_tags[j - 1];
 }
 
 static void
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 99850a58af..ee17a30454 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1407,10 +1407,12 @@ struct mlx5_hws_cnt_svc_mng {
 };
 
 #define MLX5_FLOW_HW_TAGS_MAX 12
+#define MLX5_FLOW_NAT64_REGS_MAX 3
 
 struct mlx5_dev_registers {
 	enum modify_reg aso_reg;
 	enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
+	enum modify_reg nat64_regs[MLX5_FLOW_NAT64_REGS_MAX];
 };
 
 #if defined(HAVE_MLX5DV_DR) && \
-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 3/5] net/mlx5: create NAT64 actions during configuration
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
@ 2024-02-28 15:09   ` Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko

The NAT64 DR actions can be shared among the tables. All these
actions are created when the flow queues are configured and saved
for future use.

Even though the actions can be shared, the actual hardware resources
used inside each flow rule are unique.
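
For context, a hedged sketch (not part of the patch) of where this happens
from the application's point of view: the shared NAT64 DR actions are
instantiated inside rte_flow_configure(), i.e. flow_hw_configure(), before
any template table is created; port_id and the attribute values below are
illustrative:

	struct rte_flow_port_attr port_attr = { 0 };
	struct rte_flow_queue_attr queue_attr = { .size = 64 };
	const struct rte_flow_queue_attr *qattr[1] = { &queue_attr };
	struct rte_flow_error error;

	/* flow_hw_configure() runs here and, among other resources, creates
	 * the shared NAT64 actions for every enabled table type and both
	 * translation directions.
	 */
	if (rte_flow_configure(port_id, &port_attr, 1, qattr, &error) != 0)
		return -1;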

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/nics/mlx5.rst               | 10 ++++
 doc/guides/rel_notes/release_24_03.rst |  7 +++
 drivers/net/mlx5/mlx5.h                |  9 ++++
 drivers/net/mlx5/mlx5_flow.h           | 11 +++++
 drivers/net/mlx5/mlx5_flow_dv.c        |  4 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 65 ++++++++++++++++++++++++++
 7 files changed, 106 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 30027f2ba1..81a7067cc3 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -117,6 +117,7 @@ mark                 = Y
 meter                = Y
 meter_mark           = Y
 modify_field         = Y
+nat64                = Y
 nvgre_decap          = Y
 nvgre_encap          = Y
 of_pop_vlan          = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 329b98f68f..c0294f268d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -168,6 +168,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
+- NAT64.
 
 
 Limitations
@@ -886,6 +887,15 @@ Limitations
   if preceding active application rules are still present and vice versa.
 
 
+- NAT64 action:
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - FW version: at least ``XX.39.1002``.
+  - Supported only on non-root tables.
+  - The action ordering limitations are the same as for the modify field action.
+  - The last 2 TAG registers will be used implicitly in address backup mode.
+  - Even though the action can be shared, new steering entries will be created per flow rule. It is recommended to share a single rule with NAT64 to reduce the duplication of entries. The default address and other fields conversion will be handled with the NAT64 action. To support other addresses, new rule(s) with modify field actions on the IP addresses should be created.
+  - TOS / Traffic Class is currently not supported.
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 23ac6568ac..744f530ead 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -101,6 +101,11 @@ New Features
   * ``rte_flow_template_table_resize_complete()``.
     Complete table resize.
 
+* **Added a flow action type for NAT64.**
+
+  Added ``RTE_FLOW_ACTION_TYPE_NAT64`` to support offloading of header conversion
+  between IPv4 and IPv6.
+
 * **Updated Atomic Rules' Arkville driver.**
 
   * Added support for Atomic Rules' TK242 packet-capture family of devices
@@ -145,6 +150,8 @@ New Features
     to support TLS v1.2, TLS v1.3 and DTLS v1.2.
   * Added PMD API to allow raw submission of instructions to CPT.
 
+  * Added support for ``RTE_FLOW_ACTION_TYPE_NAT64`` flow action in HW Steering flow engine.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ee17a30454..c47712a146 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1997,7 +1997,16 @@ struct mlx5_priv {
 	struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. */
 	struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
 	/**< HW steering templates used to create control flow rules. */
+
 	struct rte_flow_actions_template *action_template_drop[MLX5DR_TABLE_TYPE_MAX];
+
+	/*
+	 * The NAT64 action can be shared among matchers per domain.
+	 * [0]: RTE_FLOW_NAT64_6TO4, [1]: RTE_FLOW_NAT64_4TO6
+	 * TODO: consider adding a *_MAX macro.
+	 */
+	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
+
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	uint16_t shared_refcnt; /* HW steering host reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 187f440893..897a283716 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -169,6 +169,17 @@ struct mlx5_rte_flow_item_sq {
 	uint32_t queue; /* DevX SQ number */
 };
 
+/* Map from registers to modify fields. */
+extern enum mlx5_modification_field reg_to_field[];
+extern const size_t mlx5_mod_reg_size;
+
+static __rte_always_inline enum mlx5_modification_field
+mlx5_convert_reg_to_field(enum modify_reg reg)
+{
+	MLX5_ASSERT((size_t)reg < mlx5_mod_reg_size);
+	return reg_to_field[reg];
+}
+
 /* Feature name to allocate metadata register. */
 enum mlx5_feature_name {
 	MLX5_HAIRPIN_RX,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ddf19e9a51..18f09b22be 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -971,7 +971,7 @@ flow_dv_convert_action_modify_tcp_ack
 					     MLX5_MODIFICATION_TYPE_ADD, error);
 }
 
-static enum mlx5_modification_field reg_to_field[] = {
+enum mlx5_modification_field reg_to_field[] = {
 	[REG_NON] = MLX5_MODI_OUT_NONE,
 	[REG_A] = MLX5_MODI_META_DATA_REG_A,
 	[REG_B] = MLX5_MODI_META_DATA_REG_B,
@@ -989,6 +989,8 @@ static enum mlx5_modification_field reg_to_field[] = {
 	[REG_C_11] = MLX5_MODI_META_REG_C_11,
 };
 
+const size_t mlx5_mod_reg_size = RTE_DIM(reg_to_field);
+
 /**
  * Convert register set to DV specification.
  *
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 969f0dc85a..77f0aff91e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7832,6 +7832,66 @@ flow_hw_destroy_send_to_kernel_action(struct mlx5_priv *priv)
 	}
 }
 
+static void
+flow_hw_destroy_nat64_actions(struct mlx5_priv *priv)
+{
+	uint32_t i;
+
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = NULL;
+		}
+		if (priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]) {
+			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]);
+			priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = NULL;
+		}
+	}
+}
+
+static int
+flow_hw_create_nat64_actions(struct mlx5_priv *priv, struct rte_flow_error *error)
+{
+	struct mlx5dr_action_nat64_attr attr;
+	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
+	uint32_t i;
+	const uint32_t flags[MLX5DR_TABLE_TYPE_MAX] = {
+		MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_TX | MLX5DR_ACTION_FLAG_SHARED,
+		MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED,
+	};
+	struct mlx5dr_action *act;
+
+	attr.registers = regs;
+	/* Try to use 3 registers by default. */
+	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
+	for (i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++) {
+		MLX5_ASSERT(priv->sh->registers.nat64_regs[i] != REG_NON);
+		regs[i] = mlx5_convert_reg_to_field(priv->sh->registers.nat64_regs[i]);
+	}
+	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
+		if (i == MLX5DR_TABLE_TYPE_FDB && !priv->sh->config.dv_esw_en)
+			continue;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v6 to v4 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = act;
+		attr.flags = (enum mlx5dr_action_nat64_flags)
+			     (MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
+		act = mlx5dr_action_create_nat64(priv->dr_ctx, &attr, flags[i]);
+		if (!act)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						  "Failed to create v4 to v6 action.");
+		priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = act;
+	}
+	return 0;
+}
+
 /**
  * Create an egress pattern template matching on source SQ.
  *
@@ -10033,6 +10093,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				   NULL, "Failed to VLAN actions.");
 		goto err;
 	}
+	if (flow_hw_create_nat64_actions(priv, error))
+		DRV_LOG(WARNING, "Cannot create NAT64 action on port %u, "
+			"please check the FW version", dev->data->port_id);
 	if (_queue_attr)
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
@@ -10066,6 +10129,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
@@ -10147,6 +10211,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
+	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_destroy_vlan(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 4/5] net/mlx5: add NAT64 action support in rule creation
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
                     ` (2 preceding siblings ...)
  2024-02-28 15:09   ` [PATCH v4 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
@ 2024-02-28 15:09   ` Bing Zhao
  2024-02-28 15:09   ` [PATCH v4 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
  2024-02-29 10:39   ` [PATCH v4 0/5] NAT64 support in mlx5 PMD Raslan Darawsheh
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko

The action handles the translation between IPv4 and IPv6 headers. It
adds / removes the well-known IPv6 address prefix by default.

To use a user-specific address, another rule that modifies the IP
header addresses is needed.
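
As a hedged illustration (not from the patch), the per-rule action array an
application could pass through the asynchronous template API; the fate
action and the surrounding template/table setup are omitted:

	/* The direction is chosen per rule here; with a masked actions
	 * template it would be fixed at template creation time instead.
	 */
	const struct rte_flow_action_nat64 nat64_conf = {
		.type = RTE_FLOW_NAT64_6TO4,	/* translate IPv6 headers to IPv4 */
	};
	const struct rte_flow_action rule_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
		/* ... a fate action (e.g. jump or queue) ... */
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};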

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 77f0aff91e..f32bdff98f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2499,6 +2499,19 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			if (masks->conf &&
+			    ((const struct rte_flow_action_nat64 *)masks->conf)->type) {
+				const struct rte_flow_action_nat64 *nat64_c =
+					(const struct rte_flow_action_nat64 *)actions->conf;
+
+				acts->rule_acts[dr_pos].action =
+					priv->action_nat64[type][nat64_c->type];
+			} else if (__flow_hw_act_data_general_append(priv, acts,
+								     actions->type,
+								     src_pos, dr_pos))
+				goto err;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
@@ -2941,6 +2954,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
+	const struct rte_flow_action_nat64 *nat64_c = NULL;
 	uint8_t *buf = job->encap_data;
 	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
@@ -3208,6 +3222,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (ret != 0)
 				return ret;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			nat64_c = action->conf;
+			rule_acts[act_data->action_dst].action =
+				priv->action_nat64[table->type][nat64_c->type];
+			break;
 		default:
 			break;
 		}
@@ -6099,6 +6118,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_NAT64] = MLX5DR_ACTION_TYP_NAT64,
 };
 
 static inline void
-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 5/5] net/mlx5: validate the actions combination with NAT64
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
                     ` (3 preceding siblings ...)
  2024-02-28 15:09   ` [PATCH v4 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
@ 2024-02-28 15:09   ` Bing Zhao
  2024-02-29 10:39   ` [PATCH v4 0/5] NAT64 support in mlx5 PMD Raslan Darawsheh
  5 siblings, 0 replies; 36+ messages in thread
From: Bing Zhao @ 2024-02-28 15:09 UTC (permalink / raw)
  To: orika, aman.deep.singh, dsosnowski, viacheslavo, suanmingm,
	matan, thomas, ferruh.yigit, dev, rasland
  Cc: yuying.zhang, andrew.rybchenko

NAT64 is treated as a modify header action. The action ordering and
limitations are the same as those of the modify header action in
each domain.

Since the last 2 TAG registers are used implicitly in the address
backup mode, the values in these registers are no longer valid
after the NAT64 action. The application should not try to match
these TAGs after the rule that contains the NAT64 action.
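
For illustration, a hedged sketch (not from the patch) of an actions
template whose NAT64 direction is fixed by a non-zero mask, so the
validation added in this patch only needs to check that single direction
against the template attributes; port_id is assumed to be valid:

	const struct rte_flow_action_nat64 nat64_conf = { .type = RTE_FLOW_NAT64_4TO6 };
	/* A non-zero mask type fixes the direction for the whole template. */
	const struct rte_flow_action_nat64 nat64_mask = { .type = RTE_FLOW_NAT64_4TO6 };
	const struct rte_flow_action tmpl_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action tmpl_masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_NAT64, .conf = &nat64_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
	struct rte_flow_error err;
	struct rte_flow_actions_template *at;

	at = rte_flow_actions_template_create(port_id, &at_attr,
					      tmpl_actions, tmpl_masks, &err);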

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 51 +++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 897a283716..ea428a8c21 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -392,6 +392,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
 #define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
+#define MLX5_FLOW_ACTION_NAT64 (1ull << 50)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f32bdff98f..7730bcab6f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5866,6 +5866,50 @@ flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_validate_action_nat64(struct rte_eth_dev *dev,
+			      const struct rte_flow_actions_template_attr *attr,
+			      const struct rte_flow_action *action,
+			      const struct rte_flow_action *mask,
+			      uint64_t action_flags,
+			      struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_nat64 *nat64_c;
+	enum rte_flow_nat64_type cov_type;
+
+	RTE_SET_USED(action_flags);
+	if (mask->conf && ((const struct rte_flow_action_nat64 *)mask->conf)->type) {
+		nat64_c = (const struct rte_flow_action_nat64 *)action->conf;
+		cov_type = nat64_c->type;
+		if ((attr->ingress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][cov_type]) ||
+		    (attr->egress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][cov_type]) ||
+		    (attr->transfer && !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][cov_type]))
+			goto err_out;
+	} else {
+		/*
+		 * Usually, the action will be used in both directions. For non-masked actions,
+		 * both directions' actions are checked.
+		 */
+		if (attr->ingress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->egress)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+		if (attr->transfer)
+			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_6TO4] ||
+			    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][RTE_FLOW_NAT64_4TO6])
+				goto err_out;
+	}
+	return 0;
+err_out:
+	return rte_flow_error_set(error, EOPNOTSUPP, RTE_FLOW_ERROR_TYPE_ACTION,
+				  NULL, "NAT64 action is not supported.");
+}
+
 static int
 mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			      const struct rte_flow_actions_template_attr *attr,
@@ -6066,6 +6110,13 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				MLX5_HW_VLAN_PUSH_VID_IDX;
 			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
 			break;
+		case RTE_FLOW_ACTION_TYPE_NAT64:
+			ret = flow_hw_validate_action_nat64(dev, attr, action, mask,
+							    action_flags, error);
+			if (ret != 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_NAT64;
+			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
 			break;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v4 0/5] NAT64 support in mlx5 PMD
  2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
                     ` (4 preceding siblings ...)
  2024-02-28 15:09   ` [PATCH v4 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
@ 2024-02-29 10:39   ` Raslan Darawsheh
  5 siblings, 0 replies; 36+ messages in thread
From: Raslan Darawsheh @ 2024-02-29 10:39 UTC (permalink / raw)
  To: Bing Zhao, Ori Kam, aman.deep.singh, Dariusz Sosnowski,
	Slava Ovsiienko, Suanming Mou, Matan Azrad,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	ferruh.yigit, dev
  Cc: yuying.zhang, andrew.rybchenko

Hi,

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Wednesday, February 28, 2024 5:09 PM
> To: Ori Kam <orika@nvidia.com>; aman.deep.singh@intel.com; Dariusz
> Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>;
> Matan Azrad <matan@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; ferruh.yigit@amd.com;
> dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Cc: yuying.zhang@intel.com; andrew.rybchenko@oktetlabs.ru
> Subject: [PATCH v4 0/5] NAT64 support in mlx5 PMD
> 
> This patch set contains the mlx5 PMD implementation for NAT64.
> 
> Series-acked-by: Ori Kam <orika@nvidia.com>
> 
> Update in v4:
>   1. rebase to solve the conflicts.
>   2. fix the old NIC startup issue in a separate patch:
>      https://patches.dpdk.org/project/dpdk/patch/20240227152627.25749-
> 1-bingz@nvidia.com/
> 
> Update in v3:
>   1. code style and typo.
> 
> Update in v2:
>   1. separate from the RTE and testpmd common part.
>   2. reorder the commits.
>   3. bug fix, code polishing and document update.
> 
> Bing Zhao (4):
>   net/mlx5: fetch the available registers for NAT64
>   net/mlx5: create NAT64 actions during configuration
>   net/mlx5: add NAT64 action support in rule creation
>   net/mlx5: validate the actions combination with NAT64
> 
> Erez Shitrit (1):
>   net/mlx5/hws: support NAT64 action
> 
>  doc/guides/nics/features/mlx5.ini      |   1 +
>  doc/guides/nics/mlx5.rst               |  10 +
>  doc/guides/rel_notes/release_24_03.rst |   7 +
>  drivers/net/mlx5/hws/mlx5dr.h          |  29 ++
>  drivers/net/mlx5/hws/mlx5dr_action.c   | 436
> ++++++++++++++++++++++++-
>  drivers/net/mlx5/hws/mlx5dr_action.h   |  35 ++
>  drivers/net/mlx5/hws/mlx5dr_debug.c    |   1 +
>  drivers/net/mlx5/mlx5.c                |   9 +
>  drivers/net/mlx5/mlx5.h                |  11 +
>  drivers/net/mlx5/mlx5_flow.h           |  12 +
>  drivers/net/mlx5/mlx5_flow_dv.c        |   4 +-
>  drivers/net/mlx5/mlx5_flow_hw.c        | 136 ++++++++
>  12 files changed, 689 insertions(+), 2 deletions(-)
> 
> --
> 2.39.3
Series applied to next-net-mlx,
Kindest regards
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2024-02-29 10:40 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-27  9:07 [PATCH 0/8] support NAT64 action Bing Zhao
2023-12-27  9:07 ` [PATCH 1/8] ethdev: introduce " Bing Zhao
2023-12-27  9:07 ` [PATCH 2/8] app/testpmd: add support for NAT64 in the command line Bing Zhao
2023-12-27  9:07 ` [PATCH 3/8] net/mlx5: fetch the available registers for NAT64 Bing Zhao
2023-12-27  9:07 ` [PATCH 4/8] common/mlx5: add new modify field defininations Bing Zhao
2023-12-27  9:07 ` [PATCH 5/8] net/mlx5/hws: support NAT64 action Bing Zhao
2023-12-27  9:07 ` [PATCH 6/8] net/mlx5: create NAT64 actions during configuration Bing Zhao
2023-12-27  9:07 ` [PATCH 7/8] net/mlx5: add NAT64 action support in rule creation Bing Zhao
2023-12-27  9:07 ` [PATCH 8/8] net/mlx5: validate the actions combination with NAT64 Bing Zhao
2024-01-31  9:38 ` [PATCH v2 0/2] support NAT64 action Bing Zhao
2024-01-31  9:38   ` [PATCH v2 1/2] ethdev: introduce " Bing Zhao
2024-02-01  8:38     ` Ori Kam
2024-01-31  9:38   ` [PATCH v2 2/2] app/testpmd: add support for NAT64 in the command line Bing Zhao
2024-02-01  8:38     ` Ori Kam
2024-02-01 16:00   ` [PATCH v2 0/2] support NAT64 action Ferruh Yigit
2024-02-01 16:05     ` Ferruh Yigit
2024-02-20 14:10 ` [PATCH v2 0/5] NAT64 support in mlx5 PMD Bing Zhao
2024-02-20 14:10   ` [PATCH v2 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
2024-02-20 14:10   ` [PATCH v2 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
2024-02-20 14:10   ` [PATCH v2 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
2024-02-20 14:10   ` [PATCH v2 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
2024-02-20 14:10   ` [PATCH v2 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
2024-02-20 14:37 ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Bing Zhao
2024-02-20 14:37   ` [PATCH v3 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
2024-02-20 14:37   ` [PATCH v3 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
2024-02-20 14:37   ` [PATCH v3 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
2024-02-20 14:37   ` [PATCH v3 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
2024-02-20 14:37   ` [PATCH v3 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
2024-02-21 13:14   ` [PATCH v3 0/5] NAT64 support in mlx5 PMD Ori Kam
2024-02-28 15:09 ` [PATCH v4 " Bing Zhao
2024-02-28 15:09   ` [PATCH v4 1/5] net/mlx5/hws: support NAT64 action Bing Zhao
2024-02-28 15:09   ` [PATCH v4 2/5] net/mlx5: fetch the available registers for NAT64 Bing Zhao
2024-02-28 15:09   ` [PATCH v4 3/5] net/mlx5: create NAT64 actions during configuration Bing Zhao
2024-02-28 15:09   ` [PATCH v4 4/5] net/mlx5: add NAT64 action support in rule creation Bing Zhao
2024-02-28 15:09   ` [PATCH v4 5/5] net/mlx5: validate the actions combination with NAT64 Bing Zhao
2024-02-29 10:39   ` [PATCH v4 0/5] NAT64 support in mlx5 PMD Raslan Darawsheh
