DPDK patches and discussions
* [PATCH 00/12] crypto/dpaax_sec: misc enhancements
@ 2023-08-23  7:08 Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 01/12] common/dpaax: update IPsec base descriptor length Hemant Agrawal
                   ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

This series includes misc enhancements in the dpaax_sec drivers.

- improving the IPsec protocol offload features
- enhancing PDCP protocol processing
- code optimization and cleanup

Apeksha Gupta (1):
  crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set

Gagandeep Singh (3):
  common/dpaax: update IPsec base descriptor length
  common/dpaax: change mode to wait in shared desc
  crypto/dpaax_sec: set the authdata in non-auth case

Hemant Agrawal (7):
  crypto/dpaa2_sec: supporting null cipher and auth
  crypto/dpaa_sec: supporting null cipher and auth
  crypto/dpaa2_sec: support copy df and dscp in proto offload
  crypto/dpaa2_sec: increase the anti replay window size
  crypto/dpaa2_sec: enable esn support
  crypto/dpaa2_sec: add NAT-T support in IPsec offload
  crypto/dpaa2_sec: add support to set df and diffserv

Vanshika Shukla (1):
  crypto/dpaa2_sec: initialize the pdcp alg to null

 drivers/common/dpaax/caamflib/desc/ipsec.h    |   4 +-
 drivers/common/dpaax/caamflib/desc/pdcp.h     |  82 +++---
 .../dpaax/caamflib/rta/sec_run_time_asm.h     |   2 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   | 234 ++++++++++--------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h     |  64 ++++-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  47 +---
 drivers/crypto/dpaa_sec/dpaa_sec.c            |   5 +
 drivers/crypto/dpaa_sec/dpaa_sec.h            |  42 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   3 +-
 9 files changed, 294 insertions(+), 189 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 01/12] common/dpaax: update IPsec base descriptor length
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 02/12] common/dpaax: change mode to wait in shared desc Hemant Agrawal
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Gagandeep Singh, Franck LENORMAND

From: Gagandeep Singh <g.singh@nxp.com>

If all the keys are inlined, the descriptor size would be
32 + 20 = 52, which is the size of the currently created shared
descriptor.

So 32 * CAAM_CMD_SZ is the value that must be passed to
rta_inline_query() as its "sd_base_len" parameter, and the drivers
pass the IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN value as that first
argument.

Therefore, the value of IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN must be
updated to 32 * CAAM_CMD_SZ.
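
For context, a rough sketch of the caller side showing where
"sd_base_len" enters the picture; the key lengths, the
DESC_JOB_IO_LEN constant and the key count below are illustrative
assumptions, not copied from this series:

	unsigned int key_lens[2] = { cipher_keylen, auth_keylen };
	uint32_t inl_mask = 0;
	int err;

	/* sd_base_len must cover the fixed part of the shared
	 * descriptor, i.e. 32 * CAAM_CMD_SZ as described above.
	 */
	err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
			       DESC_JOB_IO_LEN, key_lens, &inl_mask, 2);
	if (err < 0)
		return err;

	/* Bit i of inl_mask tells whether key i fits inline. */
	cipherdata.key_type = (inl_mask & (1 << 0)) ?
				RTA_DATA_IMM : RTA_DATA_PTR;
	authdata.key_type = (inl_mask & (1 << 1)) ?
				RTA_DATA_IMM : RTA_DATA_PTR;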

Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/common/dpaax/caamflib/desc/ipsec.h           | 4 ++--
 drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/ipsec.h b/drivers/common/dpaax/caamflib/desc/ipsec.h
index 8ec6aac915..14e80baf77 100644
--- a/drivers/common/dpaax/caamflib/desc/ipsec.h
+++ b/drivers/common/dpaax/caamflib/desc/ipsec.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016,2019-2020 NXP
+ * Copyright 2016,2019-2022 NXP
  *
  */
 
@@ -1380,7 +1380,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
  * layers to determine whether keys can be inlined or not. To be used as first
  * parameter of rta_inline_query().
  */
-#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(31 * CAAM_CMD_SZ)
 
 /**
  * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
diff --git a/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h b/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
index f40eaadea3..5c2efeb2c5 100644
--- a/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
+++ b/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
@@ -413,7 +413,7 @@ rta_program_finalize(struct program *program)
 {
 	/* Descriptor is usually not allowed to go beyond 64 words size */
 	if (program->current_pc > MAX_CAAM_DESCSIZE)
-		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+		pr_debug("Descriptor Size exceeded max limit of 64 words");
 
 	/* Descriptor is erroneous */
 	if (program->first_error_pc) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 02/12] common/dpaax: change mode to wait in shared desc
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 01/12] common/dpaax: update IPsec base descriptor length Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 03/12] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

In case of protocol-based offload, it is better to wait until the
shared descriptor completes execution before another job reuses it.
Simultaneous sharing may cause issues, so the sharing mode is
changed from SHR_ALWAYS to SHR_WAIT.
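
For reference, the sharing mode is programmed in the shared
descriptor header; the snippet below is from the descriptor
construction code touched by this patch. Roughly, SHR_WAIT makes a
job wait until the shared descriptor is released, instead of
executing it while another job is still using it (which SHR_ALWAYS
would allow):

	if (authdata)
		SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
	else
		SHR_HDR(p, SHR_WAIT, 0, 0);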

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/common/dpaax/caamflib/desc/pdcp.h | 82 +++++++++++------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 289ee2a7d5..7d16c66d79 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
  * Copyright 2008-2013 Freescale Semiconductor, Inc.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
  */
 
 #ifndef __DESC_PDCP_H__
@@ -2338,27 +2338,27 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT	/* ZUC-I */
 		},
 	};
 	enum pdb_type_e pdb_type;
@@ -2478,27 +2478,27 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT	/* ZUC-I */
 		},
 	};
 	enum pdb_type_e pdb_type;
@@ -2643,24 +2643,24 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
@@ -2677,7 +2677,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
 	if (authdata)
 		SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
 	else
-		SHR_HDR(p, SHR_ALWAYS, 0, 0);
+		SHR_HDR(p, SHR_WAIT, 0, 0);
 	pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn,
 					  bearer, direction, hfn_threshold,
 					  cipherdata, authdata);
@@ -2828,24 +2828,24 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
@@ -2862,7 +2862,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
 	if (authdata)
 		SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
 	else
-		SHR_HDR(p, SHR_ALWAYS, 0, 0);
+		SHR_HDR(p, SHR_WAIT, 0, 0);
 
 	pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer,
 					  direction, hfn_threshold,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 03/12] crypto/dpaa2_sec: initialize the pdcp alg to null
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 01/12] common/dpaax: update IPsec base descriptor length Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 02/12] common/dpaax: change mode to wait in shared desc Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 04/12] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch initializes the PDCP auth algorithm type to NULL for the
no-authentication case.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 5ccfcbd7a6..c2b836d716 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2022 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -3512,6 +3512,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 		session->auth_key.data = NULL;
 		session->auth_key.length = 0;
 		session->auth_alg = 0;
+		authdata.algtype = PDCP_AUTH_TYPE_NULL;
 	}
 	authdata.key = (size_t)session->auth_key.data;
 	authdata.keylen = session->auth_key.length;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 04/12] crypto/dpaa2_sec: supporting null cipher and auth
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (2 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 03/12] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 05/12] crypto/dpaa_sec: " Hemant Agrawal
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

IPsec protocol offload supports NULL algorithms in combination
cases, so add NULL cipher and NULL auth to the security
capabilities. Unsupported combinations are already guarded in the
code.
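
As a hypothetical application-side illustration (the cipher
algorithm, key and IV offset are placeholders, not taken from this
patch), an xform chain relying on the newly advertised NULL auth
entry could look like:

	struct rte_crypto_sym_xform auth_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AUTH,
		.next = NULL,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_NULL,	/* no integrity */
		},
	};
	struct rte_crypto_sym_xform cipher_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.next = &auth_xform,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,	/* placeholder */
			.key = { .data = key, .length = 16 },
			.iv = { .offset = IV_OFFSET, .length = 16 },
		},
	};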

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 43 +++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f84d2caf43..5a4eb8e2ed 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020-2022 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -878,7 +878,46 @@ static const struct rte_cryptodev_capabilities dpaa2_pdcp_capabilities[] = {
 			}, }
 		}, }
 	},
-
+	{	/* NULL (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+			}, },
+		}, },
+	},
+	{	/* NULL (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, },
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 05/12] crypto/dpaa_sec: supporting null cipher and auth
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (3 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 04/12] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 06/12] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

Adding NULL cipher and auth in capabilities.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa_sec/dpaa_sec.h | 42 +++++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 412a9da942..eff6dcf311 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016-2022 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -782,6 +782,46 @@ static const struct rte_cryptodev_capabilities dpaa_sec_capabilities[] = {
 			}, }
 		}, }
 	},
+	{	/* NULL (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+			}, },
+		}, },
+	},
+	{	/* NULL (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, },
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 06/12] crypto/dpaax_sec: set the authdata in non-auth case
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (4 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 05/12] crypto/dpaa_sec: " Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 07/12] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

The descriptors refer to the auth data as well, so initialize it
properly for the non-auth cases.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 16 ++++++++++++----
 drivers/crypto/dpaa_sec/dpaa_sec.c          |  5 +++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index c2b836d716..0a0b7f15af 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3538,12 +3538,20 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 				      session->auth_alg);
 			goto out;
 		}
-
 		p_authdata = &authdata;
-	} else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
-		DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
-		goto out;
+	} else {
+		if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+			DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
+			goto out;
+		}
+		session->auth_key.data = NULL;
+		session->auth_key.length = 0;
+		session->auth_alg = 0;
 	}
+	authdata.key = (size_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
 
 	if (pdcp_xform->sdap_enabled) {
 		int nb_keys_to_inline =
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 7d47c32693..39babd76f8 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -3188,6 +3188,11 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
 		       auth_xform->key.length);
 		session->auth_alg = auth_xform->algo;
 	} else {
+		if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+			DPAA_SEC_ERR("Crypto: Integrity must for c-plane");
+			ret = -EINVAL;
+			goto out;
+		}
 		session->auth_key.data = NULL;
 		session->auth_key.length = 0;
 		session->auth_alg = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 07/12] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (5 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 06/12] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 08/12] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

The macro DPAA2_SET_FLE_OFFSET(fle, offset) only masks the offset
to 12 bits. When the offset needs more than 12 bits, this macro may
overwrite the FMT/SL/F bits, which sit above the offset bits.
Instead, set FLE_ADDR to FLE_ADDR + OFFSET and leave FLE_OFFSET
at 0.
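
In other words, the two-step address/offset programming is replaced
by a single pre-offset address; a before/after sketch drawn from the
diff below:

	/* Before: the 12-bit offset field can spill into FMT/SL/F
	 * for large data_off values.
	 */
	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);

	/* After: fold the offset into the address (rte_pktmbuf_iova()
	 * is buf_iova + data_off) and leave the offset field at 0.
	 */
	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));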

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 87 +++++++--------------
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 47 +++--------
 drivers/net/dpaa2/dpaa2_rxtx.c              |  3 +-
 3 files changed, 38 insertions(+), 99 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0a0b7f15af..36f08afccc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -138,16 +138,14 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	/* o/p segs */
 	while (mbuf->next) {
 		sge->length = mbuf->data_len;
 		out_len += sge->length;
 		sge++;
 		mbuf = mbuf->next;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	}
 	/* using buf_len for last buf - so that extra data can be added */
 	sge->length = mbuf->buf_len - mbuf->data_off;
@@ -165,8 +163,7 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_FIN(ip_fle);
 
 	/* Configure input SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	sge->length = mbuf->data_len;
 	in_len += sge->length;
 
@@ -174,8 +171,7 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		in_len += sge->length;
 		mbuf = mbuf->next;
@@ -247,13 +243,11 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
 
 	/* Configure Output FLE with dst mbuf data  */
-	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_MBUF_VADDR_TO_IOVA(dst_mbuf));
-	DPAA2_SET_FLE_OFFSET(op_fle, dst_mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(op_fle, rte_pktmbuf_iova(dst_mbuf));
 	DPAA2_SET_FLE_LEN(op_fle, dst_mbuf->buf_len);
 
 	/* Configure Input FLE with src mbuf data */
-	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_MBUF_VADDR_TO_IOVA(src_mbuf));
-	DPAA2_SET_FLE_OFFSET(ip_fle, src_mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(ip_fle, rte_pktmbuf_iova(src_mbuf));
 	DPAA2_SET_FLE_LEN(ip_fle, src_mbuf->pkt_len);
 
 	DPAA2_SET_FD_LEN(fd, ip_fle->length);
@@ -373,16 +367,14 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
 			sym_op->aead.data.length;
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->aead.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->aead.data.offset);
 	sge->length = mbuf->data_len - sym_op->aead.data.offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -420,17 +412,14 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->aead.data.offset +
-				mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->aead.data.offset);
 	sge->length = mbuf->data_len - sym_op->aead.data.offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -535,8 +524,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_SG_EXT(fle);
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(sge, dst->data_off + sym_op->aead.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(dst) + sym_op->aead.data.offset);
 	sge->length = sym_op->aead.data.length;
 
 	if (sess->dir == DIR_ENC) {
@@ -571,9 +559,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->aead.data.offset +
-				sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + sym_op->aead.data.offset);
 	sge->length = sym_op->aead.data.length;
 	if (sess->dir == DIR_DEC) {
 		sge++;
@@ -666,16 +652,14 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
 			sym_op->cipher.data.length;
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->auth.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->auth.data.offset);
 	sge->length = mbuf->data_len - sym_op->auth.data.offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -706,17 +690,14 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
-				mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->auth.data.offset);
 	sge->length = mbuf->data_len - sym_op->auth.data.offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -830,9 +811,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_SG_EXT(fle);
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
-				dst->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(dst) + sym_op->cipher.data.offset);
 	sge->length = sym_op->cipher.data.length;
 
 	if (sess->dir == DIR_ENC) {
@@ -862,9 +841,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
 	sge->length = sess->iv.length;
 	sge++;
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
-				sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + sym_op->auth.data.offset);
 	sge->length = sym_op->auth.data.length;
 	if (sess->dir == DIR_DEC) {
 		sge++;
@@ -965,8 +942,7 @@ static inline int build_auth_sg_fd(
 		sge++;
 	}
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 
 	if (data_len <= (mbuf->data_len - data_offset)) {
 		sge->length = data_len;
@@ -978,8 +954,7 @@ static inline int build_auth_sg_fd(
 		while ((data_len = data_len - sge->length) &&
 		       (mbuf = mbuf->next)) {
 			sge++;
-			DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-			DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+			DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 			if (data_len > mbuf->data_len)
 				sge->length = mbuf->data_len;
 			else
@@ -1097,8 +1072,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	}
 
 	/* Setting data to authenticate */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + data_offset);
 	sge->length = data_len;
 
 	if (sess->dir == DIR_DEC) {
@@ -1183,16 +1157,14 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	DPAA2_SET_FLE_SG_EXT(op_fle);
 
 	/* o/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 	sge->length = mbuf->data_len - data_offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -1212,22 +1184,19 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 
 	/* i/p IV */
 	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sess->iv.length;
 
 	sge++;
 
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 	sge->length = mbuf->data_len - data_offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -1328,8 +1297,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 		sess->iv.length,
 		sym_op->m_src->data_off);
 
-	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(fle, data_offset + dst->data_off);
+	DPAA2_SET_FLE_ADDR(fle, rte_pktmbuf_iova(dst) + data_offset);
 
 	fle->length = data_len + sess->iv.length;
 
@@ -1349,8 +1317,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + data_offset);
 
 	sge->length = data_len;
 	DPAA2_SET_FLE_FIN(sge);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 36c79e450a..4754b9d6f8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -95,29 +95,25 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 		sge->length -= ofs.ofs.cipher.tail;
 	} else {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 		sge->length -= ofs.ofs.cipher.tail;
@@ -148,14 +144,12 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.auth.head);
 	sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
 
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 
@@ -244,28 +238,24 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova +  ofs.ofs.cipher.head);
 		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 	} else {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -299,15 +289,13 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 	sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 
@@ -412,8 +400,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		sge++;
 	}
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 
 	if (data_len <= (int)(sgl->vec[0].len - data_offset)) {
 		sge->length = data_len;
@@ -423,7 +410,6 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -502,14 +488,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
 		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = dest_sgl->vec[0].len;
 		out_len += sge->length;
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 			out_len += sge->length;
 		}
@@ -518,14 +502,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 	} else {
 		/* Configure Output SGE for Encap/Decap */
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[0].len;
 		out_len += sge->length;
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 			out_len += sge->length;
 		}
@@ -545,14 +527,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 
 	/* Configure input SGE for Encap/Decap */
 	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sgl->vec[0].len;
 	in_len += sge->length;
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 		in_len += sge->length;
 	}
@@ -638,28 +618,24 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* o/p 1st seg */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova + data_offset);
 		sge->length = dest_sgl->vec[0].len - data_offset;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 	} else {
 		/* o/p 1st seg */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 		sge->length = sgl->vec[0].len - data_offset;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -678,21 +654,18 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 
 	/* i/p IV */
 	DPAA2_SET_FLE_ADDR(sge, iv->iova);
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sess->iv.length;
 
 	sge++;
 
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 	sge->length = sgl->vec[0].len - data_offset;
 
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 	DPAA2_SET_FLE_FIN(sge);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 85910bbd8f..23f7c4132d 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -471,8 +471,7 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		sge = &sgt[i];
 		/*Resetting the buffer pool id and offset field*/
 		sge->fin_bpid_offset = 0;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(cur_seg));
-		DPAA2_SET_FLE_OFFSET(sge, cur_seg->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(cur_seg));
 		sge->length = cur_seg->data_len;
 		if (RTE_MBUF_DIRECT(cur_seg)) {
 			/* if we are using inline SGT in same buffers
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 08/12] crypto/dpaa2_sec: support copy df and dscp in proto offload
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (6 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 07/12] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 09/12] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

This patch adds the capability to copy the DSCP and DF bits from
the inner to the outer header and vice versa.
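
A hypothetical application-side use of the new capability bits
(field names are from the rte_security IPsec SA options):

	struct rte_security_ipsec_xform ipsec_xform = { 0 };

	/* Propagate DSCP between inner and outer IP headers. */
	ipsec_xform.options.copy_dscp = 1;
	/* Propagate the DF bit (IPv4 tunnel mode only). */
	ipsec_xform.options.copy_df = 1;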

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++++++++++----
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   | 10 ++++++++--
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 36f08afccc..16e7facdb4 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3193,10 +3193,14 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
 		if (ipsec_xform->options.esn)
 			encap_pdb.options |= PDBOPTS_ESP_ESN;
+		if (ipsec_xform->options.copy_dscp)
+			encap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 		encap_pdb.spi = ipsec_xform->spi;
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			if (ipsec_xform->options.copy_df)
+				encap_pdb.options |= PDBHMO_ESP_DFBIT;
 			encap_pdb.ip_hdr_len = sizeof(struct ip);
 			ip4_hdr.ip_v = IPVERSION;
 			ip4_hdr.ip_hl = 5;
@@ -3261,12 +3265,18 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			break;
 		}
 
-		decap_pdb.options = (ipsec_xform->tunnel.type ==
-				RTE_SECURITY_IPSEC_TUNNEL_IPV4) ?
-				sizeof(struct ip) << 16 :
-				sizeof(struct rte_ipv6_hdr) << 16;
+		if (ipsec_xform->tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			decap_pdb.options = sizeof(struct ip) << 16;
+			if (ipsec_xform->options.copy_df)
+				decap_pdb.options |= PDBHMO_ESP_DFV;
+		} else {
+			decap_pdb.options = sizeof(struct rte_ipv6_hdr) << 16;
+		}
 		if (ipsec_xform->options.esn)
 			decap_pdb.options |= PDBOPTS_ESP_ESN;
+		if (ipsec_xform->options.copy_dscp)
+			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 5a4eb8e2ed..0f29e6299f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -929,7 +929,10 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
-			.options = { 0 },
+			.options = {
+				.copy_df = 1,
+				.copy_dscp = 1,
+			},
 			.replay_win_sz_max = 128
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
@@ -941,7 +944,10 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
-			.options = { 0 },
+			.options = {
+				.copy_df = 1,
+				.copy_dscp = 1,
+			},
 			.replay_win_sz_max = 128
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 09/12] crypto/dpaa2_sec: increase the anti replay window size
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (7 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 08/12] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 10/12] crypto/dpaa2_sec: enable esn support Hemant Agrawal
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

LX216x can support an anti-replay window size of up to 1024.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 0f29e6299f..ee904829ed 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -933,7 +933,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.copy_df = 1,
 				.copy_dscp = 1,
 			},
-			.replay_win_sz_max = 128
+			.replay_win_sz_max = 1024
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
 	},
@@ -948,7 +948,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.copy_df = 1,
 				.copy_dscp = 1,
 			},
-			.replay_win_sz_max = 128
+			.replay_win_sz_max = 1024
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
 	},
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 10/12] crypto/dpaa2_sec: enable esn support
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (8 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 09/12] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 11/12] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

LX216x supports ESN.
Also print the SEC era correctly.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 16e7facdb4..7fd15de1a5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -4386,7 +4386,7 @@ cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv __rte_unused,
 	else
 		rta_set_sec_era(RTA_SEC_ERA_8);
 
-	DPAA2_SEC_INFO("2-SEC ERA is %d", rta_get_sec_era());
+	DPAA2_SEC_INFO("2-SEC ERA is %d", USER_SEC_ERA(rta_get_sec_era()));
 
 	/* Invoke PMD device initialization function */
 	retval = dpaa2_sec_dev_init(cryptodev);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index ee904829ed..d3e2df72b0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -932,6 +932,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.options = {
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
 		},
@@ -947,6 +948,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.options = {
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
 		},
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 11/12] crypto/dpaa2_sec: add NAT-T support in IPsec offload
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (9 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 10/12] crypto/dpaa2_sec: enable esn support Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-08-23  7:08 ` [PATCH 12/12] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

This patch adds support for UDP encapsulation (NAT-T) in the IPsec
security protocol offload case.
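
A hypothetical application-side configuration requesting NAT-T (the
port values are placeholders; as the diff below shows, the driver
falls back to 4500 when none is given):

	ipsec_xform.options.udp_encap = 1;	/* ESP-in-UDP */
	ipsec_xform.udp.sport = 4500;
	ipsec_xform.udp.dport = 4500;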

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 101 ++++++++++++++------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   3 +
 2 files changed, 75 insertions(+), 29 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 7fd15de1a5..675ee49489 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -10,6 +10,7 @@
 #include <unistd.h>
 
 #include <rte_ip.h>
+#include <rte_udp.h>
 #include <rte_mbuf.h>
 #include <rte_cryptodev.h>
 #include <rte_malloc.h>
@@ -3162,9 +3163,9 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 
 	session->ctxt_type = DPAA2_SEC_IPSEC;
 	if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
-		uint8_t *hdr = NULL;
-		struct ip ip4_hdr;
-		struct rte_ipv6_hdr ip6_hdr;
+		uint8_t hdr[48] = {};
+		struct rte_ipv4_hdr *ip4_hdr;
+		struct rte_ipv6_hdr *ip6_hdr;
 		struct ipsec_encap_pdb encap_pdb;
 
 		flc->dhr = SEC_FLC_DHR_OUTBOUND;
@@ -3187,38 +3188,77 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 
 		encap_pdb.options = (IPVERSION << PDBNH_ESP_ENCAP_SHIFT) |
 			PDBOPTS_ESP_OIHI_PDB_INL |
-			PDBOPTS_ESP_IVSRC |
 			PDBHMO_ESP_SNR;
-		if (ipsec_xform->options.dec_ttl)
-			encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
+
+		if (ipsec_xform->options.iv_gen_disable == 0)
+			encap_pdb.options |= PDBOPTS_ESP_IVSRC;
 		if (ipsec_xform->options.esn)
 			encap_pdb.options |= PDBOPTS_ESP_ESN;
 		if (ipsec_xform->options.copy_dscp)
 			encap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
+		if (ipsec_xform->options.ecn)
+			encap_pdb.options |= PDBOPTS_ESP_TECN;
 		encap_pdb.spi = ipsec_xform->spi;
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
 			if (ipsec_xform->options.copy_df)
 				encap_pdb.options |= PDBHMO_ESP_DFBIT;
-			encap_pdb.ip_hdr_len = sizeof(struct ip);
-			ip4_hdr.ip_v = IPVERSION;
-			ip4_hdr.ip_hl = 5;
-			ip4_hdr.ip_len = rte_cpu_to_be_16(sizeof(ip4_hdr));
-			ip4_hdr.ip_tos = ipsec_xform->tunnel.ipv4.dscp;
-			ip4_hdr.ip_id = 0;
-			ip4_hdr.ip_off = 0;
-			ip4_hdr.ip_ttl = ipsec_xform->tunnel.ipv4.ttl;
-			ip4_hdr.ip_p = IPPROTO_ESP;
-			ip4_hdr.ip_sum = 0;
-			ip4_hdr.ip_src = ipsec_xform->tunnel.ipv4.src_ip;
-			ip4_hdr.ip_dst = ipsec_xform->tunnel.ipv4.dst_ip;
-			ip4_hdr.ip_sum = calc_chksum((uint16_t *)(void *)
-					&ip4_hdr, sizeof(struct ip));
-			hdr = (uint8_t *)&ip4_hdr;
+			ip4_hdr = (struct rte_ipv4_hdr *)&hdr;
+
+			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv4_hdr);
+			ip4_hdr->version_ihl = RTE_IPV4_VHL_DEF;
+			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl;
+			ip4_hdr->type_of_service =
+				ipsec_xform->tunnel.ipv4.dscp;
+			ip4_hdr->hdr_checksum = 0;
+			ip4_hdr->packet_id = 0;
+			ip4_hdr->fragment_offset = 0;
+			memcpy(&ip4_hdr->src_addr,
+				&ipsec_xform->tunnel.ipv4.src_ip,
+				sizeof(struct in_addr));
+			memcpy(&ip4_hdr->dst_addr,
+				&ipsec_xform->tunnel.ipv4.dst_ip,
+				sizeof(struct in_addr));
+			if (ipsec_xform->options.udp_encap) {
+				uint16_t sport, dport;
+				struct rte_udp_hdr *uh =
+					(struct rte_udp_hdr *) (ip4_hdr +
+						sizeof(struct rte_ipv4_hdr));
+
+				sport = ipsec_xform->udp.sport ?
+					ipsec_xform->udp.sport : 4500;
+				dport = ipsec_xform->udp.dport ?
+					ipsec_xform->udp.dport : 4500;
+				uh->src_port = rte_cpu_to_be_16(sport);
+				uh->dst_port = rte_cpu_to_be_16(dport);
+				uh->dgram_len = 0;
+				uh->dgram_cksum = 0;
+
+				ip4_hdr->next_proto_id = IPPROTO_UDP;
+				ip4_hdr->total_length =
+					rte_cpu_to_be_16(
+						sizeof(struct rte_ipv4_hdr) +
+						sizeof(struct rte_udp_hdr));
+				encap_pdb.ip_hdr_len +=
+					sizeof(struct rte_udp_hdr);
+				encap_pdb.options |=
+					PDBOPTS_ESP_NAT | PDBOPTS_ESP_NUC;
+			} else {
+				ip4_hdr->total_length =
+					rte_cpu_to_be_16(
+						sizeof(struct rte_ipv4_hdr));
+				ip4_hdr->next_proto_id = IPPROTO_ESP;
+			}
+
+			ip4_hdr->hdr_checksum = calc_chksum((uint16_t *)
+				(void *)ip4_hdr, sizeof(struct rte_ipv4_hdr));
+
 		} else if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-			ip6_hdr.vtc_flow = rte_cpu_to_be_32(
+			ip6_hdr = (struct rte_ipv6_hdr *)&hdr;
+
+			ip6_hdr->vtc_flow = rte_cpu_to_be_32(
 				DPAA2_IPv6_DEFAULT_VTC_FLOW |
 				((ipsec_xform->tunnel.ipv6.dscp <<
 					RTE_IPV6_HDR_TC_SHIFT) &
@@ -3227,18 +3267,17 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 					RTE_IPV6_HDR_FL_SHIFT) &
 					RTE_IPV6_HDR_FL_MASK));
 			/* Payload length will be updated by HW */
-			ip6_hdr.payload_len = 0;
-			ip6_hdr.hop_limits =
-					ipsec_xform->tunnel.ipv6.hlimit;
-			ip6_hdr.proto = (ipsec_xform->proto ==
+			ip6_hdr->payload_len = 0;
+			ip6_hdr->hop_limits = ipsec_xform->tunnel.ipv6.hlimit ?
+					ipsec_xform->tunnel.ipv6.hlimit : 0x40;
+			ip6_hdr->proto = (ipsec_xform->proto ==
 					RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
 					IPPROTO_ESP : IPPROTO_AH;
-			memcpy(&ip6_hdr.src_addr,
+			memcpy(&ip6_hdr->src_addr,
 				&ipsec_xform->tunnel.ipv6.src_addr, 16);
-			memcpy(&ip6_hdr.dst_addr,
+			memcpy(&ip6_hdr->dst_addr,
 				&ipsec_xform->tunnel.ipv6.dst_addr, 16);
 			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv6_hdr);
-			hdr = (uint8_t *)&ip6_hdr;
 		}
 
 		bufsize = cnstr_shdsc_ipsec_new_encap(priv->flc_desc[0].desc,
@@ -3277,6 +3316,10 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options |= PDBOPTS_ESP_ESN;
 		if (ipsec_xform->options.copy_dscp)
 			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
+		if (ipsec_xform->options.ecn)
+			decap_pdb.options |= PDBOPTS_ESP_TECN;
+		if (ipsec_xform->options.dec_ttl)
+			decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index d3e2df72b0..cf6542a222 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -930,6 +930,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
 			.options = {
+				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
 				.esn = 1,
@@ -946,6 +947,8 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
 			.options = {
+				.iv_gen_disable = 1,
+				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
 				.esn = 1,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 12/12] crypto/dpaa2_sec: add support to set df and diffserv
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (10 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 11/12] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
@ 2023-08-23  7:08 ` Hemant Agrawal
  2023-09-18 10:31 ` [EXT] [PATCH 00/12] crypto/dpaax_sec: misc enhancements Akhil Goyal
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-08-23  7:08 UTC (permalink / raw)
  To: dev; +Cc: gakhil

This patch enables the IPsec protocol offload to set the DF bit and
the DiffServ field in the outer header.
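
On the application side these come from the IPsec tunnel
parameters; roughly (the values are placeholders):

	ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
	ipsec_xform.tunnel.ipv4.df = 1;		/* set DF in outer header */
	ipsec_xform.tunnel.ipv4.dscp = 0x2e;	/* outer DSCP (placeholder) */
	ipsec_xform.tunnel.ipv4.ttl = 64;	/* 0 falls back to 0x40 per the diff */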

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 31 +++++++++++++--------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  2 ++
 2 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 675ee49489..5370216cfa 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3202,24 +3202,31 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			if (ipsec_xform->options.dec_ttl)
+				encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
 			if (ipsec_xform->options.copy_df)
 				encap_pdb.options |= PDBHMO_ESP_DFBIT;
 			ip4_hdr = (struct rte_ipv4_hdr *)&hdr;
 
 			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv4_hdr);
 			ip4_hdr->version_ihl = RTE_IPV4_VHL_DEF;
-			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl;
-			ip4_hdr->type_of_service =
-				ipsec_xform->tunnel.ipv4.dscp;
+			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl ?
+						ipsec_xform->tunnel.ipv4.ttl :  0x40;
+			ip4_hdr->type_of_service = (ipsec_xform->tunnel.ipv4.dscp<<2);
+
 			ip4_hdr->hdr_checksum = 0;
 			ip4_hdr->packet_id = 0;
-			ip4_hdr->fragment_offset = 0;
-			memcpy(&ip4_hdr->src_addr,
-				&ipsec_xform->tunnel.ipv4.src_ip,
-				sizeof(struct in_addr));
-			memcpy(&ip4_hdr->dst_addr,
-				&ipsec_xform->tunnel.ipv4.dst_ip,
-				sizeof(struct in_addr));
+			if (ipsec_xform->tunnel.ipv4.df) {
+				uint16_t frag_off = 0;
+				frag_off |= RTE_IPV4_HDR_DF_FLAG;
+				ip4_hdr->fragment_offset = rte_cpu_to_be_16(frag_off);
+			} else
+				ip4_hdr->fragment_offset = 0;
+
+			memcpy(&ip4_hdr->src_addr, &ipsec_xform->tunnel.ipv4.src_ip,
+			       sizeof(struct in_addr));
+			memcpy(&ip4_hdr->dst_addr, &ipsec_xform->tunnel.ipv4.dst_ip,
+			       sizeof(struct in_addr));
 			if (ipsec_xform->options.udp_encap) {
 				uint16_t sport, dport;
 				struct rte_udp_hdr *uh =
@@ -3309,6 +3316,8 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options = sizeof(struct ip) << 16;
 			if (ipsec_xform->options.copy_df)
 				decap_pdb.options |= PDBHMO_ESP_DFV;
+			if (ipsec_xform->options.dec_ttl)
+				decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 		} else {
 			decap_pdb.options = sizeof(struct rte_ipv6_hdr) << 16;
 		}
@@ -3318,8 +3327,6 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 		if (ipsec_xform->options.ecn)
 			decap_pdb.options |= PDBOPTS_ESP_TECN;
-		if (ipsec_xform->options.dec_ttl)
-			decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index cf6542a222..1c0bc3d6de 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -933,6 +933,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.dec_ttl = 1,
 				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
@@ -951,6 +952,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.dec_ttl = 1,
 				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [EXT] [PATCH 00/12] crypto/dpaax_sec: misc enhancements
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (11 preceding siblings ...)
  2023-08-23  7:08 ` [PATCH 12/12] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
@ 2023-09-18 10:31 ` Akhil Goyal
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
  13 siblings, 0 replies; 30+ messages in thread
From: Akhil Goyal @ 2023-09-18 10:31 UTC (permalink / raw)
  To: Hemant Agrawal, dev

> ----------------------------------------------------------------------
> This series include misc enhancements in dpaax_sec drivers.
> 
> - improving the IPsec protocol offload features
> - enhancing PDCP protocol processing
> - code optimization and cleanup
> 
> Apeksha Gupta (1):
>   crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set
> 
> Gagandeep Singh (3):
>   common/dpaax: update IPsec base descriptor length
>   common/dpaax: change mode to wait in shared desc
>   crypto/dpaax_sec: set the authdata in non-auth case
> 
> Hemant Agrawal (7):
>   crypto/dpaa2_sec: supporting null cipher and auth
>   crypto/dpaa_sec: supporting null cipher and auth
>   crypto/dpaa2_sec: support copy df and dscp in proto offload
>   crypto/dpaa2_sec: increase the anti replay window size
>   crypto/dpaa2_sec: enable esn support
>   crypto/dpaa2_sec: add NAT-T support in IPsec offload
>   crypto/dpaa2_sec: add support to set df and diffserv
> 
> Vanshika Shukla (1):
>   crypto/dpaa2_sec: initialize the pdcp alg to null
> 
>  drivers/common/dpaax/caamflib/desc/ipsec.h    |   4 +-
>  drivers/common/dpaax/caamflib/desc/pdcp.h     |  82 +++---
>  .../dpaax/caamflib/rta/sec_run_time_asm.h     |   2 +-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   | 234 ++++++++++--------
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h     |  64 ++++-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  47 +---
>  drivers/crypto/dpaa_sec/dpaa_sec.c            |   5 +
>  drivers/crypto/dpaa_sec/dpaa_sec.h            |  42 +++-
>  drivers/net/dpaa2/dpaa2_rxtx.c                |   3 +-
>  9 files changed, 294 insertions(+), 189 deletions(-)
> 
Please fix compilation issues.
http://mails.dpdk.org/archives/test-report/2023-August/451079.html

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements
  2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
                   ` (12 preceding siblings ...)
  2023-09-18 10:31 ` [EXT] [PATCH 00/12] crypto/dpaax_sec: misc enhancements Akhil Goyal
@ 2023-09-20 13:33 ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 01/13] common/dpaax: update IPsec base descriptor length Hemant Agrawal
                     ` (13 more replies)
  13 siblings, 14 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev

v2: compilation fixes

This series includes misc enhancements in the dpaax_sec drivers.

- improving the IPsec protocol offload features
- enhancing PDCP protocol processing
- code optimization and cleanup

Apeksha Gupta (1):
  crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set

Gagandeep Singh (3):
  common/dpaax: update IPsec base descriptor length
  common/dpaax: change mode to wait in shared desc
  crypto/dpaax_sec: set the authdata in non-auth case

Hemant Agrawal (8):
  crypto/dpaa2_sec: supporting null cipher and auth
  crypto/dpaa_sec: supporting null cipher and auth
  crypto/dpaa2_sec: support copy df and dscp in proto offload
  crypto/dpaa2_sec: increase the anti replay window size
  crypto/dpaa2_sec: enable esn support
  crypto/dpaa2_sec: add NAT-T support in IPsec offload
  crypto/dpaa2_sec: add support to set df and diffserv
  crypto/dpaax_sec: enable sha224-hmac support for IPsec

Vanshika Shukla (1):
  crypto/dpaa2_sec: initialize the pdcp alg to null

 drivers/common/dpaax/caamflib/desc.h          |   5 +-
 drivers/common/dpaax/caamflib/desc/ipsec.h    |   9 +-
 drivers/common/dpaax/caamflib/desc/pdcp.h     |  82 +++---
 .../common/dpaax/caamflib/rta/protocol_cmd.h  |   5 +-
 .../dpaax/caamflib/rta/sec_run_time_asm.h     |   2 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   | 245 +++++++++++-------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h     |  64 ++++-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  47 +---
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  15 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h            |  42 ++-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   3 +-
 11 files changed, 326 insertions(+), 193 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 01/13] common/dpaax: update IPsec base descriptor length
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 02/13] common/dpaax: change mode to wait in shared desc Hemant Agrawal
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev, Gagandeep Singh, Franck LENORMAND

From: Gagandeep Singh <g.singh@nxp.com>

If all the keys are inlined, the descriptor would
be 32 + 20 = 52 words, which is the size of the shared
descriptor currently created.

So 32 * CAAM_CMD_SZ is the value that must be passed to
rta_inline_query() for its "sd_base_len" parameter, and the
drivers pass the IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN value
as that first argument to rta_inline_query().

So the value of IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN must be
updated to 32 * CAAM_CMD_SZ.
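
A rough, standalone sketch of the budgeting this fixes (not the
caamflib rta_inline_query() code; the 64-word limit and the 32-word
base come from the text above, the 36-word key size is only a made-up
example):

#include <stdio.h>

#define MAX_DESC_WORDS 64	/* CAAM shared descriptor hard limit */

int main(void)
{
	unsigned int key_words = 36;	/* hypothetical cipher + split auth keys */

	/* Old, under-counted base of 27 words claims the keys still fit... */
	printf("27 + %u = %u (<= 64? %d)\n", key_words, 27 + key_words,
	       27 + key_words <= MAX_DESC_WORDS);
	/* ...but the descriptor actually emitted starts from 32 words. */
	printf("32 + %u = %u (<= 64? %d)\n", key_words, 32 + key_words,
	       32 + key_words <= MAX_DESC_WORDS);
	return 0;
}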

Signed-off-by: Franck LENORMAND <franck.lenormand@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/common/dpaax/caamflib/desc/ipsec.h           | 4 ++--
 drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/ipsec.h b/drivers/common/dpaax/caamflib/desc/ipsec.h
index 8ec6aac915..14e80baf77 100644
--- a/drivers/common/dpaax/caamflib/desc/ipsec.h
+++ b/drivers/common/dpaax/caamflib/desc/ipsec.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016,2019-2020 NXP
+ * Copyright 2016,2019-2022 NXP
  *
  */
 
@@ -1380,7 +1380,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
  * layers to determine whether keys can be inlined or not. To be used as first
  * parameter of rta_inline_query().
  */
-#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(27 * CAAM_CMD_SZ)
+#define IPSEC_AUTH_VAR_BASE_DESC_LEN	(31 * CAAM_CMD_SZ)
 
 /**
  * IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN - IPsec AES decap shared descriptor
diff --git a/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h b/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
index f40eaadea3..5c2efeb2c5 100644
--- a/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
+++ b/drivers/common/dpaax/caamflib/rta/sec_run_time_asm.h
@@ -413,7 +413,7 @@ rta_program_finalize(struct program *program)
 {
 	/* Descriptor is usually not allowed to go beyond 64 words size */
 	if (program->current_pc > MAX_CAAM_DESCSIZE)
-		pr_warn("Descriptor Size exceeded max limit of 64 words\n");
+		pr_debug("Descriptor Size exceeded max limit of 64 words");
 
 	/* Descriptor is erroneous */
 	if (program->first_error_pc) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 02/13] common/dpaax: change mode to wait in shared desc
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 01/13] common/dpaax: update IPsec base descriptor length Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 03/13] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
                     ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

In case of protocol based offload, it is better to wait until the
shared descriptor completes execution. Simultaneous sharing may
cause issues.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/common/dpaax/caamflib/desc/pdcp.h | 82 +++++++++++------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 289ee2a7d5..7d16c66d79 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
  * Copyright 2008-2013 Freescale Semiconductor, Inc.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
  */
 
 #ifndef __DESC_PDCP_H__
@@ -2338,27 +2338,27 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT	/* ZUC-I */
 		},
 	};
 	enum pdb_type_e pdb_type;
@@ -2478,27 +2478,27 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT	/* ZUC-I */
 		},
 	};
 	enum pdb_type_e pdb_type;
@@ -2643,24 +2643,24 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
@@ -2677,7 +2677,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
 	if (authdata)
 		SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
 	else
-		SHR_HDR(p, SHR_ALWAYS, 0, 0);
+		SHR_HDR(p, SHR_WAIT, 0, 0);
 	pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn,
 					  bearer, direction, hfn_threshold,
 					  cipherdata, authdata);
@@ -2828,24 +2828,24 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
 		desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
 		{	/* NULL */
 			SHR_WAIT,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
-			SHR_ALWAYS	/* ZUC-I */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
+			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* SNOW f8 */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* AES CTR */
-			SHR_ALWAYS,	/* NULL */
-			SHR_ALWAYS,	/* SNOW f9 */
-			SHR_ALWAYS,	/* AES CMAC */
+			SHR_WAIT,	/* NULL */
+			SHR_WAIT,	/* SNOW f9 */
+			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
 		},
 		{	/* ZUC-E */
-			SHR_ALWAYS,	/* NULL */
+			SHR_WAIT,	/* NULL */
 			SHR_WAIT,	/* SNOW f9 */
 			SHR_WAIT,	/* AES CMAC */
 			SHR_WAIT	/* ZUC-I */
@@ -2862,7 +2862,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
 	if (authdata)
 		SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
 	else
-		SHR_HDR(p, SHR_ALWAYS, 0, 0);
+		SHR_HDR(p, SHR_WAIT, 0, 0);
 
 	pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer,
 					  direction, hfn_threshold,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 03/13] crypto/dpaa2_sec: initialize the pdcp alg to null
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 01/13] common/dpaax: update IPsec base descriptor length Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 02/13] common/dpaax: change mode to wait in shared desc Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 04/13] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
                     ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev, Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch initializes the PDCP auth algorithm type to NULL in the
non-auth case.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index f9eba4a7bd..3ceb886ddb 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2022 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -3512,6 +3512,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 		session->auth_key.data = NULL;
 		session->auth_key.length = 0;
 		session->auth_alg = 0;
+		authdata.algtype = PDCP_AUTH_TYPE_NULL;
 	}
 	authdata.key = (size_t)session->auth_key.data;
 	authdata.keylen = session->auth_key.length;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 04/13] crypto/dpaa2_sec: supporting null cipher and auth
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (2 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 03/13] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 05/13] crypto/dpaa_sec: " Hemant Agrawal
                     ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev

IPsec proto offload supports NULL cipher and auth in combo cases,
so add NULL cipher and auth to the security capabilities.
Unsupported cases are already guarded in the code.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 43 +++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f84d2caf43..5a4eb8e2ed 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020-2022 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -878,7 +878,46 @@ static const struct rte_cryptodev_capabilities dpaa2_pdcp_capabilities[] = {
 			}, }
 		}, }
 	},
-
+	{	/* NULL (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+			}, },
+		}, },
+	},
+	{	/* NULL (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, },
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 05/13] crypto/dpaa_sec: supporting null cipher and auth
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (3 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 04/13] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 06/13] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
                     ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev

Add NULL cipher and auth to the capabilities.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa_sec/dpaa_sec.h | 42 +++++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 412a9da942..eff6dcf311 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016-2022 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -782,6 +782,46 @@ static const struct rte_cryptodev_capabilities dpaa_sec_capabilities[] = {
 			}, }
 		}, }
 	},
+	{	/* NULL (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+			}, },
+		}, },
+	},
+	{	/* NULL (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_NULL,
+				.block_size = 1,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, },
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 06/13] crypto/dpaax_sec: set the authdata in non-auth case
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (4 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 05/13] crypto/dpaa_sec: " Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 07/13] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

The descriptors refer to the auth data as well, so initialize it
properly for the non-auth cases.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 16 ++++++++++++----
 drivers/crypto/dpaa_sec/dpaa_sec.c          |  5 +++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3ceb886ddb..1fc0d2e7cc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3538,12 +3538,20 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 				      session->auth_alg);
 			goto out;
 		}
-
 		p_authdata = &authdata;
-	} else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
-		DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
-		goto out;
+	} else {
+		if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+			DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
+			goto out;
+		}
+		session->auth_key.data = NULL;
+		session->auth_key.length = 0;
+		session->auth_alg = 0;
 	}
+	authdata.key = (size_t)session->auth_key.data;
+	authdata.keylen = session->auth_key.length;
+	authdata.key_enc_flags = 0;
+	authdata.key_type = RTA_DATA_IMM;
 
 	if (pdcp_xform->sdap_enabled) {
 		int nb_keys_to_inline =
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index f3f565826f..0fcba95916 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -3188,6 +3188,11 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
 		       auth_xform->key.length);
 		session->auth_alg = auth_xform->algo;
 	} else {
+		if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+			DPAA_SEC_ERR("Crypto: Integrity must for c-plane");
+			ret = -EINVAL;
+			goto out;
+		}
 		session->auth_key.data = NULL;
 		session->auth_key.length = 0;
 		session->auth_alg = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 07/13] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (5 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 06/13] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 08/13] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

The macro DPAA2_SET_FLE_OFFSET(fle, offset) only works for offsets
that fit within 12 bits. When the offset value needs more than 12
bits, this macro may overwrite the FMT/SL/F bits, which sit beyond
the offset bits.
The FLE_ADDR is therefore set to FLE_ADDR + OFFSET, and the
FLE_OFFSET is set to 0.
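
A minimal standalone sketch of the overflow (the bit positions below
are assumptions for illustration only; the real FLE layout lives in
the dpaax headers):

#include <stdint.h>
#include <stdio.h>

#define FLE_OFFSET_SHIFT	16		/* assumed: offset field at bits 16..27 */
#define FLE_FMT_SL_F_MASK	0xF0000000u	/* assumed: control bits at 28..31 */

int main(void)
{
	uint32_t fin_bpid_offset = 0x10000000u;	/* FMT/SL/F already programmed */
	uint32_t data_off = 8192;		/* needs more than the 12-bit field */

	/* Old style: the unmasked offset spills into the FMT/SL/F bits */
	uint32_t spilled = fin_bpid_offset | (data_off << FLE_OFFSET_SHIFT);
	printf("control bits: 0x%x -> 0x%x\n",
	       fin_bpid_offset & FLE_FMT_SL_F_MASK,
	       spilled & FLE_FMT_SL_F_MASK);	/* 0x10000000 -> 0x30000000 */

	/* New style: fold the offset into the address, leave the field at 0 */
	uint64_t iova = 0x80000000ull;		/* e.g. rte_pktmbuf_iova(mbuf) */
	uint64_t fle_addr = iova + data_off;	/* what DPAA2_SET_FLE_ADDR now gets */
	printf("fle_addr = 0x%lx, offset field = 0\n", (unsigned long)fle_addr);
	return 0;
}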

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 87 +++++++--------------
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 47 +++--------
 drivers/net/dpaa2/dpaa2_rxtx.c              |  3 +-
 3 files changed, 38 insertions(+), 99 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1fc0d2e7cc..daa6a71360 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -138,16 +138,14 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	/* o/p segs */
 	while (mbuf->next) {
 		sge->length = mbuf->data_len;
 		out_len += sge->length;
 		sge++;
 		mbuf = mbuf->next;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	}
 	/* using buf_len for last buf - so that extra data can be added */
 	sge->length = mbuf->buf_len - mbuf->data_off;
@@ -165,8 +163,7 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_FIN(ip_fle);
 
 	/* Configure input SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 	sge->length = mbuf->data_len;
 	in_len += sge->length;
 
@@ -174,8 +171,7 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		in_len += sge->length;
 		mbuf = mbuf->next;
@@ -247,13 +243,11 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
 
 	/* Configure Output FLE with dst mbuf data  */
-	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_MBUF_VADDR_TO_IOVA(dst_mbuf));
-	DPAA2_SET_FLE_OFFSET(op_fle, dst_mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(op_fle, rte_pktmbuf_iova(dst_mbuf));
 	DPAA2_SET_FLE_LEN(op_fle, dst_mbuf->buf_len);
 
 	/* Configure Input FLE with src mbuf data */
-	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_MBUF_VADDR_TO_IOVA(src_mbuf));
-	DPAA2_SET_FLE_OFFSET(ip_fle, src_mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(ip_fle, rte_pktmbuf_iova(src_mbuf));
 	DPAA2_SET_FLE_LEN(ip_fle, src_mbuf->pkt_len);
 
 	DPAA2_SET_FD_LEN(fd, ip_fle->length);
@@ -373,16 +367,14 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
 			sym_op->aead.data.length;
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->aead.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->aead.data.offset);
 	sge->length = mbuf->data_len - sym_op->aead.data.offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -420,17 +412,14 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->aead.data.offset +
-				mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->aead.data.offset);
 	sge->length = mbuf->data_len - sym_op->aead.data.offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -535,8 +524,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_SG_EXT(fle);
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(sge, dst->data_off + sym_op->aead.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(dst) + sym_op->aead.data.offset);
 	sge->length = sym_op->aead.data.length;
 
 	if (sess->dir == DIR_ENC) {
@@ -571,9 +559,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->aead.data.offset +
-				sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + sym_op->aead.data.offset);
 	sge->length = sym_op->aead.data.length;
 	if (sess->dir == DIR_DEC) {
 		sge++;
@@ -666,16 +652,14 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
 			sym_op->cipher.data.length;
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->auth.data.offset);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->auth.data.offset);
 	sge->length = mbuf->data_len - sym_op->auth.data.offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -706,17 +690,14 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
-				mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + sym_op->auth.data.offset);
 	sge->length = mbuf->data_len - sym_op->auth.data.offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -830,9 +811,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
 	DPAA2_SET_FLE_SG_EXT(fle);
 
 	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
-				dst->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(dst) + sym_op->cipher.data.offset);
 	sge->length = sym_op->cipher.data.length;
 
 	if (sess->dir == DIR_ENC) {
@@ -862,9 +841,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
 	sge->length = sess->iv.length;
 	sge++;
 
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
-				sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + sym_op->auth.data.offset);
 	sge->length = sym_op->auth.data.length;
 	if (sess->dir == DIR_DEC) {
 		sge++;
@@ -965,8 +942,7 @@ static inline int build_auth_sg_fd(
 		sge++;
 	}
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 
 	if (data_len <= (mbuf->data_len - data_offset)) {
 		sge->length = data_len;
@@ -978,8 +954,7 @@ static inline int build_auth_sg_fd(
 		while ((data_len = data_len - sge->length) &&
 		       (mbuf = mbuf->next)) {
 			sge++;
-			DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-			DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+			DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 			if (data_len > mbuf->data_len)
 				sge->length = mbuf->data_len;
 			else
@@ -1097,8 +1072,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	}
 
 	/* Setting data to authenticate */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + data_offset);
 	sge->length = data_len;
 
 	if (sess->dir == DIR_DEC) {
@@ -1183,16 +1157,14 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	DPAA2_SET_FLE_SG_EXT(op_fle);
 
 	/* o/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 	sge->length = mbuf->data_len - data_offset;
 
 	mbuf = mbuf->next;
 	/* o/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -1212,22 +1184,19 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 
 	/* i/p IV */
 	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sess->iv.length;
 
 	sge++;
 
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf) + data_offset);
 	sge->length = mbuf->data_len - data_offset;
 
 	mbuf = mbuf->next;
 	/* i/p segs */
 	while (mbuf) {
 		sge++;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
-		DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(mbuf));
 		sge->length = mbuf->data_len;
 		mbuf = mbuf->next;
 	}
@@ -1328,8 +1297,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 		sess->iv.length,
 		sym_op->m_src->data_off);
 
-	DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(dst));
-	DPAA2_SET_FLE_OFFSET(fle, data_offset + dst->data_off);
+	DPAA2_SET_FLE_ADDR(fle, rte_pktmbuf_iova(dst) + data_offset);
 
 	fle->length = data_len + sess->iv.length;
 
@@ -1349,8 +1317,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
-	DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+	DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(sym_op->m_src) + data_offset);
 
 	sge->length = data_len;
 	DPAA2_SET_FLE_FIN(sge);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 36c79e450a..4754b9d6f8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -95,29 +95,25 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 		sge->length -= ofs.ofs.cipher.tail;
 	} else {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 		sge->length -= ofs.ofs.cipher.tail;
@@ -148,14 +144,12 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 	sge->length = sess->iv.length;
 
 	sge++;
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.auth.head);
 	sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
 
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 
@@ -244,28 +238,24 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova +  ofs.ofs.cipher.head);
 		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 	} else {
 		/* Configure Output SGE for Encap/Decap */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -299,15 +289,13 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 		sge++;
 	}
 
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + ofs.ofs.cipher.head);
 	sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
 
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 
@@ -412,8 +400,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		sge++;
 	}
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 
 	if (data_len <= (int)(sgl->vec[0].len - data_offset)) {
 		sge->length = data_len;
@@ -423,7 +410,6 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -502,14 +488,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 	if (dest_sgl) {
 		/* Configure Output SGE for Encap/Decap */
 		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = dest_sgl->vec[0].len;
 		out_len += sge->length;
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 			out_len += sge->length;
 		}
@@ -518,14 +502,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 	} else {
 		/* Configure Output SGE for Encap/Decap */
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[0].len;
 		out_len += sge->length;
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 			out_len += sge->length;
 		}
@@ -545,14 +527,12 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 
 	/* Configure input SGE for Encap/Decap */
 	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sgl->vec[0].len;
 	in_len += sge->length;
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 		in_len += sge->length;
 	}
@@ -638,28 +618,24 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	/* OOP */
 	if (dest_sgl) {
 		/* o/p 1st seg */
-		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova + data_offset);
 		sge->length = dest_sgl->vec[0].len - data_offset;
 
 		/* o/p segs */
 		for (i = 1; i < dest_sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = dest_sgl->vec[i].len;
 		}
 	} else {
 		/* o/p 1st seg */
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 		sge->length = sgl->vec[0].len - data_offset;
 
 		/* o/p segs */
 		for (i = 1; i < sgl->num; i++) {
 			sge++;
 			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-			DPAA2_SET_FLE_OFFSET(sge, 0);
 			sge->length = sgl->vec[i].len;
 		}
 	}
@@ -678,21 +654,18 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 
 	/* i/p IV */
 	DPAA2_SET_FLE_ADDR(sge, iv->iova);
-	DPAA2_SET_FLE_OFFSET(sge, 0);
 	sge->length = sess->iv.length;
 
 	sge++;
 
 	/* i/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova + data_offset);
 	sge->length = sgl->vec[0].len - data_offset;
 
 	/* i/p segs */
 	for (i = 1; i < sgl->num; i++) {
 		sge++;
 		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
 		sge->length = sgl->vec[i].len;
 	}
 	DPAA2_SET_FLE_FIN(sge);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 85910bbd8f..23f7c4132d 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -471,8 +471,7 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		sge = &sgt[i];
 		/*Resetting the buffer pool id and offset field*/
 		sge->fin_bpid_offset = 0;
-		DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(cur_seg));
-		DPAA2_SET_FLE_OFFSET(sge, cur_seg->data_off);
+		DPAA2_SET_FLE_ADDR(sge, rte_pktmbuf_iova(cur_seg));
 		sge->length = cur_seg->data_len;
 		if (RTE_MBUF_DIRECT(cur_seg)) {
 			/* if we are using inline SGT in same buffers
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 08/13] crypto/dpaa2_sec: support copy df and dscp in proto offload
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (6 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 07/13] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:33   ` [PATCH v2 09/13] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
                     ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev

This patch adds support for the capability to copy the DSCP and DF
bits from the inner header to the outer header and vice versa.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++++++++++----
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   | 10 ++++++++--
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index daa6a71360..3b96798242 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3193,10 +3193,14 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
 		if (ipsec_xform->options.esn)
 			encap_pdb.options |= PDBOPTS_ESP_ESN;
+		if (ipsec_xform->options.copy_dscp)
+			encap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 		encap_pdb.spi = ipsec_xform->spi;
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			if (ipsec_xform->options.copy_df)
+				encap_pdb.options |= PDBHMO_ESP_DFBIT;
 			encap_pdb.ip_hdr_len = sizeof(struct ip);
 			ip4_hdr.ip_v = IPVERSION;
 			ip4_hdr.ip_hl = 5;
@@ -3261,12 +3265,18 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			break;
 		}
 
-		decap_pdb.options = (ipsec_xform->tunnel.type ==
-				RTE_SECURITY_IPSEC_TUNNEL_IPV4) ?
-				sizeof(struct ip) << 16 :
-				sizeof(struct rte_ipv6_hdr) << 16;
+		if (ipsec_xform->tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			decap_pdb.options = sizeof(struct ip) << 16;
+			if (ipsec_xform->options.copy_df)
+				decap_pdb.options |= PDBHMO_ESP_DFV;
+		} else {
+			decap_pdb.options = sizeof(struct rte_ipv6_hdr) << 16;
+		}
 		if (ipsec_xform->options.esn)
 			decap_pdb.options |= PDBOPTS_ESP_ESN;
+		if (ipsec_xform->options.copy_dscp)
+			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 5a4eb8e2ed..0f29e6299f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -929,7 +929,10 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
-			.options = { 0 },
+			.options = {
+				.copy_df = 1,
+				.copy_dscp = 1,
+			},
 			.replay_win_sz_max = 128
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
@@ -941,7 +944,10 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
-			.options = { 0 },
+			.options = {
+				.copy_df = 1,
+				.copy_dscp = 1,
+			},
 			.replay_win_sz_max = 128
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 09/13] crypto/dpaa2_sec: increase the anti replay window size
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (7 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 08/13] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
@ 2023-09-20 13:33   ` Hemant Agrawal
  2023-09-20 13:34   ` [PATCH v2 10/13] crypto/dpaa2_sec: enable esn support Hemant Agrawal
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:33 UTC (permalink / raw)
  To: gakhil; +Cc: dev

LX216x can support an anti-replay window size of up to 1024.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 0f29e6299f..ee904829ed 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -933,7 +933,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.copy_df = 1,
 				.copy_dscp = 1,
 			},
-			.replay_win_sz_max = 128
+			.replay_win_sz_max = 1024
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
 	},
@@ -948,7 +948,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.copy_df = 1,
 				.copy_dscp = 1,
 			},
-			.replay_win_sz_max = 128
+			.replay_win_sz_max = 1024
 		},
 		.crypto_capabilities = dpaa2_sec_capabilities
 	},
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 10/13] crypto/dpaa2_sec: enable esn support
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (8 preceding siblings ...)
  2023-09-20 13:33   ` [PATCH v2 09/13] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
@ 2023-09-20 13:34   ` Hemant Agrawal
  2023-09-20 13:34   ` [PATCH v2 11/13] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
                     ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:34 UTC (permalink / raw)
  To: gakhil; +Cc: dev

LX216x supports ESN.
Also correctly print the SEC era.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3b96798242..85830347c6 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -4386,7 +4386,7 @@ cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv __rte_unused,
 	else
 		rta_set_sec_era(RTA_SEC_ERA_8);
 
-	DPAA2_SEC_INFO("2-SEC ERA is %d", rta_get_sec_era());
+	DPAA2_SEC_INFO("2-SEC ERA is %d", USER_SEC_ERA(rta_get_sec_era()));
 
 	/* Invoke PMD device initialization function */
 	retval = dpaa2_sec_dev_init(cryptodev);
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index ee904829ed..d3e2df72b0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -932,6 +932,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.options = {
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
 		},
@@ -947,6 +948,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.options = {
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
 		},
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 11/13] crypto/dpaa2_sec: add NAT-T support in IPsec offload
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (9 preceding siblings ...)
  2023-09-20 13:34   ` [PATCH v2 10/13] crypto/dpaa2_sec: enable esn support Hemant Agrawal
@ 2023-09-20 13:34   ` Hemant Agrawal
  2023-09-20 13:34   ` [PATCH v2 12/13] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:34 UTC (permalink / raw)
  To: gakhil; +Cc: dev

This patch adds support for UDP encapsulation (NAT-T) in the
IPsec security protocol offload case.
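
For reference, a hedged sketch of how an application could request
this from the rte_security side (the field names are the ones the
diff below reads, e.g. options.udp_encap and udp.sport/dport; the
fallback to port 4500 when the ports are left at zero is the new
driver behaviour):

#include <rte_security.h>

/* Sketch only: fill the IPsec xform bits relevant to NAT-T. */
static void ipsec_xform_enable_natt(struct rte_security_ipsec_xform *ipsec)
{
	ipsec->proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
	ipsec->mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
	ipsec->options.udp_encap = 1;	/* wrap ESP in UDP (RFC 3948) */
	ipsec->udp.sport = 4500;	/* leaving these 0 also works: the  */
	ipsec->udp.dport = 4500;	/* driver falls back to port 4500   */
}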

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 101 ++++++++++++++------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   3 +
 2 files changed, 75 insertions(+), 29 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 85830347c6..809c357423 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -10,6 +10,7 @@
 #include <unistd.h>
 
 #include <rte_ip.h>
+#include <rte_udp.h>
 #include <rte_mbuf.h>
 #include <rte_cryptodev.h>
 #include <rte_malloc.h>
@@ -3162,9 +3163,9 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 
 	session->ctxt_type = DPAA2_SEC_IPSEC;
 	if (ipsec_xform->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
-		uint8_t *hdr = NULL;
-		struct ip ip4_hdr;
-		struct rte_ipv6_hdr ip6_hdr;
+		uint8_t hdr[48] = {};
+		struct rte_ipv4_hdr *ip4_hdr;
+		struct rte_ipv6_hdr *ip6_hdr;
 		struct ipsec_encap_pdb encap_pdb;
 
 		flc->dhr = SEC_FLC_DHR_OUTBOUND;
@@ -3187,38 +3188,77 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 
 		encap_pdb.options = (IPVERSION << PDBNH_ESP_ENCAP_SHIFT) |
 			PDBOPTS_ESP_OIHI_PDB_INL |
-			PDBOPTS_ESP_IVSRC |
 			PDBHMO_ESP_SNR;
-		if (ipsec_xform->options.dec_ttl)
-			encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
+
+		if (ipsec_xform->options.iv_gen_disable == 0)
+			encap_pdb.options |= PDBOPTS_ESP_IVSRC;
 		if (ipsec_xform->options.esn)
 			encap_pdb.options |= PDBOPTS_ESP_ESN;
 		if (ipsec_xform->options.copy_dscp)
 			encap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
+		if (ipsec_xform->options.ecn)
+			encap_pdb.options |= PDBOPTS_ESP_TECN;
 		encap_pdb.spi = ipsec_xform->spi;
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
 			if (ipsec_xform->options.copy_df)
 				encap_pdb.options |= PDBHMO_ESP_DFBIT;
-			encap_pdb.ip_hdr_len = sizeof(struct ip);
-			ip4_hdr.ip_v = IPVERSION;
-			ip4_hdr.ip_hl = 5;
-			ip4_hdr.ip_len = rte_cpu_to_be_16(sizeof(ip4_hdr));
-			ip4_hdr.ip_tos = ipsec_xform->tunnel.ipv4.dscp;
-			ip4_hdr.ip_id = 0;
-			ip4_hdr.ip_off = 0;
-			ip4_hdr.ip_ttl = ipsec_xform->tunnel.ipv4.ttl;
-			ip4_hdr.ip_p = IPPROTO_ESP;
-			ip4_hdr.ip_sum = 0;
-			ip4_hdr.ip_src = ipsec_xform->tunnel.ipv4.src_ip;
-			ip4_hdr.ip_dst = ipsec_xform->tunnel.ipv4.dst_ip;
-			ip4_hdr.ip_sum = calc_chksum((uint16_t *)(void *)
-					&ip4_hdr, sizeof(struct ip));
-			hdr = (uint8_t *)&ip4_hdr;
+			ip4_hdr = (struct rte_ipv4_hdr *)hdr;
+
+			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv4_hdr);
+			ip4_hdr->version_ihl = RTE_IPV4_VHL_DEF;
+			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl;
+			ip4_hdr->type_of_service =
+				ipsec_xform->tunnel.ipv4.dscp;
+			ip4_hdr->hdr_checksum = 0;
+			ip4_hdr->packet_id = 0;
+			ip4_hdr->fragment_offset = 0;
+			memcpy(&ip4_hdr->src_addr,
+				&ipsec_xform->tunnel.ipv4.src_ip,
+				sizeof(struct in_addr));
+			memcpy(&ip4_hdr->dst_addr,
+				&ipsec_xform->tunnel.ipv4.dst_ip,
+				sizeof(struct in_addr));
+			if (ipsec_xform->options.udp_encap) {
+				uint16_t sport, dport;
+				struct rte_udp_hdr *uh =
+					(struct rte_udp_hdr *) (hdr +
+						sizeof(struct rte_ipv4_hdr));
+
+				sport = ipsec_xform->udp.sport ?
+					ipsec_xform->udp.sport : 4500;
+				dport = ipsec_xform->udp.dport ?
+					ipsec_xform->udp.dport : 4500;
+				uh->src_port = rte_cpu_to_be_16(sport);
+				uh->dst_port = rte_cpu_to_be_16(dport);
+				uh->dgram_len = 0;
+				uh->dgram_cksum = 0;
+
+				ip4_hdr->next_proto_id = IPPROTO_UDP;
+				ip4_hdr->total_length =
+					rte_cpu_to_be_16(
+						sizeof(struct rte_ipv4_hdr) +
+						sizeof(struct rte_udp_hdr));
+				encap_pdb.ip_hdr_len +=
+					sizeof(struct rte_udp_hdr);
+				encap_pdb.options |=
+					PDBOPTS_ESP_NAT | PDBOPTS_ESP_NUC;
+			} else {
+				ip4_hdr->total_length =
+					rte_cpu_to_be_16(
+						sizeof(struct rte_ipv4_hdr));
+				ip4_hdr->next_proto_id = IPPROTO_ESP;
+			}
+
+			ip4_hdr->hdr_checksum = calc_chksum((uint16_t *)
+				(void *)ip4_hdr, sizeof(struct rte_ipv4_hdr));
+
 		} else if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-			ip6_hdr.vtc_flow = rte_cpu_to_be_32(
+			ip6_hdr = (struct rte_ipv6_hdr *)hdr;
+
+			ip6_hdr->vtc_flow = rte_cpu_to_be_32(
 				DPAA2_IPv6_DEFAULT_VTC_FLOW |
 				((ipsec_xform->tunnel.ipv6.dscp <<
 					RTE_IPV6_HDR_TC_SHIFT) &
@@ -3227,18 +3267,17 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 					RTE_IPV6_HDR_FL_SHIFT) &
 					RTE_IPV6_HDR_FL_MASK));
 			/* Payload length will be updated by HW */
-			ip6_hdr.payload_len = 0;
-			ip6_hdr.hop_limits =
-					ipsec_xform->tunnel.ipv6.hlimit;
-			ip6_hdr.proto = (ipsec_xform->proto ==
+			ip6_hdr->payload_len = 0;
+			ip6_hdr->hop_limits = ipsec_xform->tunnel.ipv6.hlimit ?
+					ipsec_xform->tunnel.ipv6.hlimit : 0x40;
+			ip6_hdr->proto = (ipsec_xform->proto ==
 					RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
 					IPPROTO_ESP : IPPROTO_AH;
-			memcpy(&ip6_hdr.src_addr,
+			memcpy(&ip6_hdr->src_addr,
 				&ipsec_xform->tunnel.ipv6.src_addr, 16);
-			memcpy(&ip6_hdr.dst_addr,
+			memcpy(&ip6_hdr->dst_addr,
 				&ipsec_xform->tunnel.ipv6.dst_addr, 16);
 			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv6_hdr);
-			hdr = (uint8_t *)&ip6_hdr;
 		}
 
 		bufsize = cnstr_shdsc_ipsec_new_encap(priv->flc_desc[0].desc,
@@ -3277,6 +3316,10 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options |= PDBOPTS_ESP_ESN;
 		if (ipsec_xform->options.copy_dscp)
 			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
+		if (ipsec_xform->options.ecn)
+			decap_pdb.options |= PDBOPTS_ESP_TECN;
+		if (ipsec_xform->options.dec_ttl)
+			decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index d3e2df72b0..cf6542a222 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -930,6 +930,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
 			.options = {
+				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
 				.esn = 1,
@@ -946,6 +947,8 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
 			.options = {
+				.iv_gen_disable = 1,
+				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
 				.esn = 1,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 12/13] crypto/dpaa2_sec: add support to set df and diffserv
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (10 preceding siblings ...)
  2023-09-20 13:34   ` [PATCH v2 11/13] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
@ 2023-09-20 13:34   ` Hemant Agrawal
  2023-09-20 13:34   ` [PATCH v2 13/13] crypto/dpaax_sec: enable sha224-hmac support for IPsec Hemant Agrawal
  2023-09-21  8:05   ` [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements Akhil Goyal
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:34 UTC (permalink / raw)
  To: gakhil; +Cc: dev

This patch enables the IPsec protocol offload to set and copy the DF bit
and the diffserv field in the outer header, and to decrement the TTL
during tunnel processing.
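
As an illustration only, a minimal sketch of how an application could
request this behaviour through the rte_security IPsec transform (field
names follow rte_security.h; the concrete values are placeholders):

    struct rte_security_ipsec_xform ipsec_xform = {
        .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
        .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
        .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
        .tunnel = {
            .type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
            .ipv4 = {
                /* src_ip/dst_ip omitted for brevity */
                .ttl = 64,    /* outer TTL; 0 falls back to the 0x40 default */
                .dscp = 0x2e, /* placeholder DSCP for the outer header */
                .df = 1,      /* request DF in the outer IPv4 header */
            },
        },
        .options = {
            .copy_df = 1,   /* copy the DF bit from the inner header */
            .copy_dscp = 1, /* copy the diffserv field from the inner header */
            .dec_ttl = 1,   /* decrement TTL during tunnel processing */
        },
    };

The driver then programs these choices into the PDB options of the
shared descriptor built in dpaa2_sec_set_ipsec_session().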

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 32 +++++++++++++--------
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  2 ++
 2 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 809c357423..77ed68ad6d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3202,24 +3202,32 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 		session->dir = DIR_ENC;
 		if (ipsec_xform->tunnel.type ==
 				RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+			if (ipsec_xform->options.dec_ttl)
+				encap_pdb.options |= PDBHMO_ESP_ENCAP_DTTL;
 			if (ipsec_xform->options.copy_df)
 				encap_pdb.options |= PDBHMO_ESP_DFBIT;
 			ip4_hdr = (struct rte_ipv4_hdr *)hdr;
 
 			encap_pdb.ip_hdr_len = sizeof(struct rte_ipv4_hdr);
 			ip4_hdr->version_ihl = RTE_IPV4_VHL_DEF;
-			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl;
-			ip4_hdr->type_of_service =
-				ipsec_xform->tunnel.ipv4.dscp;
+			ip4_hdr->time_to_live = ipsec_xform->tunnel.ipv4.ttl ?
+						ipsec_xform->tunnel.ipv4.ttl :  0x40;
+			ip4_hdr->type_of_service = (ipsec_xform->tunnel.ipv4.dscp<<2);
+
 			ip4_hdr->hdr_checksum = 0;
 			ip4_hdr->packet_id = 0;
-			ip4_hdr->fragment_offset = 0;
-			memcpy(&ip4_hdr->src_addr,
-				&ipsec_xform->tunnel.ipv4.src_ip,
-				sizeof(struct in_addr));
-			memcpy(&ip4_hdr->dst_addr,
-				&ipsec_xform->tunnel.ipv4.dst_ip,
-				sizeof(struct in_addr));
+			if (ipsec_xform->tunnel.ipv4.df) {
+				uint16_t frag_off = 0;
+
+				frag_off |= RTE_IPV4_HDR_DF_FLAG;
+				ip4_hdr->fragment_offset = rte_cpu_to_be_16(frag_off);
+			} else
+				ip4_hdr->fragment_offset = 0;
+
+			memcpy(&ip4_hdr->src_addr, &ipsec_xform->tunnel.ipv4.src_ip,
+			       sizeof(struct in_addr));
+			memcpy(&ip4_hdr->dst_addr, &ipsec_xform->tunnel.ipv4.dst_ip,
+			       sizeof(struct in_addr));
 			if (ipsec_xform->options.udp_encap) {
 				uint16_t sport, dport;
 				struct rte_udp_hdr *uh =
@@ -3309,6 +3317,8 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options = sizeof(struct ip) << 16;
 			if (ipsec_xform->options.copy_df)
 				decap_pdb.options |= PDBHMO_ESP_DFV;
+			if (ipsec_xform->options.dec_ttl)
+				decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 		} else {
 			decap_pdb.options = sizeof(struct rte_ipv6_hdr) << 16;
 		}
@@ -3318,8 +3328,6 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
 			decap_pdb.options |= PDBOPTS_ESP_DIFFSERV;
 		if (ipsec_xform->options.ecn)
 			decap_pdb.options |= PDBOPTS_ESP_TECN;
-		if (ipsec_xform->options.dec_ttl)
-			decap_pdb.options |= PDBHMO_ESP_DECAP_DTTL;
 
 		if (ipsec_xform->replay_win_sz) {
 			uint32_t win_sz;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index cf6542a222..1c0bc3d6de 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -933,6 +933,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.dec_ttl = 1,
 				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
@@ -951,6 +952,7 @@ static const struct rte_security_capability dpaa2_sec_security_cap[] = {
 				.udp_encap = 1,
 				.copy_df = 1,
 				.copy_dscp = 1,
+				.dec_ttl = 1,
 				.esn = 1,
 			},
 			.replay_win_sz_max = 1024
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 13/13] crypto/dpaax_sec: enable sha224-hmac support for IPsec
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (11 preceding siblings ...)
  2023-09-20 13:34   ` [PATCH v2 12/13] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
@ 2023-09-20 13:34   ` Hemant Agrawal
  2023-09-21  8:05   ` [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements Akhil Goyal
  13 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-20 13:34 UTC (permalink / raw)
  To: gakhil; +Cc: dev

Enable SHA224-HMAC support in IPsec protocol offload mode for the
dpaax drivers.
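
As an illustration only, a minimal sketch of the auth transform an
application might use to select SHA224-HMAC for such a session (standard
rte_cryptodev symmetric xform fields; the key material and digest length
below are placeholders):

    static uint8_t hmac_key[28]; /* placeholder key material */

    struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
            .key = {
                .data = hmac_key,
                .length = sizeof(hmac_key),
            },
            /* the drivers map digest_length to one of the new
             * OP_PCL_IPSEC_HMAC_SHA2_224_* protocol identifiers
             */
            .digest_length = 14,
        },
    };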

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/common/dpaax/caamflib/desc.h             |  5 ++++-
 drivers/common/dpaax/caamflib/desc/ipsec.h       |  5 +++++
 drivers/common/dpaax/caamflib/rta/protocol_cmd.h |  5 ++++-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c      | 10 +++++++++-
 drivers/crypto/dpaa_sec/dpaa_sec.c               | 10 +++++++++-
 5 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/common/dpaax/caamflib/desc.h b/drivers/common/dpaax/caamflib/desc.h
index 635d6bad07..4a1285c4d4 100644
--- a/drivers/common/dpaax/caamflib/desc.h
+++ b/drivers/common/dpaax/caamflib/desc.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016, 2019 NXP
+ * Copyright 2016, 2019, 2023 NXP
  *
  */
 
@@ -662,6 +662,9 @@ extern enum rta_sec_era rta_sec_era;
 #define OP_PCL_IPSEC_HMAC_SHA2_256_128		 0x000c
 #define OP_PCL_IPSEC_HMAC_SHA2_384_192		 0x000d
 #define OP_PCL_IPSEC_HMAC_SHA2_512_256		 0x000e
+#define OP_PCL_IPSEC_HMAC_SHA2_224_96		 0x00f2
+#define OP_PCL_IPSEC_HMAC_SHA2_224_112		 0x00f4
+#define OP_PCL_IPSEC_HMAC_SHA2_224_224		 0x00f8
 
 /* For SRTP - OP_PCLID_SRTP */
 #define OP_PCL_SRTP_CIPHER_MASK			 0xff00
diff --git a/drivers/common/dpaax/caamflib/desc/ipsec.h b/drivers/common/dpaax/caamflib/desc/ipsec.h
index 14e80baf77..95fc3ea5ba 100644
--- a/drivers/common/dpaax/caamflib/desc/ipsec.h
+++ b/drivers/common/dpaax/caamflib/desc/ipsec.h
@@ -710,6 +710,11 @@ static inline void __gen_auth_key(struct program *program,
 	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
 		dkp_protid = OP_PCLID_DKP_SHA512;
 		break;
+	case OP_PCL_IPSEC_HMAC_SHA2_224_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_224_112:
+	case OP_PCL_IPSEC_HMAC_SHA2_224_224:
+		dkp_protid = OP_PCLID_DKP_SHA224;
+		break;
 	default:
 		KEY(program, KEY2, authdata->key_enc_flags, authdata->key,
 		    authdata->keylen, INLINE_KEY(authdata));
diff --git a/drivers/common/dpaax/caamflib/rta/protocol_cmd.h b/drivers/common/dpaax/caamflib/rta/protocol_cmd.h
index ac5c8af716..5b33f103be 100644
--- a/drivers/common/dpaax/caamflib/rta/protocol_cmd.h
+++ b/drivers/common/dpaax/caamflib/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016,2019 NXP
+ * Copyright 2016,2019,2023 NXP
  *
  */
 
@@ -241,6 +241,9 @@ __rta_ipsec_proto(uint16_t protoinfo)
 	case OP_PCL_IPSEC_HMAC_MD5_128:
 	case OP_PCL_IPSEC_HMAC_SHA1_160:
 	case OP_PCL_IPSEC_AES_CMAC_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_224_96:
+	case OP_PCL_IPSEC_HMAC_SHA2_224_112:
+	case OP_PCL_IPSEC_HMAC_SHA2_224_224:
 	case OP_PCL_IPSEC_HMAC_SHA2_256_128:
 	case OP_PCL_IPSEC_HMAC_SHA2_384_192:
 	case OP_PCL_IPSEC_HMAC_SHA2_512_256:
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 77ed68ad6d..bb5a2c629e 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3005,6 +3005,15 @@ dpaa2_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
 		authdata->algtype = OP_PCL_IPSEC_HMAC_MD5_96;
 		authdata->algmode = OP_ALG_AAI_HMAC;
 		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		authdata->algmode = OP_ALG_AAI_HMAC;
+		if (session->digest_length == 6)
+			authdata->algtype = OP_PCL_IPSEC_HMAC_SHA2_224_96;
+		else if (session->digest_length == 14)
+			authdata->algtype = OP_PCL_IPSEC_HMAC_SHA2_224_224;
+		else
+			authdata->algtype = OP_PCL_IPSEC_HMAC_SHA2_224_112;
+		break;
 	case RTE_CRYPTO_AUTH_SHA256_HMAC:
 		authdata->algtype = OP_PCL_IPSEC_HMAC_SHA2_256_128;
 		authdata->algmode = OP_ALG_AAI_HMAC;
@@ -3032,7 +3041,6 @@ dpaa2_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
 	case RTE_CRYPTO_AUTH_NULL:
 		authdata->algtype = OP_PCL_IPSEC_HMAC_NULL;
 		break;
-	case RTE_CRYPTO_AUTH_SHA224_HMAC:
 	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
 	case RTE_CRYPTO_AUTH_SHA1:
 	case RTE_CRYPTO_AUTH_SHA256:
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 0fcba95916..a301e8edb2 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -2817,6 +2817,15 @@ dpaa_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
 			"+++Using sha256-hmac truncated len is non-standard,"
 			"it will not work with lookaside proto");
 		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		session->auth_key.algmode = OP_ALG_AAI_HMAC;
+		if (session->digest_length == 6)
+			session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_224_96;
+		else if (session->digest_length == 14)
+			session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_224_224;
+		else
+			session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_224_112;
+		break;
 	case RTE_CRYPTO_AUTH_SHA384_HMAC:
 		session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_384_192;
 		session->auth_key.algmode = OP_ALG_AAI_HMAC;
@@ -2836,7 +2845,6 @@ dpaa_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
 		session->auth_key.alg = OP_PCL_IPSEC_AES_XCBC_MAC_96;
 		session->auth_key.algmode = OP_ALG_AAI_XCBC_MAC;
 		break;
-	case RTE_CRYPTO_AUTH_SHA224_HMAC:
 	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
 	case RTE_CRYPTO_AUTH_SHA1:
 	case RTE_CRYPTO_AUTH_SHA256:
-- 
2.17.1


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements
  2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
                     ` (12 preceding siblings ...)
  2023-09-20 13:34   ` [PATCH v2 13/13] crypto/dpaax_sec: enable sha224-hmac support for IPsec Hemant Agrawal
@ 2023-09-21  8:05   ` Akhil Goyal
  2023-09-21  8:55     ` Hemant Agrawal
  13 siblings, 1 reply; 30+ messages in thread
From: Akhil Goyal @ 2023-09-21  8:05 UTC (permalink / raw)
  To: Hemant Agrawal, Franck Lenormand, Apeksha Gupta, Vanshika Shukla,
	Gagandeep Singh
  Cc: dev

> v2: compilation fixes
> 
> This series include misc enhancements in dpaax_sec drivers.
> 
> - improving the IPsec protocol offload features
> - enhancing PDCP protocol processing
> - code optimization and cleanup
> 
> Apeksha Gupta (1):
>   crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set
> 
> Gagandeep Singh (3):
>   common/dpaax: update IPsec base descriptor length
>   common/dpaax: change mode to wait in shared desc
>   crypto/dpaax_sec: set the authdata in non-auth case
> 
> Hemant Agrawal (8):
>   crypto/dpaa2_sec: supporting null cipher and auth
>   crypto/dpaa_sec: supporting null cipher and auth
>   crypto/dpaa2_sec: support copy df and dscp in proto offload
>   crypto/dpaa2_sec: increase the anti replay window size
>   crypto/dpaa2_sec: enable esn support
>   crypto/dpaa2_sec: add NAT-T support in IPsec offload
>   crypto/dpaa2_sec: add support to set df and diffserv
>   crypto/dpaax_sec: enable sha224-hmac support for IPsec
> 
> Vanshika Shukla (1):
>   crypto/dpaa2_sec: initialize the pdcp alg to null
> 
>  drivers/common/dpaax/caamflib/desc.h          |   5 +-
>  drivers/common/dpaax/caamflib/desc/ipsec.h    |   9 +-
>  drivers/common/dpaax/caamflib/desc/pdcp.h     |  82 +++---
>  .../common/dpaax/caamflib/rta/protocol_cmd.h  |   5 +-
>  .../dpaax/caamflib/rta/sec_run_time_asm.h     |   2 +-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   | 245 +++++++++++-------
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h     |  64 ++++-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  47 +---
>  drivers/crypto/dpaa_sec/dpaa_sec.c            |  15 +-
>  drivers/crypto/dpaa_sec/dpaa_sec.h            |  42 ++-
>  drivers/net/dpaa2/dpaa2_rxtx.c                |   3 +-
>  11 files changed, 326 insertions(+), 193 deletions(-)
> 
Please improve the wording of patch titles and descriptions.
Applied to dpdk-next-crypto.
Please review the patches as applied and make sure not to repeat these issues.

crypto/dpaax_sec: support SHA224-HMAC for IPsec
crypto/dpaa2_sec: support copy DF and diffserv
crypto/dpaa2_sec: support NAT-T in IPsec offload
crypto/dpaa2_sec: support ESN
crypto/dpaa2_sec: increase anti replay window size
crypto/dpaa2_sec: support copy DF and DSCP in IPsec
crypto/dpaa2_sec: prevent FLE offset overflow
crypto/dpaax_sec: set authdata in non-auth case
crypto/dpaa_sec: support null cipher and auth
crypto/dpaa2_sec: support null cipher and auth
crypto/dpaa2_sec: initialize PDCP alg to null
common/dpaax: change mode to wait in shared desc
common/dpaax: update IPsec base descriptor length

A few capability changes in the dpaa2 driver are specific to LX2160,
but they are advertised for other dpaa2 devices as well. I hope those are taken care of with appropriate checks.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements
  2023-09-21  8:05   ` [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements Akhil Goyal
@ 2023-09-21  8:55     ` Hemant Agrawal
  0 siblings, 0 replies; 30+ messages in thread
From: Hemant Agrawal @ 2023-09-21  8:55 UTC (permalink / raw)
  To: Akhil Goyal, Franck Lenormand, Apeksha Gupta, Vanshika Shukla,
	Gagandeep Singh
  Cc: dev

Hi Akhil

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Thursday, September 21, 2023 1:35 PM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; Franck Lenormand
> <franck.lenormand@nxp.com>; Apeksha Gupta <apeksha.gupta@nxp.com>;
> Vanshika Shukla <vanshika.shukla@nxp.com>; Gagandeep Singh
> <G.Singh@nxp.com>
> Cc: dev@dpdk.org
> Subject: RE: [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements
> Importance: High
> 
> > v2: compilation fixes
> >
> > This series include misc enhancements in dpaax_sec drivers.
> >
> > - improving the IPsec protocol offload features
> > - enhancing PDCP protocol processing
> > - code optimization and cleanup
> >
> > Apeksha Gupta (1):
> >   crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set
> >
> > Gagandeep Singh (3):
> >   common/dpaax: update IPsec base descriptor length
> >   common/dpaax: change mode to wait in shared desc
> >   crypto/dpaax_sec: set the authdata in non-auth case
> >
> > Hemant Agrawal (8):
> >   crypto/dpaa2_sec: supporting null cipher and auth
> >   crypto/dpaa_sec: supporting null cipher and auth
> >   crypto/dpaa2_sec: support copy df and dscp in proto offload
> >   crypto/dpaa2_sec: increase the anti replay window size
> >   crypto/dpaa2_sec: enable esn support
> >   crypto/dpaa2_sec: add NAT-T support in IPsec offload
> >   crypto/dpaa2_sec: add support to set df and diffserv
> >   crypto/dpaax_sec: enable sha224-hmac support for IPsec
> >
> > Vanshika Shukla (1):
> >   crypto/dpaa2_sec: initialize the pdcp alg to null
> >
> >  drivers/common/dpaax/caamflib/desc.h          |   5 +-
> >  drivers/common/dpaax/caamflib/desc/ipsec.h    |   9 +-
> >  drivers/common/dpaax/caamflib/desc/pdcp.h     |  82 +++---
> >  .../common/dpaax/caamflib/rta/protocol_cmd.h  |   5 +-
> >  .../dpaax/caamflib/rta/sec_run_time_asm.h     |   2 +-
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   | 245 +++++++++++-------
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h     |  64 ++++-
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  47 +---
> >  drivers/crypto/dpaa_sec/dpaa_sec.c            |  15 +-
> >  drivers/crypto/dpaa_sec/dpaa_sec.h            |  42 ++-
> >  drivers/net/dpaa2/dpaa2_rxtx.c                |   3 +-
> >  11 files changed, 326 insertions(+), 193 deletions(-)
> >
> Please improve writing the title and description of patches.
> Applied to dpdk-next-crypto.
> Please review the patches applied. Make sure not to repeat these things.
[Hemant] Thanks
> 
> crypto/dpaax_sec: support SHA224-HMAC for IPsec
> crypto/dpaa2_sec: support copy DF and diffserv
> crypto/dpaa2_sec: support NAT-T in IPsec offload
> crypto/dpaa2_sec: support ESN
> crypto/dpaa2_sec: increase anti replay window size
> crypto/dpaa2_sec: support copy DF and DSCP in IPsec
> crypto/dpaa2_sec: prevent FLE offset overflow
> crypto/dpaax_sec: set authdata in non-auth case
> crypto/dpaa_sec: support null cipher and auth
> crypto/dpaa2_sec: support null cipher and auth
> crypto/dpaa2_sec: initialize PDCP alg to null
> common/dpaax: change mode to wait in shared desc
> common/dpaax: update IPsec base descriptor length
> 
> Few capability changes in dpaa2 driver were specific to LX2160.
> But are common to other dpaa2 devices. I hope those are taken care of with
> appropriate checks.
[Hemant] Yes, your observation is correct. However, we have not tested these on dpaa yet; once we do, we will submit a patch.

Regards,
Hemant


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2023-09-21  8:55 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-23  7:08 [PATCH 00/12] crypto/dpaax_sec: misc enhancements Hemant Agrawal
2023-08-23  7:08 ` [PATCH 01/12] common/dpaax: update IPsec base descriptor length Hemant Agrawal
2023-08-23  7:08 ` [PATCH 02/12] common/dpaax: change mode to wait in shared desc Hemant Agrawal
2023-08-23  7:08 ` [PATCH 03/12] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
2023-08-23  7:08 ` [PATCH 04/12] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
2023-08-23  7:08 ` [PATCH 05/12] crypto/dpaa_sec: " Hemant Agrawal
2023-08-23  7:08 ` [PATCH 06/12] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
2023-08-23  7:08 ` [PATCH 07/12] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
2023-08-23  7:08 ` [PATCH 08/12] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
2023-08-23  7:08 ` [PATCH 09/12] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
2023-08-23  7:08 ` [PATCH 10/12] crypto/dpaa2_sec: enable esn support Hemant Agrawal
2023-08-23  7:08 ` [PATCH 11/12] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
2023-08-23  7:08 ` [PATCH 12/12] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
2023-09-18 10:31 ` [EXT] [PATCH 00/12] crypto/dpaax_sec: misc enhancements Akhil Goyal
2023-09-20 13:33 ` [PATCH v2 00/13] " Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 01/13] common/dpaax: update IPsec base descriptor length Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 02/13] common/dpaax: change mode to wait in shared desc Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 03/13] crypto/dpaa2_sec: initialize the pdcp alg to null Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 04/13] crypto/dpaa2_sec: supporting null cipher and auth Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 05/13] crypto/dpaa_sec: " Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 06/13] crypto/dpaax_sec: set the authdata in non-auth case Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 07/13] crypto/dpaa2_sec: enhance dpaa FD FL FMT offset set Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 08/13] crypto/dpaa2_sec: support copy df and dscp in proto offload Hemant Agrawal
2023-09-20 13:33   ` [PATCH v2 09/13] crypto/dpaa2_sec: increase the anti replay window size Hemant Agrawal
2023-09-20 13:34   ` [PATCH v2 10/13] crypto/dpaa2_sec: enable esn support Hemant Agrawal
2023-09-20 13:34   ` [PATCH v2 11/13] crypto/dpaa2_sec: add NAT-T support in IPsec offload Hemant Agrawal
2023-09-20 13:34   ` [PATCH v2 12/13] crypto/dpaa2_sec: add support to set df and diffserv Hemant Agrawal
2023-09-20 13:34   ` [PATCH v2 13/13] crypto/dpaax_sec: enable sha224-hmac support for IPsec Hemant Agrawal
2023-09-21  8:05   ` [EXT] [PATCH v2 00/13] crypto/dpaax_sec: misc enhancements Akhil Goyal
2023-09-21  8:55     ` Hemant Agrawal
