DPDK patches and discussions
* [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB
@ 2023-04-21 10:13 Brian Dooley
  2023-04-24  5:46 ` [EXT] " Akhil Goyal
                   ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Brian Dooley @ 2023-04-21 10:13 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds partial
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
Some out-of-place tests are still failing.
Only some in-place tests are passing.
Working on adding support for this feature in v2.
---
 app/test/1.diff                        | 0
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 3 ++-
 2 files changed, 2 insertions(+), 1 deletion(-)
 create mode 100644 app/test/1.diff

diff --git a/app/test/1.diff b/app/test/1.diff
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index ac20d01937..fbb556af87 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -2335,7 +2335,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [EXT] [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB
  2023-04-21 10:13 [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB Brian Dooley
@ 2023-04-24  5:46 ` Akhil Goyal
  2023-04-24 13:49   ` Dooley, Brian
  2023-07-20 10:38 ` [PATCH v1] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 1 reply; 32+ messages in thread
From: Akhil Goyal @ 2023-04-24  5:46 UTC (permalink / raw)
  To: Brian Dooley, Kai Ji, Pablo de Lara; +Cc: dev

> Subject: [EXT] [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in
> AESNI_MB
> AESNI_MB PMD does not support Digest Encrypted. This patch adds partial
> support for this feature.

I do not get it; what is the point of adding partial support?
It should be added when it is fully supported.
Also, whenever you add a feature, update the documentation as well.


> 
> Signed-off-by: Brian Dooley <brian.dooley@intel.com>
> ---
> Some out-of-place tests are still failing.
> Only some in-place tests are passing.
> Working on adding support for this feature in v2.

You cannot just send half-cooked patches.

> ---
>  app/test/1.diff                        | 0
>  drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 3 ++-
>  2 files changed, 2 insertions(+), 1 deletion(-)
>  create mode 100644 app/test/1.diff
> 
> diff --git a/app/test/1.diff b/app/test/1.diff
> new file mode 100644
> index 0000000000..e69de29bb2
This file was accidentally added.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [EXT] [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB
  2023-04-24  5:46 ` [EXT] " Akhil Goyal
@ 2023-04-24 13:49   ` Dooley, Brian
  0 siblings, 0 replies; 32+ messages in thread
From: Dooley, Brian @ 2023-04-24 13:49 UTC (permalink / raw)
  To: Akhil Goyal, Ji, Kai, De Lara Guarch, Pablo; +Cc: dev

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday 24 April 2023 06:46
> To: Dooley, Brian <brian.dooley@intel.com>; Ji, Kai <kai.ji@intel.com>; De
> Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [EXT] [PATCH v1] crypto/ipsec_mb: add digest encrypted feature
> in AESNI_MB
> 
> > Subject: [EXT] [PATCH v1] crypto/ipsec_mb: add digest encrypted
> > feature in AESNI_MB AESNI_MB PMD does not support Digest Encrypted.
> > This patch adds partial support for this feature.
> 
> I do not get it; what is the point of adding partial support?
> It should be added when it is fully supported.
> Also, whenever you add a feature, update the documentation as well.
Apologies for this. This patch has a bit more work to do and should have been sent as an RFC.
I am confident that it can be completed for the release.
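
For context, Digest Encrypted means the generated authentication tag lands
inside the region that is subsequently encrypted, so an application is
expected to check the device feature flag before requesting that layout.
A minimal sketch of such a check (illustrative only; the helper name
device_supports_digest_encrypted is made up, the cryptodev calls are the
standard API):

#include <rte_cryptodev.h>

/* Illustrative helper: return 1 if the device advertises support for
 * encrypting the digest as part of a cipher-auth chain, 0 otherwise.
 */
static int
device_supports_digest_encrypted(uint8_t dev_id)
{
	struct rte_cryptodev_info dev_info;

	rte_cryptodev_info_get(dev_id, &dev_info);
	return (dev_info.feature_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED) != 0;
}
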
> 
> 
> >
> > Signed-off-by: Brian Dooley <brian.dooley@intel.com>
> > ---
> > Some out-of-place tests are still failing.
> > Only some in-place tests are passing.
> > Working on adding support for this feature in v2.
> 
> You cannot just send half-cooked patches.
> 
> > ---
> >  app/test/1.diff                        | 0
> >  drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 3 ++-
> >  2 files changed, 2 insertions(+), 1 deletion(-)  create mode 100644
> > app/test/1.diff
> >
> > diff --git a/app/test/1.diff b/app/test/1.diff new file mode 100644
> > index 0000000000..e69de29bb2
> This file was accidentally added.

Thanks,
Brian

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v1] crypto/ipsec_mb: add digest encrypted feature
  2023-04-21 10:13 [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB Brian Dooley
  2023-04-24  5:46 ` [EXT] " Akhil Goyal
@ 2023-07-20 10:38 ` Brian Dooley
  2023-08-21 14:42   ` [PATCH v2] " Brian Dooley
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 1 reply; 32+ messages in thread
From: Brian Dooley @ 2023-07-20 10:38 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds support
for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 107 +++++++++++++++++++++++--
 1 file changed, 102 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..48b0e66f24 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,52 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes =	auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+						if (unencrypted_bytes > 0)
+							rte_memcpy(
+							rte_pktmbuf_mtod_offset(
+								op->sym->m_dst, uint8_t *, cipher_size),
+							rte_pktmbuf_mtod_offset(
+								op->sym->m_src, uint8_t *, cipher_size),
+							unencrypted_bytes);
+					}
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2651,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2] crypto/ipsec_mb: add digest encrypted feature
  2023-07-20 10:38 ` [PATCH v1] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-08-21 14:42   ` Brian Dooley
  2023-08-25  8:41     ` [PATCH v3] " Brian Dooley
  0 siblings, 1 reply; 32+ messages in thread
From: Brian Dooley @ 2023-08-21 14:42 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 107 +++++++++++++++++++++++--
 1 file changed, 102 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..66f3c82e80 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,52 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes =	auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(
+							op->sym->m_dst, uint8_t *, cipher_size),
+						rte_pktmbuf_mtod_offset(
+							op->sym->m_src, uint8_t *, cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2651,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v3] crypto/ipsec_mb: add digest encrypted feature
  2023-08-21 14:42   ` [PATCH v2] " Brian Dooley
@ 2023-08-25  8:41     ` Brian Dooley
  2023-09-05 15:12       ` [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  0 siblings, 1 reply; 32+ messages in thread
From: Brian Dooley @ 2023-08-25  8:41 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 107 +++++++++++++++++++-
 2 files changed, 103 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..66f3c82e80 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,52 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes =	auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(
+							op->sym->m_dst, uint8_t *, cipher_size),
+						rte_pktmbuf_mtod_offset(
+							op->sym->m_src, uint8_t *, cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2651,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD
  2023-08-25  8:41     ` [PATCH v3] " Brian Dooley
@ 2023-09-05 15:12       ` Brian Dooley
  2023-09-05 15:12         ` [PATCH v4 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
                           ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 15:12 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the IV data in the SNOW3G and ZUC test
vectors was incorrect; these algorithms require non-zero-length IVs.
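
To make the target layout concrete: the digest is appended directly after
the authenticated data and the cipher region is extended to cover it, so
the tag is encrypted together with the payload. A minimal sketch for a
byte-oriented cipher (the lengths and the helper name are made up for
illustration; bit-length algorithms such as SNOW3G/ZUC express these
fields in bits instead):

#include <rte_crypto.h>
#include <rte_mbuf.h>

/* Illustrative only: 512 bytes of authenticated data, a 4-byte digest
 * appended right after it, and a cipher region covering data + digest.
 */
static void
fill_digest_encrypted_op(struct rte_crypto_op *op, struct rte_mbuf *m)
{
	op->sym->m_src = m;
	op->sym->auth.data.offset = 0;
	op->sym->auth.data.length = 512;
	op->sym->auth.digest.data =
		rte_pktmbuf_mtod_offset(m, uint8_t *, 512);
	op->sym->auth.digest.phys_addr =
		rte_pktmbuf_iova_offset(m, 512);
	op->sym->cipher.data.offset = 0;
	op->sym->cipher.data.length = 516;	/* data + 4-byte digest */
}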

v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor

Brian Dooley (2):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors

 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 107 ++++++++++++++++++-
 3 files changed, 109 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v4 1/2] crypto/ipsec_mb: add digest encrypted feature
  2023-09-05 15:12       ` [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-05 15:12         ` Brian Dooley
  2023-09-05 15:12         ` [PATCH v4 2/2] test/crypto: fix IV in some vectors Brian Dooley
  2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 15:12 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..552d1f16b5 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+						if (unencrypted_bytes > 0)
+							rte_memcpy(
+							rte_pktmbuf_mtod_offset(
+								op->sym->m_dst, uint8_t *, cipher_size),
+							rte_pktmbuf_mtod_offset(
+								op->sym->m_src, uint8_t *, cipher_size),
+							unencrypted_bytes);
+					}
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v4 2/2] test/crypto: fix IV in some vectors
  2023-09-05 15:12       ` [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-05 15:12         ` [PATCH v4 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-05 15:12         ` Brian Dooley
  2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 15:12 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang; +Cc: dev, Brian Dooley, adamx.dybkowski

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD
  2023-09-05 15:12       ` [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-05 15:12         ` [PATCH v4 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-05 15:12         ` [PATCH v4 2/2] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-05 16:15         ` Brian Dooley
  2023-09-05 16:15           ` [PATCH v5 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
                             ` (2 more replies)
  2 siblings, 3 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 16:15 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the IV data in the SNOW3G and ZUC test
vectors was incorrect; these algorithms require non-zero-length IVs.

v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor
v5:
Fix checkpatch warnings

Brian Dooley (2):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors

 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 107 ++++++++++++++++++-
 3 files changed, 109 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 1/2] crypto/ipsec_mb: add digest encrypted feature
  2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-05 16:15           ` Brian Dooley
  2023-09-05 16:15           ` [PATCH v5 2/2] test/crypto: fix IV in some vectors Brian Dooley
  2023-09-07 10:26           ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 16:15 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
Add comments and small refactor
v5:
Fix checkpatch warnings
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+						cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+						cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 2/2] test/crypto: fix IV in some vectors
  2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-05 16:15           ` [PATCH v5 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-05 16:15           ` Brian Dooley
  2023-09-07 10:26           ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-05 16:15 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang; +Cc: dev, Brian Dooley, adamx.dybkowski

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD
  2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-05 16:15           ` [PATCH v5 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-05 16:15           ` [PATCH v5 2/2] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-07 10:26           ` Brian Dooley
  2023-09-07 10:26             ` [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-07 10:26             ` [PATCH v6 2/2] test/crypto: fix IV in some vectors Brian Dooley
  2 siblings, 2 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 10:26 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the IV data in the SNOW3G and ZUC test
vectors was incorrect; these algorithms require non-zero-length IVs.

v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto

Brian Dooley (2):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors

 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 107 ++++++++++++++++++-
 3 files changed, 109 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature
  2023-09-07 10:26           ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-07 10:26             ` Brian Dooley
  2023-09-07 15:25               ` Power, Ciara
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-07 10:26             ` [PATCH v6 2/2] test/crypto: fix IV in some vectors Brian Dooley
  1 sibling, 2 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 10:26 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang, Kai Ji, Pablo de Lara; +Cc: dev, Brian Dooley

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
Add comments and small refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto
---
 app/test/test_cryptodev.c                   |   6 ++
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 3 files changed, 111 insertions(+), 5 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 956268bfcd..70f6b7ece1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
 			tdata->digest.len) < 0)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
@@ -7829,6 +7832,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
 	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+						cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+						cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v6 2/2] test/crypto: fix IV in some vectors
  2023-09-07 10:26           ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-07 10:26             ` [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-07 10:26             ` Brian Dooley
  2023-09-07 15:25               ` Power, Ciara
  1 sibling, 1 reply; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 10:26 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang; +Cc: dev, Brian Dooley, adamx.dybkowski

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature
  2023-09-07 10:26             ` [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-07 15:25               ` Power, Ciara
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  1 sibling, 0 replies; 32+ messages in thread
From: Power, Ciara @ 2023-09-07 15:25 UTC (permalink / raw)
  To: Dooley, Brian, Akhil Goyal, Fan Zhang, Ji, Kai, De Lara Guarch, Pablo
  Cc: dev, Dooley, Brian


Hi Brian,

> -----Original Message-----
> From: Brian Dooley <brian.dooley@intel.com>
> Sent: Thursday, September 7, 2023 11:26 AM
> To: Akhil Goyal <gakhil@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>;
> Ji, Kai <kai.ji@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Dooley, Brian <brian.dooley@intel.com>
> Subject: [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature
> 
> AESNI_MB PMD does not support Digest Encrypted. This patch adds a check
> and support for this feature.
> 
> Signed-off-by: Brian Dooley <brian.dooley@intel.com>
> ---
> v2:
> Fixed CHECKPATCH warning
> v3:
> Add Digest encrypted support to docs
> v4:
> Add comments and small refactor
> v5:
> Fix checkpatch warnings
> v6:
> Add skipping tests for synchronous crypto
> ---
>  app/test/test_cryptodev.c                   |   6 ++
>  doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
>  drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109
> +++++++++++++++++++-
>  3 files changed, 111 insertions(+), 5 deletions(-)
> 
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index
> 956268bfcd..70f6b7ece1 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct
> wireless_test_data *tdata,
>  			tdata->digest.len) < 0)
>  		return TEST_SKIPPED;
> 
> +	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> +		return TEST_SKIPPED;
> +
>  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> 
>  	uint64_t feat_flags = dev_info.feature_flags; @@ -7829,6 +7832,9
> @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data
> *tdata,
>  	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
>  		return TEST_SKIPPED;
> 
> +	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> +		return TEST_SKIPPED;
> +
>  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> 
<snip>

Small thing, I think the above fixes should be in their own fix patch.

Code changes look good to me. Can keep my ack on v7 with the fixes split out.

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH v6 2/2] test/crypto: fix IV in some vectors
  2023-09-07 10:26             ` [PATCH v6 2/2] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-07 15:25               ` Power, Ciara
  0 siblings, 0 replies; 32+ messages in thread
From: Power, Ciara @ 2023-09-07 15:25 UTC (permalink / raw)
  To: Dooley, Brian, Akhil Goyal, Fan Zhang; +Cc: dev, Dooley, Brian, adamx.dybkowski



> -----Original Message-----
> From: Brian Dooley <brian.dooley@intel.com>
> Sent: Thursday, September 7, 2023 11:26 AM
> To: Akhil Goyal <gakhil@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>
> Cc: dev@dpdk.org; Dooley, Brian <brian.dooley@intel.com>;
> adamx.dybkowski@intel.com
> Subject: [PATCH v6 2/2] test/crypto: fix IV in some vectors
> 
> SNOW3G and ZUC algorithms require non-zero length IVs.
> 
> Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
> Cc: adamx.dybkowski@intel.com
> 
> Signed-off-by: Brian Dooley <brian.dooley@intel.com>
> ---
>  app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD
  2023-09-07 10:26             ` [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-07 15:25               ` Power, Ciara
@ 2023-09-07 16:12               ` Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
                                   ` (3 more replies)
  1 sibling, 4 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 16:12 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the cipher IVs in some SNOW3G and ZUC test
vectors were zero length, even though these algorithms require non-zero
length IVs.
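
For context, a rough sketch of what a digest-encrypted operation looks like
from the application side is included below. It is illustrative only: mp,
sess, m and data_len are assumed to exist already, the session is an
auth-generate then encrypt chain, and IV/session setup and error handling
are omitted.

    struct rte_crypto_op *op;
    uint16_t digest_len = 4;  /* example tag length; depends on the auth algorithm */

    op = rte_crypto_op_alloc(mp, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
    op->sym->m_src = m;       /* single linear mbuf: plaintext plus room for the tag */
    rte_crypto_op_attach_sym_session(op, sess);

    /* Authenticate the payload and place the digest directly after it. */
    op->sym->auth.data.offset = 0;
    op->sym->auth.data.length = data_len;
    op->sym->auth.digest.data = rte_pktmbuf_mtod_offset(m, uint8_t *, data_len);
    op->sym->auth.digest.phys_addr = rte_pktmbuf_iova_offset(m, data_len);

    /* Encrypt the payload plus the appended digest, so the digest ends up
     * inside the ciphered region (the case this feature covers). */
    op->sym->cipher.data.offset = 0;
    op->sym->cipher.data.length = data_len + digest_len;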

v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto
v7:
Separate synchronous fix into separate commit

Brian Dooley (3):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors
  test/crypto: fix failing synchronous tests

 app/test/test_cryptodev.c                    |   6 +
 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 109 ++++++++++++++++++-
 4 files changed, 117 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-07 16:12                 ` Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 2/3] test/crypto: fix IV in some vectors Brian Dooley
                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 16:12 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley, Ciara Power

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
Add comments and small refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto
v7:
Separate synchronous fix into separate commit
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+						cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+						cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v7 2/3] test/crypto: fix IV in some vectors
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-07 16:12                 ` Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 3/3] test/crypto: fix failing synchronous tests Brian Dooley
  2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  3 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 16:12 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang; +Cc: dev, Brian Dooley, adamx.dybkowski, Ciara Power

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v7 3/3] test/crypto: fix failing synchronous tests
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-07 16:12                 ` [PATCH v7 2/3] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-07 16:12                 ` Brian Dooley
  2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  3 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-07 16:12 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang
  Cc: dev, Brian Dooley, pablo.de.lara.guarch, Ciara Power

Some synchronous (CPU crypto) tests do not support digest encrypted and
need to be skipped. This commit adds extra skips for these tests.

Fixes: 55ab4a8c4fb5 ("test/crypto: disable wireless cases for CPU crypto API")
Cc: pablo.de.lara.guarch@intel.com

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
 app/test/test_cryptodev.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 956268bfcd..70f6b7ece1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
 			tdata->digest.len) < 0)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
@@ -7829,6 +7832,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
 	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD
  2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
                                   ` (2 preceding siblings ...)
  2023-09-07 16:12                 ` [PATCH v7 3/3] test/crypto: fix failing synchronous tests Brian Dooley
@ 2023-09-14 15:22                 ` Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
                                     ` (2 more replies)
  3 siblings, 3 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-14 15:22 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the cipher IVs in some SNOW3G and ZUC test
vectors were zero length, even though these algorithms require non-zero
length IVs.

v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
add comments and refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto
v7:
Separate synchronous fix into separate commit
v8:
Reword commit and add stable

Brian Dooley (3):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors
  test/crypto: fix failing synchronous tests

 app/test/test_cryptodev.c                    |   6 +
 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 109 ++++++++++++++++++-
 4 files changed, 117 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature
  2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-14 15:22                   ` Brian Dooley
  2023-09-19  6:02                     ` [EXT] " Akhil Goyal
  2023-09-14 15:22                   ` [PATCH v8 2/3] test/crypto: fix IV in some vectors Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 3/3] test/crypto: fix failing synchronous tests Brian Dooley
  2 siblings, 1 reply; 32+ messages in thread
From: Brian Dooley @ 2023-09-14 15:22 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley, Ciara Power

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
Fixed CHECKPATCH warning
v3:
Add Digest encrypted support to docs
v4:
Add comments and small refactor
v5:
Fix checkpatch warnings
v6:
Add skipping tests for synchronous crypto
v7:
Separate synchronous fix into separate commit
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+						cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+						cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v8 2/3] test/crypto: fix IV in some vectors
  2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-14 15:22                   ` Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 3/3] test/crypto: fix failing synchronous tests Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-14 15:22 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang
  Cc: dev, Brian Dooley, adamx.dybkowski, stable, Ciara Power

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com
Cc: stable@dpdk.org

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
--
v8:
Add cc stable
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v8 3/3] test/crypto: fix failing synchronous tests
  2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-14 15:22                   ` [PATCH v8 2/3] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-14 15:22                   ` Brian Dooley
  2 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-14 15:22 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang
  Cc: dev, Brian Dooley, pablo.de.lara.guarch, stable, Ciara Power

Some synchronous tests are not supported for CPU crypto and need to be
skipped. This commit adds extra skips for these tests.

Fixes: 55ab4a8c4fb5 ("test/crypto: disable wireless cases for CPU crypto API")
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Acked-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v8:
Reword commit and add cc stable
---
 app/test/test_cryptodev.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 956268bfcd..70f6b7ece1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
 			tdata->digest.len) < 0)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
@@ -7829,6 +7832,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
 	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [EXT] [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature
  2023-09-14 15:22                   ` [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-19  6:02                     ` Akhil Goyal
  0 siblings, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2023-09-19  6:02 UTC (permalink / raw)
  To: Brian Dooley, Kai Ji, Pablo de Lara; +Cc: dev, Ciara Power

> ----------------------------------------------------------------------
> AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
> support for this feature.
> 
> Acked-by: Ciara Power <ciara.power@intel.com>
> Signed-off-by: Brian Dooley <brian.dooley@intel.com>
> ---
> v2:
> Fixed CHECKPATCH warning
> v3:
> Add Digest encrypted support to docs
> v4:
> Add comments and small refactor
> v5:
> Fix checkpatch warnings
> v6:
> Add skipping tests for synchronous crypto
> v7:
> Separate  synchronous fix into separate commit
> ---
>  doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
>  drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
>  2 files changed, 105 insertions(+), 5 deletions(-)

Release notes??



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD
  2023-04-21 10:13 [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB Brian Dooley
  2023-04-24  5:46 ` [EXT] " Akhil Goyal
  2023-07-20 10:38 ` [PATCH v1] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-19 10:42 ` Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
                     ` (4 more replies)
  2 siblings, 5 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-19 10:42 UTC (permalink / raw)
  Cc: dev, gakhil, Brian Dooley

This series adds the Digest Encrypted feature to the AESNI_MB PMD.
It also fixes an issue where the cipher IVs in some SNOW3G and ZUC test
vectors were zero length, even though these algorithms require non-zero
length IVs.
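
Applications can check at runtime whether a device advertises the new
capability before laying out digest-appended operations. A minimal sketch,
assuming dev_id is a valid cryptodev id:

    struct rte_cryptodev_info info;

    rte_cryptodev_info_get(dev_id, &info);
    if (info.feature_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED) {
            /* The digest may be placed inside the region to be encrypted;
             * the PMD encrypts it together with the data. */
    } else {
            /* Keep the digest buffer outside the ciphered range. */
    }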

v9:
Added release notes
v8:
reword commit and add stable
v7:
Separate synchronous fix into separate commit
v6:
Add skipping tests for synchronous crypto
v5:
Fix checkpatch warnings
v4:
Add comments and small refactor
v3:
Add Digest encrypted support to docs
v2:
Fixed CHECKPATCH warning

Brian Dooley (3):
  crypto/ipsec_mb: add digest encrypted feature
  test/crypto: fix IV in some vectors
  test/crypto: fix failing synchronous tests

 app/test/test_cryptodev.c                    |   6 +
 app/test/test_cryptodev_mixed_test_vectors.h |   8 +-
 doc/guides/cryptodevs/features/aesni_mb.ini  |   1 +
 doc/guides/rel_notes/release_23_11.rst       |   4 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c       | 109 ++++++++++++++++++-
 5 files changed, 121 insertions(+), 7 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v9 1/3] crypto/ipsec_mb: add digest encrypted feature
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
@ 2023-09-19 10:42   ` Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 2/3] test/crypto: fix IV in some vectors Brian Dooley
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-19 10:42 UTC (permalink / raw)
  To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Brian Dooley, Ciara Power

AESNI_MB PMD does not support Digest Encrypted. This patch adds a check and
support for this feature.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
v9:
Added release notes
v7:
Separate synchronous fix into separate commit
v6:
Add skipping tests for synchronous crypto
v5:
Fix checkpatch warnings
v4:
Add comments and small refactor
v3:
Add Digest encrypted support to docs
v2:
Fixed CHECKPATCH warning
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 doc/guides/rel_notes/release_23_11.rst      |   4 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 3 files changed, 109 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB  In LB  Out     = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..4757f41a7e 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,10 @@ New Features
 * build: Optional libraries can now be selected with the new ``enable_libs``
   build option similarly to the existing ``enable_drivers`` build option.
 
+* **Updated ipsec_mb crypto driver.**
+
+  * Added support for Digest Encrypted to AESNI_MB PMD asynchronous crypto.
+
 
 Removed Items
 -------------
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+		job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+		job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+			(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+			(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+		job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+		job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+		job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+			(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+			(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+		op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 						op->sym->auth.digest.data,
 						sess->auth.req_digest_len,
 						&op->status);
-			} else
+			} else {
+				if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+					/* in-place operation */
+					oop = 0;
+				} else { /* out-of-place operation */
+					oop = 1;
+				}
+
+				/* Enable digest check */
+				if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+				&& !is_aead_algo(job->hash_alg,	sess->template_job.cipher_mode) &&
+				aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+					unsigned int auth_size, cipher_size;
+					int unencrypted_bytes = 0;
+					if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+						cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+					} else {
+						cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+					}
+					if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+						auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+					} else {
+						auth_size = (op->sym->auth.data.offset) +
+						(op->sym->auth.data.length);
+					}
+					/* Check for unencrypted bytes in partial digest cases */
+					if (job->cipher_mode != IMB_CIPHER_NULL) {
+						unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+					}
+					if (unencrypted_bytes > 0)
+						rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+						cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+						cipher_size),
+						unencrypted_bytes);
+				}
 				generate_digest(job, op, sess);
+			}
 			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v9 2/3] test/crypto: fix IV in some vectors
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
@ 2023-09-19 10:42   ` Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 3/3] test/crypto: fix failing synchronous tests Brian Dooley
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-19 10:42 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang
  Cc: dev, Brian Dooley, adamx.dybkowski, stable, Ciara Power

SNOW3G and ZUC algorithms require non-zero length IVs.

Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v9 3/3] test/crypto: fix failing synchronous tests
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
  2023-09-19 10:42   ` [PATCH v9 2/3] test/crypto: fix IV in some vectors Brian Dooley
@ 2023-09-19 10:42   ` Brian Dooley
  2023-09-19 12:28   ` [EXT] [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Akhil Goyal
  2023-09-21  7:13   ` Akhil Goyal
  4 siblings, 0 replies; 32+ messages in thread
From: Brian Dooley @ 2023-09-19 10:42 UTC (permalink / raw)
  To: Akhil Goyal, Fan Zhang
  Cc: dev, Brian Dooley, pablo.de.lara.guarch, stable, Ciara Power

Some synchronous tests are not supported for CPU crypto and need to be
skipped. This commit adds extra skips for these tests.

Fixes: 55ab4a8c4fb5 ("test/crypto: disable wireless cases for CPU crypto API")
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
v8:
reword commit and add stable
---
 app/test/test_cryptodev.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 956268bfcd..70f6b7ece1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
 			tdata->digest.len) < 0)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
@@ -7829,6 +7832,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
 	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [EXT] [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
                     ` (2 preceding siblings ...)
  2023-09-19 10:42   ` [PATCH v9 3/3] test/crypto: fix failing synchronous tests Brian Dooley
@ 2023-09-19 12:28   ` Akhil Goyal
  2023-09-21  7:13   ` Akhil Goyal
  4 siblings, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2023-09-19 12:28 UTC (permalink / raw)
  To: Brian Dooley; +Cc: dev

> This series adds the Digest Encrypted feature to the AESNI_MB PMD.
> It also fixes an issue where the cipher IVs in some SNOW3G and ZUC test
> vectors were zero length, even though these algorithms require non-zero
> length IVs.
> 
> v9:
> Added release notes
Series applied to dpdk-next-crypto
Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [EXT] [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD
  2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
                     ` (3 preceding siblings ...)
  2023-09-19 12:28   ` [EXT] [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Akhil Goyal
@ 2023-09-21  7:13   ` Akhil Goyal
  4 siblings, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2023-09-21  7:13 UTC (permalink / raw)
  To: Brian Dooley; +Cc: dev

> This series adds the Digest Encrypted feature to the AESNI_MB PMD.
> It also fixes an issue where the cipher IVs in some SNOW3G and ZUC test
> vectors were zero length, even though these algorithms require non-zero
> length IVs.
> 
> v9:
> Added release notes
Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2023-09-21  7:14 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-21 10:13 [PATCH v1] crypto/ipsec_mb: add digest encrypted feature in AESNI_MB Brian Dooley
2023-04-24  5:46 ` [EXT] " Akhil Goyal
2023-04-24 13:49   ` Dooley, Brian
2023-07-20 10:38 ` [PATCH v1] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-08-21 14:42   ` [PATCH v2] " Brian Dooley
2023-08-25  8:41     ` [PATCH v3] " Brian Dooley
2023-09-05 15:12       ` [PATCH v4 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-05 15:12         ` [PATCH v4 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-05 15:12         ` [PATCH v4 2/2] test/crypto: fix IV in some vectors Brian Dooley
2023-09-05 16:15         ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-05 16:15           ` [PATCH v5 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-05 16:15           ` [PATCH v5 2/2] test/crypto: fix IV in some vectors Brian Dooley
2023-09-07 10:26           ` [PATCH v5 0/2] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-07 10:26             ` [PATCH v6 1/2] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-07 15:25               ` Power, Ciara
2023-09-07 16:12               ` [PATCH v7 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-07 16:12                 ` [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-07 16:12                 ` [PATCH v7 2/3] test/crypto: fix IV in some vectors Brian Dooley
2023-09-07 16:12                 ` [PATCH v7 3/3] test/crypto: fix failing synchronous tests Brian Dooley
2023-09-14 15:22                 ` [PATCH v8 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-14 15:22                   ` [PATCH v8 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-19  6:02                     ` [EXT] " Akhil Goyal
2023-09-14 15:22                   ` [PATCH v8 2/3] test/crypto: fix IV in some vectors Brian Dooley
2023-09-14 15:22                   ` [PATCH v8 3/3] test/crypto: fix failing synchronous tests Brian Dooley
2023-09-07 10:26             ` [PATCH v6 2/2] test/crypto: fix IV in some vectors Brian Dooley
2023-09-07 15:25               ` Power, Ciara
2023-09-19 10:42 ` [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Brian Dooley
2023-09-19 10:42   ` [PATCH v9 1/3] crypto/ipsec_mb: add digest encrypted feature Brian Dooley
2023-09-19 10:42   ` [PATCH v9 2/3] test/crypto: fix IV in some vectors Brian Dooley
2023-09-19 10:42   ` [PATCH v9 3/3] test/crypto: fix failing synchronous tests Brian Dooley
2023-09-19 12:28   ` [EXT] [PATCH v9 0/3] Add Digest Encrypted to aesni_mb PMD Akhil Goyal
2023-09-21  7:13   ` Akhil Goyal
