DPDK patches and discussions
* [dpdk-dev] [PATCH v1 0/1] optimization for crypto request processing
@ 2020-10-12  7:02 Vikas Gupta
  2020-10-12  7:02 ` [dpdk-dev] [PATCH v1 1/1] crypto/bcmfs: optimize " Vikas Gupta
  2020-10-13  7:47 ` [dpdk-dev] [PATCH v2 0/1] optimization for " Vikas Gupta
  0 siblings, 2 replies; 5+ messages in thread
From: Vikas Gupta @ 2020-10-12  7:02 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta

Hi,
 This patch optimizes crypto request processing in the crypto engine
 by reducing the number of source BDs for the Broadcom FlexSparc device.

The patch has been tested on a FlexSparc device.

Regards,
Vikas

Vikas Gupta (1):
  crypto/bcmfs: optimize crypto request processing

 drivers/crypto/bcmfs/bcmfs_sym_defs.h   |   5 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c | 220 +++++++++++++-----------
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c    |   6 +-
 drivers/crypto/bcmfs/bcmfs_sym_req.h    |  29 ++--
 4 files changed, 138 insertions(+), 122 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v1 1/1] crypto/bcmfs: optimize crypto request processing
  2020-10-12  7:02 [dpdk-dev] [PATCH v1 0/1] optimization for crypto request processing Vikas Gupta
@ 2020-10-12  7:02 ` Vikas Gupta
  2020-10-13  7:47 ` [dpdk-dev] [PATCH v2 0/1] optimization for " Vikas Gupta
  1 sibling, 0 replies; 5+ messages in thread
From: Vikas Gupta @ 2020-10-12  7:02 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi

Reduce the number of source BDs needed to submit a request to the
crypto engine. This improves performance, as the crypto engine fetches
all the BDs in a single cycle. Place the optional metadata (OMD) in
continuation of the fixed metadata (FMD).
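
Below is a minimal, self-contained sketch of the layout change, added
for readers who want the idea before reading the diff. The struct bd
type and the pack_meta_bd() helper are hypothetical stand-ins, not the
bcmfs driver's real structures or API; only the FMD-then-OMD packing
into a single source BD reflects what the patch does.

/*
 * Hypothetical illustration: struct bd and pack_meta_bd() are simplified
 * stand-ins, not the bcmfs driver's real structures or API.
 */
#include <stdint.h>
#include <string.h>

struct bd {                 /* one source buffer descriptor */
	uint64_t addr;
	uint32_t len;
};

/*
 * Pack fixed metadata (FMD) and optional metadata (OMD: keys, IV) into
 * one contiguous buffer so the engine fetches it via a single source BD.
 */
static uint32_t
pack_meta_bd(uint8_t *buf, uint64_t buf_iova, struct bd *bd,
	     const void *fmd, uint32_t fmd_len,
	     const void *key, uint32_t key_len,
	     const void *iv, uint32_t iv_len)
{
	uint32_t off = 0;

	memcpy(buf + off, fmd, fmd_len);    /* FMD first */
	off += fmd_len;
	if (key_len != 0) {                 /* OMD follows FMD directly */
		memcpy(buf + off, key, key_len);
		off += key_len;
	}
	if (iv_len != 0) {
		memcpy(buf + off, iv, iv_len);
		off += iv_len;
	}

	bd->addr = buf_iova;                /* one BD instead of up to three */
	bd->len = off;
	return off;
}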

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/crypto/bcmfs/bcmfs_sym_defs.h   |   5 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c | 220 +++++++++++++-----------
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c    |   6 +-
 drivers/crypto/bcmfs/bcmfs_sym_req.h    |  29 ++--
 4 files changed, 138 insertions(+), 122 deletions(-)

diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
index aea1f281e4..eaefe97e26 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -27,6 +27,11 @@ struct bcmfs_sym_request;
 /** Crypot Request processing hash tag check error. */
 #define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR        (3)
 
+/** Maximum threshold length to adjust AAD in continuation
+ *  with source BD of (FMD + OMD)
+ */
+#define BCMFS_AAD_THRESH_LEN	64
+
 int
 bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
 			    struct bcmfs_sym_session *sess,
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
index 537bfbec8b..458acd0966 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_engine.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -565,6 +565,7 @@ bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
 	int src_index = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
+	uint32_t src_msg_len = 0;
 	enum spu2_hash_mode spu2_auth_mode;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
@@ -613,26 +614,25 @@ bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
-
-	if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
-		memcpy(sreq->auth_key, fsattr_va(auth_key),
-		       fsattr_sz(auth_key));
+	src_msg_len += sizeof(*fmd);
 
-		sreq->msgs.srcs_addr[src_index] = sreq->aptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
-		src_index++;
+	/* Start of OMD */
+	if (auth_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(auth_key),
+		       auth_ksize);
+		src_msg_len += auth_ksize;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	if (iv_size != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(iv),
+		       iv_size);
+		src_msg_len += iv_size;
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
+	src_index++;
 
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
@@ -683,7 +683,7 @@ bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
 	int ret = 0;
 	int src_index = 0;
 	struct spu2_fmd *fmd;
-	unsigned int xts_keylen;
+	uint32_t src_msg_len = 0;
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	bool is_inbound = (cipher_op == RTE_CRYPTO_CIPHER_OP_DECRYPT);
@@ -714,36 +714,36 @@ bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
+	src_msg_len += sizeof(*fmd);
 
+	/* Start of OMD */
 	if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+		uint8_t *cipher_buf = (uint8_t *)fmd + src_msg_len;
 		if (calgo == RTE_CRYPTO_CIPHER_AES_XTS) {
-			xts_keylen = fsattr_sz(cipher_key) / 2;
-			memcpy(sreq->cipher_key,
+			uint32_t xts_keylen = fsattr_sz(cipher_key) / 2;
+			memcpy(cipher_buf,
 			       (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
 			       xts_keylen);
-			memcpy(sreq->cipher_key + xts_keylen,
+			memcpy(cipher_buf + xts_keylen,
 			       fsattr_va(cipher_key), xts_keylen);
 		} else {
-			memcpy(sreq->cipher_key,
-				fsattr_va(cipher_key), fsattr_sz(cipher_key));
+			memcpy(cipher_buf, fsattr_va(cipher_key),
+			       fsattr_sz(cipher_key));
 		}
 
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
-		src_index++;
+		src_msg_len += fsattr_sz(cipher_key);
 	}
 
 	if (iv != NULL && fsattr_sz(iv) != 0) {
-		memcpy(sreq->iv,
-			fsattr_va(iv), fsattr_sz(iv));
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
-		src_index++;
-	}
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(iv), fsattr_sz(iv));
+		src_msg_len +=  fsattr_sz(iv);
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
+	src_index++;
 
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
@@ -782,17 +782,19 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 	bool auth_first = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
+	uint32_t src_msg_len = 0;
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_hash_mode spu2_auth_mode = 0;
-	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
-	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	uint64_t auth_ksize = (auth_key != NULL) ?
 				fsattr_sz(auth_key) : 0;
 	uint64_t cipher_ksize = (cipher_key != NULL) ?
 					fsattr_sz(cipher_key) : 0;
+	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
 	uint64_t digest_size = (digest != NULL) ?
 					fsattr_sz(digest) : 0;
+	uint64_t aad_size = (aad != NULL) ?
+				fsattr_sz(aad) : 0;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
 
@@ -821,9 +823,6 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 	auth_first = cipher_first ? 0 : 1;
 
-	if (iv != NULL && fsattr_sz(iv) != 0)
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
-
 	fmd  = &sreq->fmd;
 
 	spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
@@ -840,57 +839,61 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, payload_len);
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
-
-	if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
-		memcpy(sreq->auth_key,
-		       fsattr_va(auth_key), fsattr_sz(auth_key));
+	src_msg_len += sizeof(*fmd);
 
+	/* Start of OMD */
+	if (auth_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(auth_key), auth_ksize);
+		src_msg_len += auth_ksize;
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
-			     fsattr_sz(auth_key));
+			     auth_ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->aptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
-		src_index++;
 	}
 
-	if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
-		memcpy(sreq->cipher_key,
-		       fsattr_va(cipher_key), fsattr_sz(cipher_key));
+	if (cipher_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(cipher_key), cipher_ksize);
+		src_msg_len += cipher_ksize;
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
-			     fsattr_sz(cipher_key));
+			     cipher_ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
-		src_index++;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
+	if (iv_size != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(iv), iv_size);
+		src_msg_len += iv_size;
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
-				     fsattr_sz(iv));
+				     iv_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
 
-	if (aad != NULL && fsattr_sz(aad) != 0) {
+	if (aad_size != 0) {
+		if (fsattr_sz(aad) < BCMFS_AAD_THRESH_LEN) {
+			memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(aad), aad_size);
+			sreq->msgs.srcs_len[src_index] += aad_size;
+		} else {
+			src_index++;
+			sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+			sreq->msgs.srcs_len[src_index] = aad_size;
+		}
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
-				     fsattr_sz(aad));
+				     aad_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
-		src_index++;
 	}
 
+	src_index++;
+
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
 	src_index++;
@@ -916,7 +919,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 		 * as such. So program dummy location to capture
 		 * digest data
 		 */
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				sreq->dptr;
 			sreq->msgs.dsts_len[dst_index] =
@@ -924,7 +927,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 			dst_index++;
 		}
 	} else {
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				fsattr_pa(digest);
 			sreq->msgs.dsts_len[dst_index] =
@@ -943,7 +946,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 static void
 bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
-			   unsigned int *ivlen, bool is_esp)
+			   uint64_t *ivlen, bool is_esp)
 {
 	int L;  /* size of length field, in bytes */
 
@@ -976,15 +979,17 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 	bool auth_first = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
-	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
-	unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+	uint32_t src_msg_len = 0;
+	uint8_t iv_buf[BCMFS_MAX_IV_SIZE];
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_hash_mode spu2_auth_mode = 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	uint64_t ksize = (key != NULL) ? fsattr_sz(key) : 0;
+	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
 	uint64_t digest_size = (digest != NULL) ?
-					fsattr_sz(digest) : 0;
+				fsattr_sz(digest) : 0;
 	bool is_inbound = (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT);
 
 	if (src == NULL)
@@ -1032,17 +1037,16 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 				0 : 1;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0)
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+	if (iv_size != 0)
+		memcpy(iv_buf, fsattr_va(iv), iv_size);
 
 	if (ae_algo == RTE_CRYPTO_AEAD_AES_CCM) {
 		spu2_auth_mode = SPU2_HASH_MODE_CCM;
 		spu2_ciph_mode = SPU2_CIPHER_MODE_CCM;
-		if (iv != NULL)  {
-			memcpy(sreq->iv, fsattr_va(iv),
-			       fsattr_sz(iv));
-			iv_size = fsattr_sz(iv);
-			bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+		if (iv_size != 0)  {
+			memcpy(iv_buf, fsattr_va(iv),
+			       iv_size);
+			bcmfs_crypto_ccm_update_iv(iv_buf, &iv_size, false);
 		}
 
 		/* opposite for ccm (auth 1st on encrypt) */
@@ -1066,44 +1070,50 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, payload_len);
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
+	src_msg_len += sizeof(*fmd);
 
-	if (key != NULL && fsattr_sz(key) != 0) {
-		memcpy(sreq->cipher_key,
-		       fsattr_va(key), fsattr_sz(key));
+	if (ksize) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(key), ksize);
+		src_msg_len += ksize;
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(key),
-			     fsattr_sz(key));
+			     ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(key);
-		src_index++;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
+	if (iv_size) {
+		memcpy((uint8_t *)fmd + src_msg_len, iv_buf, iv_size);
+		src_msg_len += iv_size;
+
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
 				     fsattr_sz(iv));
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
 
-	if (aad != NULL && fsattr_sz(aad) != 0) {
+	if (aad_size != 0) {
+		if (aad_size < BCMFS_AAD_THRESH_LEN) {
+			memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(aad), aad_size);
+			sreq->msgs.srcs_len[src_index] += aad_size;
+		} else {
+			src_index++;
+			sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+			sreq->msgs.srcs_len[src_index] = aad_size;
+		}
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
-				     fsattr_sz(aad));
+				     aad_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
-		src_index++;
 	}
 
+	src_index++;
+
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
 	src_index++;
@@ -1129,19 +1139,19 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 		 * as such. So program dummy location to capture
 		 * digest data
 		 */
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				sreq->dptr;
 			sreq->msgs.dsts_len[dst_index] =
-				fsattr_sz(digest);
+				digest_size;
 			dst_index++;
 		}
 	} else {
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				fsattr_pa(digest);
 			sreq->msgs.dsts_len[dst_index] =
-				fsattr_sz(digest);
+				digest_size;
 			dst_index++;
 		}
 	}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 568797b4fd..aa7fad6d70 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,10 +132,8 @@ static void
 spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
 {
 	memset(sr, 0, sizeof(*sr));
-	sr->fptr = iova;
-	sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
-	sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
-	sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+	sr->fptr = iova + offsetof(struct bcmfs_sym_request, fmd);
+	sr->optr = iova + offsetof(struct bcmfs_sym_request, omd);
 	sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
 	sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
 }
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index e53c50adc1..17dff5be4e 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -11,6 +11,14 @@
 #include "bcmfs_dev_msg.h"
 #include "bcmfs_sym_defs.h"
 
+/** Max variable length. Since we adjust AAD
+ * in same BD if it is less than BCMFS_AAD_THRESH_LEN
+ * so we add it here.
+ */
+#define BCMFS_MAX_OMDMD_LEN	((2 * (BCMFS_MAX_KEY_SIZE)) +	\
+				 (2 * (BCMFS_MAX_IV_SIZE)) +	\
+				 (BCMFS_AAD_THRESH_LEN))
+
 /* Fixed SPU2 Metadata */
 struct spu2_fmd {
 	uint64_t ctrl0;
@@ -24,14 +32,14 @@ struct spu2_fmd {
  * rte_crypto_op
  */
 struct bcmfs_sym_request {
+	/*
+	 * Only single BD for metadata so
+	 * FMD + OMD must be in continuation
+	 */
 	/* spu2 engine related data */
 	struct spu2_fmd fmd;
-	/* cipher key */
-	uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
-	/* auth key */
-	uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
-	/* iv key */
-	uint8_t iv[BCMFS_MAX_IV_SIZE];
+	/* variable metadata in continuation with fmd */
+	uint8_t omd[BCMFS_MAX_OMDMD_LEN];
 	/* digest data output from crypto h/w */
 	uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
 	/* 2-Bytes response from crypto h/w */
@@ -42,17 +50,12 @@ struct bcmfs_sym_request {
 	 */
 	/* iova for fmd */
 	rte_iova_t fptr;
-	/* iova for cipher key */
-	rte_iova_t cptr;
-	/* iova for auth key */
-	rte_iova_t aptr;
-	/* iova for iv key */
-	rte_iova_t iptr;
+	/* iova for omd */
+	rte_iova_t optr;
 	/* iova for digest */
 	rte_iova_t dptr;
 	/* iova for response */
 	rte_iova_t rptr;
-
 	/* bcmfs qp message for h/w queues to process */
 	struct bcmfs_qp_message msgs;
 	/* crypto op */
-- 
2.17.1



* [dpdk-dev] [PATCH v2 0/1] optimization for crypto request processing
  2020-10-12  7:02 [dpdk-dev] [PATCH v1 0/1] optimization for crypto request processing Vikas Gupta
  2020-10-12  7:02 ` [dpdk-dev] [PATCH v1 1/1] crypto/bcmfs: optimize " Vikas Gupta
@ 2020-10-13  7:47 ` Vikas Gupta
  2020-10-13  7:47   ` [dpdk-dev] [PATCH v2 1/1] crypto/bcmfs: optimize " Vikas Gupta
  1 sibling, 1 reply; 5+ messages in thread
From: Vikas Gupta @ 2020-10-13  7:47 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta

Hi,
 This patch optimizes crypto request processing in the crypto engine
 by reducing the number of source BDs for the Broadcom FlexSparc device.

The patch has been tested on a FlexSparc device.

Regards,
Vikas

Changes from v1 to v2:
	Rebased the patch on the latest dpdk-next-crypto

Vikas Gupta (1):
  crypto/bcmfs: optimize crypto request processing

 drivers/crypto/bcmfs/bcmfs_sym_defs.h   |   5 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c | 220 +++++++++++++-----------
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c    |   6 +-
 drivers/crypto/bcmfs/bcmfs_sym_req.h    |  29 ++--
 4 files changed, 138 insertions(+), 122 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v2 1/1] crypto/bcmfs: optimize crypto request processing
  2020-10-13  7:47 ` [dpdk-dev] [PATCH v2 0/1] optimization for " Vikas Gupta
@ 2020-10-13  7:47   ` Vikas Gupta
  2020-10-14 20:27     ` Akhil Goyal
  0 siblings, 1 reply; 5+ messages in thread
From: Vikas Gupta @ 2020-10-13  7:47 UTC (permalink / raw)
  To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi

Reduce the number of source BDs needed to submit a request to the
crypto engine. This improves performance, as the crypto engine fetches
all the BDs in a single cycle. Place the optional metadata (OMD) in
continuation of the fixed metadata (FMD).
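
As a companion to the sketch under the v1 patch, here is a hedged
sketch of the AAD placement rule introduced here: AAD shorter than
BCMFS_AAD_THRESH_LEN is appended to the FMD+OMD buffer and covered by
the same source BD, while larger AAD keeps a BD of its own. Only
BCMFS_AAD_THRESH_LEN (64) comes from the patch; struct src_bd and
place_aad() are hypothetical.

#include <stdint.h>
#include <string.h>

#define BCMFS_AAD_THRESH_LEN 64     /* threshold taken from the patch */

struct src_bd {                     /* hypothetical source descriptor */
	uint64_t addr;
	uint32_t len;
};

/*
 * Place AAD either inside the metadata buffer (short AAD, same BD) or
 * as a separate BD referencing the caller's buffer (long AAD).
 * Returns the index of the last BD written.
 */
static int
place_aad(uint8_t *meta_buf, struct src_bd *bds, int idx,
	  const uint8_t *aad_va, uint64_t aad_iova, uint32_t aad_len)
{
	if (aad_len == 0)
		return idx;

	if (aad_len < BCMFS_AAD_THRESH_LEN) {
		/* Append AAD right after FMD+OMD in the same buffer/BD. */
		memcpy(meta_buf + bds[idx].len, aad_va, aad_len);
		bds[idx].len += aad_len;
	} else {
		/* Large AAD: give it its own BD pointing at the AAD buffer. */
		idx++;
		bds[idx].addr = aad_iova;
		bds[idx].len = aad_len;
	}
	return idx;
}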

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/crypto/bcmfs/bcmfs_sym_defs.h   |   5 +
 drivers/crypto/bcmfs/bcmfs_sym_engine.c | 220 +++++++++++++-----------
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c    |   6 +-
 drivers/crypto/bcmfs/bcmfs_sym_req.h    |  29 ++--
 4 files changed, 138 insertions(+), 122 deletions(-)

diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
index aea1f281e..eaefe97e2 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -27,6 +27,11 @@ struct bcmfs_sym_request;
 /** Crypot Request processing hash tag check error. */
 #define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR        (3)
 
+/** Maximum threshold length to adjust AAD in continuation
+ *  with source BD of (FMD + OMD)
+ */
+#define BCMFS_AAD_THRESH_LEN	64
+
 int
 bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
 			    struct bcmfs_sym_session *sess,
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
index 537bfbec8..458acd096 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_engine.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -565,6 +565,7 @@ bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
 	int src_index = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
+	uint32_t src_msg_len = 0;
 	enum spu2_hash_mode spu2_auth_mode;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
@@ -613,26 +614,25 @@ bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
-
-	if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
-		memcpy(sreq->auth_key, fsattr_va(auth_key),
-		       fsattr_sz(auth_key));
+	src_msg_len += sizeof(*fmd);
 
-		sreq->msgs.srcs_addr[src_index] = sreq->aptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
-		src_index++;
+	/* Start of OMD */
+	if (auth_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(auth_key),
+		       auth_ksize);
+		src_msg_len += auth_ksize;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	if (iv_size != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(iv),
+		       iv_size);
+		src_msg_len += iv_size;
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
+	src_index++;
 
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
@@ -683,7 +683,7 @@ bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
 	int ret = 0;
 	int src_index = 0;
 	struct spu2_fmd *fmd;
-	unsigned int xts_keylen;
+	uint32_t src_msg_len = 0;
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	bool is_inbound = (cipher_op == RTE_CRYPTO_CIPHER_OP_DECRYPT);
@@ -714,36 +714,36 @@ bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
+	src_msg_len += sizeof(*fmd);
 
+	/* Start of OMD */
 	if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+		uint8_t *cipher_buf = (uint8_t *)fmd + src_msg_len;
 		if (calgo == RTE_CRYPTO_CIPHER_AES_XTS) {
-			xts_keylen = fsattr_sz(cipher_key) / 2;
-			memcpy(sreq->cipher_key,
+			uint32_t xts_keylen = fsattr_sz(cipher_key) / 2;
+			memcpy(cipher_buf,
 			       (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
 			       xts_keylen);
-			memcpy(sreq->cipher_key + xts_keylen,
+			memcpy(cipher_buf + xts_keylen,
 			       fsattr_va(cipher_key), xts_keylen);
 		} else {
-			memcpy(sreq->cipher_key,
-				fsattr_va(cipher_key), fsattr_sz(cipher_key));
+			memcpy(cipher_buf, fsattr_va(cipher_key),
+			       fsattr_sz(cipher_key));
 		}
 
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
-		src_index++;
+		src_msg_len += fsattr_sz(cipher_key);
 	}
 
 	if (iv != NULL && fsattr_sz(iv) != 0) {
-		memcpy(sreq->iv,
-			fsattr_va(iv), fsattr_sz(iv));
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
-		src_index++;
-	}
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(iv), fsattr_sz(iv));
+		src_msg_len +=  fsattr_sz(iv);
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
+	src_index++;
 
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
@@ -782,17 +782,19 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 	bool auth_first = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
+	uint32_t src_msg_len = 0;
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_hash_mode spu2_auth_mode = 0;
-	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
-	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	uint64_t auth_ksize = (auth_key != NULL) ?
 				fsattr_sz(auth_key) : 0;
 	uint64_t cipher_ksize = (cipher_key != NULL) ?
 					fsattr_sz(cipher_key) : 0;
+	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
 	uint64_t digest_size = (digest != NULL) ?
 					fsattr_sz(digest) : 0;
+	uint64_t aad_size = (aad != NULL) ?
+				fsattr_sz(aad) : 0;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
 
@@ -821,9 +823,6 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 	auth_first = cipher_first ? 0 : 1;
 
-	if (iv != NULL && fsattr_sz(iv) != 0)
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
-
 	fmd  = &sreq->fmd;
 
 	spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
@@ -840,57 +839,61 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, payload_len);
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
-
-	if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
-		memcpy(sreq->auth_key,
-		       fsattr_va(auth_key), fsattr_sz(auth_key));
+	src_msg_len += sizeof(*fmd);
 
+	/* Start of OMD */
+	if (auth_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(auth_key), auth_ksize);
+		src_msg_len += auth_ksize;
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
-			     fsattr_sz(auth_key));
+			     auth_ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->aptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
-		src_index++;
 	}
 
-	if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
-		memcpy(sreq->cipher_key,
-		       fsattr_va(cipher_key), fsattr_sz(cipher_key));
+	if (cipher_ksize != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(cipher_key), cipher_ksize);
+		src_msg_len += cipher_ksize;
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
-			     fsattr_sz(cipher_key));
+			     cipher_ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
-		src_index++;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
+	if (iv_size != 0) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(iv), iv_size);
+		src_msg_len += iv_size;
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
-				     fsattr_sz(iv));
+				     iv_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
 
-	if (aad != NULL && fsattr_sz(aad) != 0) {
+	if (aad_size != 0) {
+		if (fsattr_sz(aad) < BCMFS_AAD_THRESH_LEN) {
+			memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(aad), aad_size);
+			sreq->msgs.srcs_len[src_index] += aad_size;
+		} else {
+			src_index++;
+			sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+			sreq->msgs.srcs_len[src_index] = aad_size;
+		}
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
-				     fsattr_sz(aad));
+				     aad_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
-		src_index++;
 	}
 
+	src_index++;
+
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
 	src_index++;
@@ -916,7 +919,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 		 * as such. So program dummy location to capture
 		 * digest data
 		 */
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				sreq->dptr;
 			sreq->msgs.dsts_len[dst_index] =
@@ -924,7 +927,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 			dst_index++;
 		}
 	} else {
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				fsattr_pa(digest);
 			sreq->msgs.dsts_len[dst_index] =
@@ -943,7 +946,7 @@ bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
 
 static void
 bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
-			   unsigned int *ivlen, bool is_esp)
+			   uint64_t *ivlen, bool is_esp)
 {
 	int L;  /* size of length field, in bytes */
 
@@ -976,15 +979,17 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 	bool auth_first = 0;
 	struct spu2_fmd *fmd;
 	uint64_t payload_len;
-	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
-	unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+	uint32_t src_msg_len = 0;
+	uint8_t iv_buf[BCMFS_MAX_IV_SIZE];
 	enum spu2_cipher_mode spu2_ciph_mode = 0;
 	enum spu2_hash_mode spu2_auth_mode = 0;
 	enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
 	enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
 	uint64_t ksize = (key != NULL) ? fsattr_sz(key) : 0;
+	uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+	uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
 	uint64_t digest_size = (digest != NULL) ?
-					fsattr_sz(digest) : 0;
+				fsattr_sz(digest) : 0;
 	bool is_inbound = (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT);
 
 	if (src == NULL)
@@ -1032,17 +1037,16 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 				0 : 1;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0)
-		memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+	if (iv_size != 0)
+		memcpy(iv_buf, fsattr_va(iv), iv_size);
 
 	if (ae_algo == RTE_CRYPTO_AEAD_AES_CCM) {
 		spu2_auth_mode = SPU2_HASH_MODE_CCM;
 		spu2_ciph_mode = SPU2_CIPHER_MODE_CCM;
-		if (iv != NULL)  {
-			memcpy(sreq->iv, fsattr_va(iv),
-			       fsattr_sz(iv));
-			iv_size = fsattr_sz(iv);
-			bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+		if (iv_size != 0)  {
+			memcpy(iv_buf, fsattr_va(iv),
+			       iv_size);
+			bcmfs_crypto_ccm_update_iv(iv_buf, &iv_size, false);
 		}
 
 		/* opposite for ccm (auth 1st on encrypt) */
@@ -1066,44 +1070,50 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 
 	spu2_fmd_ctrl3_write(fmd, payload_len);
 
-	/* Source metadata and data pointers */
+	/* FMD */
 	sreq->msgs.srcs_addr[src_index] = sreq->fptr;
-	sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
-	src_index++;
+	src_msg_len += sizeof(*fmd);
 
-	if (key != NULL && fsattr_sz(key) != 0) {
-		memcpy(sreq->cipher_key,
-		       fsattr_va(key), fsattr_sz(key));
+	if (ksize) {
+		memcpy((uint8_t *)fmd + src_msg_len,
+		       fsattr_va(key), ksize);
+		src_msg_len += ksize;
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 	BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(key),
-			     fsattr_sz(key));
+			     ksize);
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->cptr;
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(key);
-		src_index++;
 	}
 
-	if (iv != NULL && fsattr_sz(iv) != 0) {
+	if (iv_size) {
+		memcpy((uint8_t *)fmd + src_msg_len, iv_buf, iv_size);
+		src_msg_len += iv_size;
+
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
 				     fsattr_sz(iv));
 #endif
-		sreq->msgs.srcs_addr[src_index] = sreq->iptr;
-		sreq->msgs.srcs_len[src_index] = iv_size;
-		src_index++;
-	}
+	} /* End of OMD */
+
+	sreq->msgs.srcs_len[src_index] = src_msg_len;
 
-	if (aad != NULL && fsattr_sz(aad) != 0) {
+	if (aad_size != 0) {
+		if (aad_size < BCMFS_AAD_THRESH_LEN) {
+			memcpy((uint8_t *)fmd + src_msg_len, fsattr_va(aad), aad_size);
+			sreq->msgs.srcs_len[src_index] += aad_size;
+		} else {
+			src_index++;
+			sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+			sreq->msgs.srcs_len[src_index] = aad_size;
+		}
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
 		BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
-				     fsattr_sz(aad));
+				     aad_size);
 #endif
-		sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
-		sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
-		src_index++;
 	}
 
+	src_index++;
+
 	sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
 	sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
 	src_index++;
@@ -1129,19 +1139,19 @@ bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
 		 * as such. So program dummy location to capture
 		 * digest data
 		 */
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				sreq->dptr;
 			sreq->msgs.dsts_len[dst_index] =
-				fsattr_sz(digest);
+				digest_size;
 			dst_index++;
 		}
 	} else {
-		if (digest != NULL && fsattr_sz(digest) != 0) {
+		if (digest_size != 0) {
 			sreq->msgs.dsts_addr[dst_index] =
 				fsattr_pa(digest);
 			sreq->msgs.dsts_len[dst_index] =
-				fsattr_sz(digest);
+				digest_size;
 			dst_index++;
 		}
 	}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 568797b4f..aa7fad6d7 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,10 +132,8 @@ static void
 spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
 {
 	memset(sr, 0, sizeof(*sr));
-	sr->fptr = iova;
-	sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
-	sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
-	sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+	sr->fptr = iova + offsetof(struct bcmfs_sym_request, fmd);
+	sr->optr = iova + offsetof(struct bcmfs_sym_request, omd);
 	sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
 	sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
 }
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index e53c50adc..17dff5be4 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -11,6 +11,14 @@
 #include "bcmfs_dev_msg.h"
 #include "bcmfs_sym_defs.h"
 
+/** Max variable length. Since we adjust AAD
+ * in same BD if it is less than BCMFS_AAD_THRESH_LEN
+ * so we add it here.
+ */
+#define BCMFS_MAX_OMDMD_LEN	((2 * (BCMFS_MAX_KEY_SIZE)) +	\
+				 (2 * (BCMFS_MAX_IV_SIZE)) +	\
+				 (BCMFS_AAD_THRESH_LEN))
+
 /* Fixed SPU2 Metadata */
 struct spu2_fmd {
 	uint64_t ctrl0;
@@ -24,14 +32,14 @@ struct spu2_fmd {
  * rte_crypto_op
  */
 struct bcmfs_sym_request {
+	/*
+	 * Only single BD for metadata so
+	 * FMD + OMD must be in continuation
+	 */
 	/* spu2 engine related data */
 	struct spu2_fmd fmd;
-	/* cipher key */
-	uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
-	/* auth key */
-	uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
-	/* iv key */
-	uint8_t iv[BCMFS_MAX_IV_SIZE];
+	/* variable metadata in continuation with fmd */
+	uint8_t omd[BCMFS_MAX_OMDMD_LEN];
 	/* digest data output from crypto h/w */
 	uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
 	/* 2-Bytes response from crypto h/w */
@@ -42,17 +50,12 @@ struct bcmfs_sym_request {
 	 */
 	/* iova for fmd */
 	rte_iova_t fptr;
-	/* iova for cipher key */
-	rte_iova_t cptr;
-	/* iova for auth key */
-	rte_iova_t aptr;
-	/* iova for iv key */
-	rte_iova_t iptr;
+	/* iova for omd */
+	rte_iova_t optr;
 	/* iova for digest */
 	rte_iova_t dptr;
 	/* iova for response */
 	rte_iova_t rptr;
-
 	/* bcmfs qp message for h/w queues to process */
 	struct bcmfs_qp_message msgs;
 	/* crypto op */
-- 
2.17.1



* Re: [dpdk-dev] [PATCH v2 1/1] crypto/bcmfs: optimize crypto request processing
  2020-10-13  7:47   ` [dpdk-dev] [PATCH v2 1/1] crypto/bcmfs: optimize " Vikas Gupta
@ 2020-10-14 20:27     ` Akhil Goyal
  0 siblings, 0 replies; 5+ messages in thread
From: Akhil Goyal @ 2020-10-14 20:27 UTC (permalink / raw)
  To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi


> Reduce the number of source BDs needed to submit a request to the
> crypto engine. This improves performance, as the crypto engine fetches
> all the BDs in a single cycle. Place the optional metadata (OMD) in
> continuation of the fixed metadata (FMD).
> 
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Applied to dpdk-next-crypto

Thanks.

