* [PATCH v3 1/8] crypto/cnxk: check for null pointer
From: Tejasree Kondoj @ 2023-06-20 10:20 UTC
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
Add a NULL pointer check to avoid dereferencing xform->next when it is
not set.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cnxk_se.h | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c66ab80749..a85e4c5170 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2185,12 +2185,14 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
if (zsk_flag && sess->roc_se_ctx.auth_then_ciph) {
struct rte_crypto_cipher_xform *c_form;
- c_form = &xform->next->cipher;
- if (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
- a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE) {
- plt_dp_err("Crypto: PDCP auth then cipher must use"
- " options: encrypt and generate");
- return -EINVAL;
+ if (xform->next != NULL) {
+ c_form = &xform->next->cipher;
+ if ((c_form != NULL) && (c_form->op != RTE_CRYPTO_CIPHER_OP_ENCRYPT) &&
+ a_form->op != RTE_CRYPTO_AUTH_OP_GENERATE) {
+ plt_dp_err("Crypto: PDCP auth then cipher must use"
+ " options: encrypt and generate");
+ return -EINVAL;
+ }
}
}
--
2.25.1
* [PATCH v3 2/8] crypto/cnxk: remove packet length checks in crypto offload
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
From: Anoob Joseph <anoobj@marvell.com>
When performing crypto offload, the packet length of the input/output
buffer does not matter. The length that matters is the
cipher/authentication range specified in the crypto op. Since an
application can request ciphering of a small portion of the buffer, the
extra comparison of buffer lengths may result in false failures during
enqueue of out-of-place (OOP) operations.
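For reference, the range that actually drives the engine is carried in
the op itself; a minimal sketch (the helper name and parameters are
illustrative, not part of this patch):

#include <rte_crypto.h>

/* Cipher only a window of the buffer; mbuf packet lengths are not
 * part of the contract, only this range is. */
static void
set_partial_cipher_range(struct rte_crypto_op *op, uint32_t hdr_len,
                         uint32_t payload_len)
{
        /* Range is relative to the start of m_src data. */
        op->sym->cipher.data.offset = hdr_len;
        op->sym->cipher.data.length = payload_len;
}

With such an op, the destination mbuf only needs to hold the processed
range, which is why the pkt_len comparison is dropped below.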
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cnxk_se.h | 54 +++--------------------------------
1 file changed, 4 insertions(+), 50 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index a85e4c5170..87414eb131 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2539,23 +2539,6 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
}
if (unlikely(m_dst != NULL)) {
- uint32_t pkt_len;
-
- /* Try to make room as much as src has */
- pkt_len = rte_pktmbuf_pkt_len(m_dst);
-
- if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) {
- pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len;
- if (!rte_pktmbuf_append(m_dst, pkt_len)) {
- plt_dp_err("Not enough space in "
- "m_dst %p, need %u"
- " more",
- m_dst, pkt_len);
- ret = -EINVAL;
- goto err_exit;
- }
- }
-
if (prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0)) {
plt_dp_err("Prepare dst iov failed for "
"m_dst %p",
@@ -2650,32 +2633,18 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
fc_params.dst_iov = fc_params.src_iov = (void *)src;
prepare_iov_from_pkt_inplace(m_src, &fc_params, &flags);
} else {
- uint32_t pkt_len;
-
/* Out of place processing */
+
fc_params.src_iov = (void *)src;
fc_params.dst_iov = (void *)dst;
/* Store SG I/O in the api for reuse */
- if (prepare_iov_from_pkt(m_src, fc_params.src_iov, 0)) {
+ if (unlikely(prepare_iov_from_pkt(m_src, fc_params.src_iov, 0))) {
plt_dp_err("Prepare src iov failed");
ret = -EINVAL;
goto err_exit;
}
- /* Try to make room as much as src has */
- pkt_len = rte_pktmbuf_pkt_len(m_dst);
-
- if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) {
- pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len;
- if (unlikely(rte_pktmbuf_append(m_dst, pkt_len) == NULL)) {
- plt_dp_err("Not enough space in m_dst %p, need %u more", m_dst,
- pkt_len);
- ret = -EINVAL;
- goto err_exit;
- }
- }
-
if (unlikely(prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0))) {
plt_dp_err("Prepare dst iov failed for m_dst %p", m_dst);
ret = -EINVAL;
@@ -2689,7 +2658,8 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
if (mdata == NULL) {
plt_dp_err("Could not allocate meta buffer");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto err_exit;
}
}
@@ -2798,22 +2768,6 @@ fill_pdcp_chain_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
}
if (unlikely(m_dst != NULL)) {
- uint32_t pkt_len;
-
- /* Try to make room as much as src has */
- pkt_len = rte_pktmbuf_pkt_len(m_dst);
-
- if (unlikely(pkt_len < rte_pktmbuf_pkt_len(m_src))) {
- pkt_len = rte_pktmbuf_pkt_len(m_src) - pkt_len;
- if (!rte_pktmbuf_append(m_dst, pkt_len)) {
- plt_dp_err("Not enough space in m_dst "
- "%p, need %u more",
- m_dst, pkt_len);
- ret = -EINVAL;
- goto err_exit;
- }
- }
-
if (unlikely(prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0))) {
plt_dp_err("Could not prepare m_dst iov %p", m_dst);
ret = -EINVAL;
--
2.25.1
* [PATCH v3 3/8] crypto/cnxk: use pt inst for null cipher with null auth
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal
Cc: Aakash Sasidharan, Anoob Joseph, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
From: Aakash Sasidharan <asasidharan@marvell.com>
Use the passthrough instruction for the NULL cipher with NULL auth
combination.
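For context, the session type that now takes this path is created from
a transform chain like the following (a minimal sketch; the helper is
illustrative, not part of the patch):

#include <string.h>
#include <rte_crypto_sym.h>

/* NULL cipher chained with NULL auth: the PMD now builds a MISC
 * passthrough instruction for this instead of rejecting it. */
static void
build_null_null_chain(struct rte_crypto_sym_xform *cipher,
                      struct rte_crypto_sym_xform *auth)
{
        memset(cipher, 0, sizeof(*cipher));
        memset(auth, 0, sizeof(*auth));

        cipher->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        cipher->cipher.algo = RTE_CRYPTO_CIPHER_NULL;
        cipher->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        cipher->next = auth;

        auth->type = RTE_CRYPTO_SYM_XFORM_AUTH;
        auth->auth.algo = RTE_CRYPTO_AUTH_NULL;
        auth->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
        auth->next = NULL;
}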
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 20 ++++----
drivers/crypto/cnxk/cnxk_se.h | 59 ++++++++++++++++--------
2 files changed, 50 insertions(+), 29 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index d405786668..2018b0eba5 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -526,16 +526,13 @@ cnxk_sess_fill(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xform,
return -EINVAL;
}
- if ((c_xfrm == NULL || c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) &&
- a_xfrm != NULL && a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL &&
- a_xfrm->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY) {
- plt_dp_err("Null cipher + null auth verify is not supported");
- return -ENOTSUP;
- }
+ if ((aead_xfrm == NULL) &&
+ (c_xfrm == NULL || c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) &&
+ (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL))
+ sess->passthrough = 1;
/* Cipher only */
- if (c_xfrm != NULL &&
- (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL)) {
+ if (c_xfrm != NULL && (a_xfrm == NULL || a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL)) {
if (fill_sess_cipher(c_xfrm, sess))
return -ENOTSUP;
else
@@ -662,7 +659,8 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
inst_w7.s.cptr += 8;
/* Set the engine group */
- if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3)
+ if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 ||
+ sess->passthrough)
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE];
else
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
@@ -687,7 +685,9 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor
sess_priv->lf = roc_cpt->lf[0];
- if (sess_priv->cpt_op & ROC_SE_OP_CIPHER_MASK) {
+ if (sess_priv->passthrough)
+ thr_type = CPT_DP_THREAD_TYPE_PT;
+ else if (sess_priv->cpt_op & ROC_SE_OP_CIPHER_MASK) {
switch (sess_priv->roc_se_ctx.fc_type) {
case ROC_SE_FC_GEN:
if (sess_priv->aes_gcm || sess_priv->aes_ccm || sess_priv->chacha_poly)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 87414eb131..ceb50fa3b6 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -24,6 +24,8 @@ enum cpt_dp_thread_type {
CPT_DP_THREAD_TYPE_PDCP_CHAIN,
CPT_DP_THREAD_TYPE_KASUMI,
CPT_DP_THREAD_AUTH_ONLY,
+ CPT_DP_THREAD_GENERIC,
+ CPT_DP_THREAD_TYPE_PT,
};
struct cnxk_se_sess {
@@ -46,7 +48,8 @@ struct cnxk_se_sess {
uint8_t is_sha3 : 1;
uint8_t short_iv : 1;
uint8_t is_sm3 : 1;
- uint8_t rsvd : 5;
+ uint8_t passthrough : 1;
+ uint8_t rsvd : 4;
uint8_t mac_len;
uint8_t iv_length;
uint8_t auth_iv_length;
@@ -636,15 +639,6 @@ cpt_digest_gen_sg_ver1_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_pa
cpt_inst_w4.s.dlen = data_len;
}
- /* Null auth only case enters the if */
- if (unlikely(!hash_type && !ctx->enc_cipher)) {
- cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_MISC;
- /* Minor op is passthrough */
- cpt_inst_w4.s.opcode_minor = 0x03;
- /* Send out completion code only */
- cpt_inst_w4.s.param2 = 0x1;
- }
-
/* DPTR has SG list */
in_buffer = m_vaddr;
@@ -758,15 +752,6 @@ cpt_digest_gen_sg_ver2_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_pa
cpt_inst_w4.s.dlen = data_len;
}
- /* Null auth only case enters the if */
- if (unlikely(!hash_type && !ctx->enc_cipher)) {
- cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_MISC;
- /* Minor op is passthrough */
- cpt_inst_w4.s.opcode_minor = 0x03;
- /* Send out completion code only */
- cpt_inst_w4.s.param2 = 0x1;
- }
-
/* DPTR has SG list */
/* TODO Add error check if space will be sufficient */
@@ -2376,6 +2361,7 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt,
iovec->buf_cnt = index;
return;
}
+
static __rte_always_inline int
fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
@@ -2592,6 +2578,38 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
return ret;
}
+static inline int
+fill_passthrough_params(struct rte_crypto_op *cop, struct cpt_inst_s *inst)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src, *m_dst;
+
+ const union cpt_inst_w4 w4 = {
+ .s.opcode_major = ROC_SE_MAJOR_OP_MISC,
+ .s.opcode_minor = ROC_SE_MISC_MINOR_OP_PASSTHROUGH,
+ .s.param1 = 1,
+ .s.param2 = 1,
+ .s.dlen = 0,
+ };
+
+ m_src = sym_op->m_src;
+ m_dst = sym_op->m_dst;
+
+ if (unlikely(m_dst != NULL && m_dst != m_src)) {
+ void *src = rte_pktmbuf_mtod_offset(m_src, void *, cop->sym->cipher.data.offset);
+ void *dst = rte_pktmbuf_mtod(m_dst, void *);
+ int data_len = cop->sym->cipher.data.length;
+
+ rte_memcpy(dst, src, data_len);
+ }
+
+ inst->w0.u64 = 0;
+ inst->w5.u64 = 0;
+ inst->w4.u64 = w4.u64;
+
+ return 0;
+}
+
static __rte_always_inline int
fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
@@ -3012,6 +3030,9 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_
int ret;
switch (sess->dp_thr_type) {
+ case CPT_DP_THREAD_TYPE_PT:
+ ret = fill_passthrough_params(op, inst);
+ break;
case CPT_DP_THREAD_TYPE_PDCP:
ret = fill_pdcp_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
break;
--
2.25.1
* [PATCH v3 4/8] crypto/cnxk: enable context cache for 103XX
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
Enable the context cache for SE instructions on 106B0
and 103XX.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 6 +++---
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 8 ++++++++
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 2018b0eba5..d0c99d37e8 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -653,7 +653,7 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
inst_w7.s.cptr = (uint64_t)&sess->roc_se_ctx.se_ctx;
- if (roc_errata_cpt_hang_on_mixed_ctx_val())
+ if (hw_ctx_cache_enable())
inst_w7.s.ctx_val = 1;
else
inst_w7.s.cptr += 8;
@@ -729,7 +729,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor
sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
- if (roc_errata_cpt_hang_on_mixed_ctx_val())
+ if (hw_ctx_cache_enable())
roc_se_ctx_init(&sess_priv->roc_se_ctx);
return 0;
@@ -755,7 +755,7 @@ sym_session_clear(struct rte_cryptodev_sym_session *sess, bool is_session_less)
struct cnxk_se_sess *sess_priv = (struct cnxk_se_sess *)sess;
/* Trigger CTX flush + invalidate to remove from CTX_CACHE */
- if (roc_errata_cpt_hang_on_mixed_ctx_val())
+ if (hw_ctx_cache_enable())
roc_cpt_lf_ctx_flush(sess_priv->lf, &sess_priv->roc_se_ctx.se_ctx, true);
if (sess_priv->roc_se_ctx.auth_key != NULL)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index b1a40e8e25..6ee4cbda70 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -13,6 +13,7 @@
#include "roc_constants.h"
#include "roc_cpt.h"
#include "roc_cpt_sg.h"
+#include "roc_errata.h"
#include "roc_se.h"
#define CNXK_CPT_MIN_HEADROOM_REQ 32
@@ -180,4 +181,11 @@ alloc_op_meta(struct roc_se_buf_ptr *buf, int32_t len, struct rte_mempool *cpt_m
return mdata;
}
+
+static __rte_always_inline bool
+hw_ctx_cache_enable(void)
+{
+ return roc_errata_cpt_hang_on_mixed_ctx_val() || roc_model_is_cn10ka_b0() ||
+ roc_model_is_cn10kb_a0();
+}
#endif /* _CNXK_CRYPTODEV_OPS_H_ */
--
2.25.1
* [PATCH v3 5/8] crypto/cnxk: add support for raw APIs
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
From: Anoob Joseph <anoobj@marvell.com>
Add crypto raw API support in the cnxk PMD.
Enable the feature flag to allow execution of the raw test suite.
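The generic raw data-path flow this hooks into looks roughly as follows
(a minimal sketch, assuming an already configured device, queue pair
and symmetric session; error reporting trimmed):

#include <stdlib.h>
#include <rte_cryptodev.h>

static struct rte_crypto_raw_dp_ctx *
raw_dp_ctx_setup(uint8_t dev_id, uint16_t qp_id, void *crypto_sess)
{
        union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = crypto_sess };
        struct rte_crypto_raw_dp_ctx *ctx;
        int size;

        /* Returned size includes the driver-private area
         * (cnxk_sym_dp_ctx in this PMD). */
        size = rte_cryptodev_get_raw_dp_ctx_size(dev_id);
        if (size < 0)
                return NULL;

        ctx = calloc(1, size);
        if (ctx == NULL)
                return NULL;

        if (rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, ctx,
                        RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, 0) < 0) {
                free(ctx);
                return NULL;
        }
        return ctx;
}

Enqueue and dequeue then go through rte_cryptodev_raw_enqueue_burst()
and rte_cryptodev_raw_dequeue_burst(), which dispatch to the cn10k
handlers added below.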
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
doc/guides/cryptodevs/features/cn10k.ini | 1 +
doc/guides/rel_notes/release_23_07.rst | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 459 ++++++++++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev.c | 20 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 1 +
drivers/crypto/cnxk/cnxk_se.h | 293 ++++++++++++++
6 files changed, 762 insertions(+), 13 deletions(-)
diff --git a/doc/guides/cryptodevs/features/cn10k.ini b/doc/guides/cryptodevs/features/cn10k.ini
index d8844b5c83..68a9fddb80 100644
--- a/doc/guides/cryptodevs/features/cn10k.ini
+++ b/doc/guides/cryptodevs/features/cn10k.ini
@@ -17,6 +17,7 @@ Symmetric sessionless = Y
RSA PRIV OP KEY EXP = Y
RSA PRIV OP KEY QT = Y
Digest encrypted = Y
+Sym raw data path API = Y
Inner checksum = Y
;
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 027ae7bd2d..bd41f49458 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -154,6 +154,7 @@ New Features
* Added support for PDCP chain in cn10k crypto driver.
* Added support for SM3 hash operations.
* Added support for AES-CCM in cn9k and cn10k drivers.
+ * Added support for RAW cryptodev APIs in cn10k driver.
* **Updated OpenSSL crypto driver.**
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index e405a2ad9f..47b0e3a6f3 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -1064,6 +1064,461 @@ cn10k_cpt_dev_info_get(struct rte_cryptodev *dev,
}
}
+static inline int
+cn10k_cpt_raw_fill_inst(struct cnxk_iov *iov, struct cnxk_cpt_qp *qp,
+ struct cnxk_sym_dp_ctx *dp_ctx, struct cpt_inst_s inst[],
+ struct cpt_inflight_req *infl_req, void *opaque, const bool is_sg_ver2)
+{
+ struct cnxk_se_sess *sess;
+ int ret;
+
+ const union cpt_res_s res = {
+ .cn10k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ inst[0].w0.u64 = 0;
+ inst[0].w2.u64 = 0;
+ inst[0].w3.u64 = 0;
+
+ sess = dp_ctx->sess;
+
+ switch (sess->dp_thr_type) {
+ case CPT_DP_THREAD_TYPE_PT:
+ ret = fill_raw_passthrough_params(iov, inst);
+ break;
+ case CPT_DP_THREAD_TYPE_FC_CHAIN:
+ ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false,
+ false, is_sg_ver2);
+ break;
+ case CPT_DP_THREAD_TYPE_FC_AEAD:
+ ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false, true,
+ is_sg_ver2);
+ break;
+ case CPT_DP_THREAD_AUTH_ONLY:
+ ret = fill_raw_digest_params(iov, sess, &qp->meta_info, infl_req, &inst[0],
+ is_sg_ver2);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ if (unlikely(ret))
+ return 0;
+
+ inst[0].res_addr = (uint64_t)&infl_req->res;
+ __atomic_store_n(&infl_req->res.u64[0], res.u64[0], __ATOMIC_RELAXED);
+ infl_req->opaque = opaque;
+
+ inst[0].w7.u64 = sess->cpt_inst_w7;
+
+ return 1;
+}
+
+static uint32_t
+cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void *user_data[], int *enqueue_status,
+ const bool is_sgv2)
+{
+ uint16_t lmt_id, nb_allowed, nb_ops = vec->num;
+ uint64_t lmt_base, lmt_arg, io_addr, head;
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qpair;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+ struct pending_queue *pend_q;
+ uint32_t count = 0, index;
+ union cpt_fc_write_s fc;
+ struct cpt_inst_s *inst;
+ uint64_t *fc_addr;
+ int ret, i;
+
+ pend_q = &qp->pend_q;
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ head = pend_q->head;
+ nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
+ nb_ops = RTE_MIN(nb_ops, nb_allowed);
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+ fc_addr = qp->lmtline.fc_addr;
+
+ const uint32_t fc_thresh = qp->lmtline.fc_thresh;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx;
+again:
+ fc.u64[0] = __atomic_load_n(fc_addr, __ATOMIC_RELAXED);
+ if (unlikely(fc.s.qsize > fc_thresh)) {
+ i = 0;
+ goto pend_q_commit;
+ }
+
+ for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ struct cnxk_iov iov;
+
+ index = count + i;
+ infl_req = &pend_q->req_queue[head];
+ infl_req->op_flags = 0;
+
+ cnxk_raw_burst_to_iov(vec, &ofs, index, &iov);
+ ret = cn10k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[2 * i], infl_req,
+ user_data[index], is_sgv2);
+ if (unlikely(ret != 1)) {
+ plt_dp_err("Could not process vec: %d", index);
+ if (i == 0 && count == 0)
+ return -1;
+ else if (i == 0)
+ goto pend_q_commit;
+ else
+ break;
+ }
+ pending_queue_advance(&head, pq_mask);
+ }
+
+ if (i > PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ nb_ops -= i;
+ count += i;
+ goto again;
+ }
+
+pend_q_commit:
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
+
+ pend_q->head = head;
+ pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+
+ *enqueue_status = 1;
+ return count + i;
+}
+
+static uint32_t
+cn10k_cpt_raw_enqueue_burst_sgv2(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void *user_data[],
+ int *enqueue_status)
+{
+ return cn10k_cpt_raw_enqueue_burst(qpair, drv_ctx, vec, ofs, user_data, enqueue_status,
+ true);
+}
+
+static uint32_t
+cn10k_cpt_raw_enqueue_burst_sgv1(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void *user_data[],
+ int *enqueue_status)
+{
+ return cn10k_cpt_raw_enqueue_burst(qpair, drv_ctx, vec, ofs, user_data, enqueue_status,
+ false);
+}
+
+static int
+cn10k_cpt_raw_enqueue(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data,
+ const bool is_sgv2)
+{
+ uint64_t lmt_base, lmt_arg, io_addr, head;
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qpair;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+ uint16_t lmt_id, nb_allowed;
+ struct cpt_inst_s *inst;
+ union cpt_fc_write_s fc;
+ struct cnxk_iov iov;
+ uint64_t *fc_addr;
+ int ret;
+
+ struct pending_queue *pend_q = &qp->pend_q;
+ const uint64_t pq_mask = pend_q->pq_mask;
+ const uint32_t fc_thresh = qp->lmtline.fc_thresh;
+
+ head = pend_q->head;
+ nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
+
+ if (unlikely(nb_allowed == 0))
+ return -1;
+
+ cnxk_raw_to_iov(data_vec, n_data_vecs, &ofs, iv, digest, aad_or_auth_iv, &iov);
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+ fc_addr = qp->lmtline.fc_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+ fc.u64[0] = __atomic_load_n(fc_addr, __ATOMIC_RELAXED);
+ if (unlikely(fc.s.qsize > fc_thresh))
+ return -1;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx;
+ infl_req = &pend_q->req_queue[head];
+ infl_req->op_flags = 0;
+
+ ret = cn10k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[0], infl_req, user_data, is_sgv2);
+ if (unlikely(ret != 1)) {
+ plt_dp_err("Could not process vec");
+ return -1;
+ }
+
+ pending_queue_advance(&head, pq_mask);
+
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+
+ rte_io_wmb();
+
+ pend_q->head = head;
+ pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+
+ return 1;
+}
+
+static int
+cn10k_cpt_raw_enqueue_sgv2(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data)
+{
+ return cn10k_cpt_raw_enqueue(qpair, drv_ctx, data_vec, n_data_vecs, ofs, iv, digest,
+ aad_or_auth_iv, user_data, true);
+}
+
+static int
+cn10k_cpt_raw_enqueue_sgv1(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data)
+{
+ return cn10k_cpt_raw_enqueue(qpair, drv_ctx, data_vec, n_data_vecs, ofs, iv, digest,
+ aad_or_auth_iv, user_data, false);
+}
+
+static inline int
+cn10k_cpt_raw_dequeue_post_process(struct cpt_cn10k_res_s *res)
+{
+ const uint8_t uc_compcode = res->uc_compcode;
+ const uint8_t compcode = res->compcode;
+ int ret = 1;
+
+ if (likely(compcode == CPT_COMP_GOOD)) {
+ if (unlikely(uc_compcode))
+ plt_dp_info("Request failed with microcode error: 0x%x", res->uc_compcode);
+ else
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static uint32_t
+cn10k_cpt_sym_raw_dequeue_burst(void *qptr, uint8_t *drv_ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ uint32_t max_nb_to_dequeue,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue, void **out_user_data,
+ uint8_t is_user_data_array, uint32_t *n_success,
+ int *dequeue_status)
+{
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ uint64_t infl_cnt, pq_tail;
+ union cpt_res_s res;
+ int is_op_success;
+ uint16_t nb_ops;
+ void *opaque;
+ int i = 0;
+
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ RTE_SET_USED(drv_ctx);
+ pq_tail = pend_q->tail;
+ infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask);
+
+ /* Ensure infl_cnt isn't read before data lands */
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ opaque = infl_req->opaque;
+ if (get_dequeue_count)
+ nb_ops = get_dequeue_count(opaque);
+ else
+ nb_ops = max_nb_to_dequeue;
+ nb_ops = RTE_MIN(nb_ops, infl_cnt);
+
+ for (i = 0; i < nb_ops; i++) {
+ is_op_success = 0;
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED);
+
+ if (unlikely(res.cn10k.compcode == CPT_COMP_NOT_DONE)) {
+ if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) {
+ plt_err("Request timed out");
+ cnxk_cpt_dump_on_err(qp);
+ pend_q->time_out = rte_get_timer_cycles() +
+ DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+ }
+ break;
+ }
+
+ pending_queue_advance(&pq_tail, pq_mask);
+
+ if (!cn10k_cpt_raw_dequeue_post_process(&res.cn10k)) {
+ is_op_success = 1;
+ *n_success += 1;
+ }
+
+ if (is_user_data_array) {
+ out_user_data[i] = infl_req->opaque;
+ post_dequeue(out_user_data[i], i, is_op_success);
+ } else {
+ if (i == 0)
+ out_user_data[0] = opaque;
+ post_dequeue(out_user_data[0], i, is_op_success);
+ }
+
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
+ rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
+ }
+
+ pend_q->tail = pq_tail;
+ *dequeue_status = 1;
+
+ return i;
+}
+
+static void *
+cn10k_cpt_sym_raw_dequeue(void *qptr, uint8_t *drv_ctx, int *dequeue_status,
+ enum rte_crypto_op_status *op_status)
+{
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ uint64_t pq_tail;
+ union cpt_res_s res;
+ void *opaque = NULL;
+
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ RTE_SET_USED(drv_ctx);
+
+ pq_tail = pend_q->tail;
+
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED);
+
+ if (unlikely(res.cn10k.compcode == CPT_COMP_NOT_DONE)) {
+ if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) {
+ plt_err("Request timed out");
+ cnxk_cpt_dump_on_err(qp);
+ pend_q->time_out = rte_get_timer_cycles() +
+ DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+ }
+ goto exit;
+ }
+
+ pending_queue_advance(&pq_tail, pq_mask);
+
+ opaque = infl_req->opaque;
+
+ if (!cn10k_cpt_raw_dequeue_post_process(&res.cn10k))
+ *op_status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else
+ *op_status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
+ rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
+
+ *dequeue_status = 1;
+exit:
+ return opaque;
+}
+
+static int
+cn10k_sym_get_raw_dp_ctx_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct cnxk_sym_dp_ctx);
+}
+
+static int
+cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
+{
+ struct cnxk_se_sess *sess = (struct cnxk_se_sess *)session_ctx.crypto_sess;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+
+ if (sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -ENOTSUP;
+
+ if (sess == NULL)
+ return -EINVAL;
+
+ if ((sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP) ||
+ (sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP_CHAIN) ||
+ (sess->dp_thr_type == CPT_DP_THREAD_TYPE_KASUMI))
+ return -ENOTSUP;
+
+ if ((sess->dp_thr_type == CPT_DP_THREAD_AUTH_ONLY) &&
+ ((sess->roc_se_ctx.fc_type == ROC_SE_KASUMI) ||
+ (sess->roc_se_ctx.fc_type == ROC_SE_PDCP)))
+ return -ENOTSUP;
+
+ if ((sess->roc_se_ctx.hash_type == ROC_SE_GMAC_TYPE) ||
+ (sess->roc_se_ctx.hash_type == ROC_SE_SHA1_TYPE))
+ return -ENOTSUP;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+ dp_ctx->sess = sess;
+
+ if (!is_update) {
+ struct cnxk_cpt_vf *vf;
+
+ raw_dp_ctx->qp_data = (struct cnxk_cpt_qp *)dev->data->queue_pairs[qp_id];
+ raw_dp_ctx->dequeue = cn10k_cpt_sym_raw_dequeue;
+ raw_dp_ctx->dequeue_burst = cn10k_cpt_sym_raw_dequeue_burst;
+
+ vf = dev->data->dev_private;
+ if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].sg_ver2 &&
+ vf->cpt.hw_caps[CPT_ENG_TYPE_IE].sg_ver2) {
+ raw_dp_ctx->enqueue = cn10k_cpt_raw_enqueue_sgv2;
+ raw_dp_ctx->enqueue_burst = cn10k_cpt_raw_enqueue_burst_sgv2;
+ } else {
+ raw_dp_ctx->enqueue = cn10k_cpt_raw_enqueue_sgv1;
+ raw_dp_ctx->enqueue_burst = cn10k_cpt_raw_enqueue_burst_sgv1;
+ }
+ }
+
+ return 0;
+}
+
struct rte_cryptodev_ops cn10k_cpt_ops = {
/* Device control ops */
.dev_configure = cnxk_cpt_dev_config,
@@ -1090,4 +1545,8 @@ struct rte_cryptodev_ops cn10k_cpt_ops = {
/* Event crypto ops */
.session_ev_mdata_set = cn10k_cpt_crypto_adapter_ev_mdata_set,
.queue_pair_event_error_query = cnxk_cpt_queue_pair_event_error_query,
+
+ /* Raw data-path API related operations */
+ .sym_get_raw_dp_ctx_size = cn10k_sym_get_raw_dp_ctx_size,
+ .sym_configure_raw_dp_ctx = cn10k_sym_configure_raw_dp_ctx,
};
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 4fa1907cea..4819a14184 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -13,22 +13,16 @@
uint64_t
cnxk_cpt_default_ff_get(void)
{
- uint64_t ff = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
+ uint64_t ff = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_HW_ACCELERATED | RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_IN_PLACE_SGL |
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
- RTE_CRYPTODEV_FF_SECURITY;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | RTE_CRYPTODEV_FF_SECURITY;
if (roc_model_is_cn10k())
- ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM;
+ ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
return ff;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 6ee4cbda70..4a8eb0890b 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -44,6 +44,7 @@ struct cpt_qp_meta_info {
struct cpt_inflight_req {
union cpt_res_s res;
union {
+ void *opaque;
struct rte_crypto_op *cop;
struct rte_event_vector *vec;
};
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index ceb50fa3b6..9f3bff3e68 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -63,6 +63,23 @@ struct cnxk_se_sess {
struct roc_cpt_lf *lf;
} __rte_aligned(ROC_ALIGN);
+struct cnxk_sym_dp_ctx {
+ struct cnxk_se_sess *sess;
+};
+
+struct cnxk_iov {
+ char src[SRC_IOV_SIZE];
+ char dst[SRC_IOV_SIZE];
+ void *iv_buf;
+ void *aad_buf;
+ void *mac_buf;
+ uint16_t c_head;
+ uint16_t c_tail;
+ uint16_t a_head;
+ uint16_t a_tail;
+ int data_len;
+};
+
static __rte_always_inline int fill_sess_gmac(struct rte_crypto_sym_xform *xform,
struct cnxk_se_sess *sess);
@@ -3061,4 +3078,280 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_
return ret;
}
+static __rte_always_inline uint32_t
+prepare_iov_from_raw_vec(struct rte_crypto_vec *vec, struct roc_se_iov_ptr *iovec, uint32_t num)
+{
+ uint32_t i, total_len = 0;
+
+ for (i = 0; i < num; i++) {
+ iovec->bufs[i].vaddr = vec[i].base;
+ iovec->bufs[i].size = vec[i].len;
+
+ total_len += vec[i].len;
+ }
+
+ iovec->buf_cnt = i;
+ return total_len;
+}
+
+static __rte_always_inline void
+cnxk_raw_burst_to_iov(struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs *ofs, int index,
+ struct cnxk_iov *iov)
+{
+ iov->iv_buf = vec->iv[index].va;
+ iov->aad_buf = vec->aad[index].va;
+ iov->mac_buf = vec->digest[index].va;
+
+ iov->data_len =
+ prepare_iov_from_raw_vec(vec->src_sgl[index].vec, (struct roc_se_iov_ptr *)iov->src,
+ vec->src_sgl[index].num);
+
+ if (vec->dest_sgl == NULL)
+ prepare_iov_from_raw_vec(vec->src_sgl[index].vec, (struct roc_se_iov_ptr *)iov->dst,
+ vec->src_sgl[index].num);
+ else
+ prepare_iov_from_raw_vec(vec->dest_sgl[index].vec,
+ (struct roc_se_iov_ptr *)iov->dst,
+ vec->dest_sgl[index].num);
+
+ iov->c_head = ofs->ofs.cipher.head;
+ iov->c_tail = ofs->ofs.cipher.tail;
+
+ iov->a_head = ofs->ofs.auth.head;
+ iov->a_tail = ofs->ofs.auth.tail;
+}
+
+static __rte_always_inline void
+cnxk_raw_to_iov(struct rte_crypto_vec *data_vec, uint16_t n_vecs, union rte_crypto_sym_ofs *ofs,
+ struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad, struct cnxk_iov *iov)
+{
+ iov->iv_buf = iv->va;
+ iov->aad_buf = aad->va;
+ iov->mac_buf = digest->va;
+
+ iov->data_len =
+ prepare_iov_from_raw_vec(data_vec, (struct roc_se_iov_ptr *)iov->src, n_vecs);
+ prepare_iov_from_raw_vec(data_vec, (struct roc_se_iov_ptr *)iov->dst, n_vecs);
+
+ iov->c_head = ofs->ofs.cipher.head;
+ iov->c_tail = ofs->ofs.cipher.tail;
+
+ iov->a_head = ofs->ofs.auth.head;
+ iov->a_tail = ofs->ofs.auth.tail;
+}
+
+static inline void
+raw_memcpy(struct cnxk_iov *iov)
+{
+ struct roc_se_iov_ptr *src = (struct roc_se_iov_ptr *)iov->src;
+ struct roc_se_iov_ptr *dst = (struct roc_se_iov_ptr *)iov->dst;
+ int num = src->buf_cnt;
+ int i;
+
+ /* skip copy in case of inplace */
+ if (dst->bufs[0].vaddr == src->bufs[0].vaddr)
+ return;
+
+ for (i = 0; i < num; i++) {
+ rte_memcpy(dst->bufs[i].vaddr, src->bufs[i].vaddr, src->bufs[i].size);
+ dst->bufs[i].size = src->bufs[i].size;
+ }
+}
+
+static inline int
+fill_raw_passthrough_params(struct cnxk_iov *iov, struct cpt_inst_s *inst)
+{
+ const union cpt_inst_w4 w4 = {
+ .s.opcode_major = ROC_SE_MAJOR_OP_MISC,
+ .s.opcode_minor = ROC_SE_MISC_MINOR_OP_PASSTHROUGH,
+ .s.param1 = 1,
+ .s.param2 = 1,
+ .s.dlen = 0,
+ };
+
+ inst->w0.u64 = 0;
+ inst->w5.u64 = 0;
+ inst->w4.u64 = w4.u64;
+
+ raw_memcpy(iov);
+
+ return 0;
+}
+
+static __rte_always_inline int
+fill_raw_fc_params(struct cnxk_iov *iov, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info,
+ struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst, const bool is_kasumi,
+ const bool is_aead, const bool is_sg_ver2)
+{
+ uint32_t cipher_len, auth_len = 0;
+ struct roc_se_fc_params fc_params;
+ uint8_t cpt_op = sess->cpt_op;
+ uint64_t d_offs, d_lens;
+ uint8_t ccm_iv_buf[16];
+ uint32_t flags = 0;
+ void *mdata = NULL;
+ uint32_t iv_buf[4];
+ int ret;
+
+ fc_params.cipher_iv_len = sess->iv_length;
+ fc_params.ctx = &sess->roc_se_ctx;
+ fc_params.auth_iv_buf = NULL;
+ fc_params.auth_iv_len = 0;
+ fc_params.mac_buf.size = 0;
+ fc_params.mac_buf.vaddr = 0;
+ fc_params.iv_buf = NULL;
+
+ if (likely(is_kasumi || sess->iv_length)) {
+ flags |= ROC_SE_VALID_IV_BUF;
+ fc_params.iv_buf = iov->iv_buf;
+
+ if (sess->short_iv) {
+ memcpy((uint8_t *)iv_buf, iov->iv_buf, 12);
+ iv_buf[3] = rte_cpu_to_be_32(0x1);
+ fc_params.iv_buf = iv_buf;
+ }
+
+ if (sess->aes_ccm) {
+ memcpy((uint8_t *)ccm_iv_buf, iov->iv_buf, sess->iv_length + 1);
+ ccm_iv_buf[0] = 14 - sess->iv_length;
+ fc_params.iv_buf = ccm_iv_buf;
+ }
+ }
+
+ fc_params.src_iov = (void *)iov->src;
+ fc_params.dst_iov = (void *)iov->dst;
+
+ cipher_len = iov->data_len - iov->c_head - iov->c_tail;
+ auth_len = iov->data_len - iov->a_head - iov->a_tail;
+
+ d_offs = (iov->c_head << 16) | iov->a_head;
+ d_lens = ((uint64_t)cipher_len << 32) | auth_len;
+
+ if (is_aead) {
+ uint16_t aad_len = sess->aad_length;
+
+ if (likely(aad_len == 0)) {
+ d_offs = (iov->c_head << 16) | iov->c_head;
+ d_lens = ((uint64_t)cipher_len << 32) | cipher_len;
+ } else {
+ flags |= ROC_SE_VALID_AAD_BUF;
+ fc_params.aad_buf.size = sess->aad_length;
+ /* For AES CCM, AAD is written 18B after aad.data as per API */
+ if (sess->aes_ccm)
+ fc_params.aad_buf.vaddr = PLT_PTR_ADD((uint8_t *)iov->aad_buf, 18);
+ else
+ fc_params.aad_buf.vaddr = iov->aad_buf;
+
+ d_offs = (iov->c_head << 16);
+ d_lens = ((uint64_t)cipher_len << 32);
+ }
+ }
+
+ if (likely(sess->mac_len)) {
+ flags |= ROC_SE_VALID_MAC_BUF;
+ fc_params.mac_buf.size = sess->mac_len;
+ fc_params.mac_buf.vaddr = iov->mac_buf;
+ }
+
+ fc_params.meta_buf.vaddr = NULL;
+ mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
+ if (mdata == NULL) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ if (is_kasumi) {
+ if (cpt_op & ROC_SE_OP_ENCODE)
+ ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst,
+ is_sg_ver2);
+ else
+ ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst,
+ is_sg_ver2);
+ } else {
+ if (cpt_op & ROC_SE_OP_ENCODE)
+ ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst,
+ is_sg_ver2);
+ else
+ ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst,
+ is_sg_ver2);
+ }
+
+ if (unlikely(ret)) {
+ plt_dp_err("Preparing request failed due to bad input arg");
+ goto free_mdata_and_exit;
+ }
+
+ return 0;
+
+free_mdata_and_exit:
+ rte_mempool_put(m_info->pool, infl_req->mdata);
+ return ret;
+}
+
+static __rte_always_inline int
+fill_raw_digest_params(struct cnxk_iov *iov, struct cnxk_se_sess *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ uint16_t auth_op = sess->cpt_op & ROC_SE_OP_AUTH_MASK;
+ struct roc_se_fc_params fc_params;
+ uint16_t mac_len = sess->mac_len;
+ uint64_t d_offs, d_lens;
+ uint32_t auth_len = 0;
+ uint32_t flags = 0;
+ void *mdata = NULL;
+ uint32_t space = 0;
+ int ret;
+
+ memset(&fc_params, 0, sizeof(struct roc_se_fc_params));
+ fc_params.cipher_iv_len = sess->iv_length;
+ fc_params.ctx = &sess->roc_se_ctx;
+
+ mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
+ if (mdata == NULL) {
+ plt_dp_err("Error allocating meta buffer for request");
+ ret = -ENOMEM;
+ goto err_exit;
+ }
+
+ flags |= ROC_SE_VALID_MAC_BUF;
+ fc_params.src_iov = (void *)iov->src;
+ auth_len = iov->data_len - iov->a_head - iov->a_tail;
+ d_lens = auth_len;
+ d_offs = iov->a_head;
+
+ if (auth_op == ROC_SE_OP_AUTH_GENERATE) {
+ fc_params.mac_buf.size = sess->mac_len;
+ fc_params.mac_buf.vaddr = iov->mac_buf;
+ } else {
+ uint64_t *op = mdata;
+
+ /* Need space for storing generated mac */
+ space += 2 * sizeof(uint64_t);
+
+ fc_params.mac_buf.vaddr = (uint8_t *)mdata + space;
+ fc_params.mac_buf.size = mac_len;
+ space += RTE_ALIGN_CEIL(mac_len, 8);
+ op[0] = (uintptr_t)iov->mac_buf;
+ op[1] = mac_len;
+ infl_req->op_flags |= CPT_OP_FLAGS_AUTH_VERIFY;
+ }
+
+ fc_params.meta_buf.vaddr = (uint8_t *)mdata + space;
+ fc_params.meta_buf.size -= space;
+
+ ret = cpt_fc_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, is_sg_ver2);
+ if (ret)
+ goto free_mdata_and_exit;
+
+ return 0;
+
+free_mdata_and_exit:
+ if (infl_req->op_flags & CPT_OP_FLAGS_METABUF)
+ rte_mempool_put(m_info->pool, infl_req->mdata);
+err_exit:
+ return ret;
+}
+
#endif /*_CNXK_SE_H_ */
--
2.25.1
* Re: [PATCH v3 5/8] crypto/cnxk: add support for raw APIs
From: Thomas Monjalon @ 2023-06-23 14:25 UTC
To: Akhil Goyal, Anoob Joseph
Cc: dev, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, Tejasree Kondoj, jerinj
20/06/2023 12:21, Tejasree Kondoj:
> From: Anoob Joseph <anoobj@marvell.com>
>
> Add crypto RAW API support in cnxk PMD
> Enable the flag to allow execution of raw test suite.
>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Compilation with a RISC-V compiler reveals an issue
with iv_buf being NULL in some cases:
In file included from ../../dpdk/drivers/crypto/cnxk/cn10k_cryptodev_ops.c:29:
In function 'sg2_inst_prep',
inlined from 'cpt_kasumi_enc_prep' at ../../dpdk/drivers/crypto/cnxk/cnxk_se.h:1821:3,
inlined from 'cpt_fc_enc_hmac_prep' at ../../dpdk/drivers/crypto/cnxk/cnxk_se.h:1902:9,
inlined from 'fill_raw_digest_params' at ../../dpdk/drivers/crypto/cnxk/cnxk_se.h:3620:8,
inlined from 'cn10k_cpt_raw_fill_inst.isra' at ../../dpdk/drivers/crypto/cnxk/cn10k_cryptodev_ops.c:1098:9:
../../dpdk/drivers/crypto/cnxk/cnxk_se.h:479:25: error: argument 2 null where non-null expected [-Werror=nonnull]
479 | memcpy(iv_d, iv_s, iv_len);
I won't pull this patch or the next one, which tests the raw API.
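A guard of the following shape (illustrative only, not necessarily the
fix that was eventually taken) would keep memcpy() from ever seeing a
NULL source when no IV is supplied:

/* Copy the IV only when one is actually present, so memcpy() never
 * gets a NULL source (which trips -Werror=nonnull). */
if (iv_s != NULL && iv_len != 0)
        memcpy(iv_d, iv_s, iv_len);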
* [PATCH v3 6/8] test/crypto: enable raw crypto tests for crypto_cn10k
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, Ciara Power, Aakash Sasidharan,
Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Anoob Joseph <anoobj@marvell.com>
Enable raw crypto tests with crypto_cn10k.
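Once registered, the suite can be run from the dpdk-test binary with
the cryptodev_cn10k_raw_api_autotest command (assuming a cn10k crypto
device is available).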
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
app/test/test_cryptodev.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index fb2af40b99..2ba37ed4bd 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17519,6 +17519,12 @@ test_cryptodev_cn10k(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_CN10K_PMD));
}
+static int
+test_cryptodev_cn10k_raw_api(void)
+{
+ return run_cryptodev_raw_testsuite(RTE_STR(CRYPTODEV_NAME_CN10K_PMD));
+}
+
static int
test_cryptodev_dpaa2_sec_raw_api(void)
{
@@ -17531,6 +17537,8 @@ test_cryptodev_dpaa_sec_raw_api(void)
return run_cryptodev_raw_testsuite(RTE_STR(CRYPTODEV_NAME_DPAA_SEC_PMD));
}
+REGISTER_TEST_COMMAND(cryptodev_cn10k_raw_api_autotest,
+ test_cryptodev_cn10k_raw_api);
REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_raw_api_autotest,
test_cryptodev_dpaa2_sec_raw_api);
REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_raw_api_autotest,
--
2.25.1
* [PATCH v3 7/8] crypto/cnxk: add support for sm4
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for the SM4 cipher.
Supported modes: SM4_CBC, SM4_ECB, SM4_CTR, SM4_OFB, SM4_CFB.
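An SM4 session is requested through the standard cipher transform; a
minimal sketch for SM4-CBC (the helper, key value and IV offset are
placeholders, not part of the patch):

#include <string.h>
#include <rte_crypto.h>

static void
build_sm4_cbc_xform(struct rte_crypto_sym_xform *xform, const uint8_t *key)
{
        memset(xform, 0, sizeof(*xform));
        xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        xform->cipher.algo = RTE_CRYPTO_CIPHER_SM4_CBC;
        xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        xform->cipher.key.data = key;
        xform->cipher.key.length = 16;  /* ROC_SE_SM4_KEY_LEN */
        /* IV carried in the op's private area, right after the sym op. */
        xform->cipher.iv.offset = sizeof(struct rte_crypto_op) +
                                  sizeof(struct rte_crypto_sym_op);
        xform->cipher.iv.length = 16;
}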
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/cryptodevs/cnxk.rst | 1 +
doc/guides/cryptodevs/features/cn10k.ini | 5 +
doc/guides/rel_notes/release_23_07.rst | 1 +
drivers/common/cnxk/hw/cpt.h | 5 +-
drivers/common/cnxk/roc_se.c | 3 +
drivers/common/cnxk/roc_se.h | 20 ++
drivers/crypto/cnxk/cnxk_cryptodev.h | 2 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 113 ++++++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 5 +-
drivers/crypto/cnxk/cnxk_se.h | 278 +++++++++++++++++-
10 files changed, 426 insertions(+), 7 deletions(-)
diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index 777e8ffb0e..fbe67475be 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -41,6 +41,7 @@ Cipher algorithms:
* ``RTE_CRYPTO_CIPHER_KASUMI_F8``
* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
+* ``RTE_CRYPTO_CIPHER_SM4``
Hash algorithms:
diff --git a/doc/guides/cryptodevs/features/cn10k.ini b/doc/guides/cryptodevs/features/cn10k.ini
index 68a9fddb80..53ee2a720e 100644
--- a/doc/guides/cryptodevs/features/cn10k.ini
+++ b/doc/guides/cryptodevs/features/cn10k.ini
@@ -39,6 +39,11 @@ DES CBC = Y
KASUMI F8 = Y
SNOW3G UEA2 = Y
ZUC EEA3 = Y
+SM4 ECB = Y
+SM4 CBC = Y
+SM4 CTR = Y
+SM4 CFB = Y
+SM4 OFB = Y
;
; Supported authentication algorithms of 'cn10k' crypto driver.
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index bd41f49458..7468eb2047 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -155,6 +155,7 @@ New Features
* Added support for SM3 hash operations.
* Added support for AES-CCM in cn9k and cn10k drivers.
* Added support for RAW cryptodev APIs in cn10k driver.
+ * Added support for SM4 operations in cn10k driver.
* **Updated OpenSSL crypto driver.**
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 82ea076e4c..5e1519e202 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -73,7 +73,10 @@ union cpt_eng_caps {
uint64_t __io des : 1;
uint64_t __io crc : 1;
uint64_t __io mmul : 1;
- uint64_t __io reserved_15_33 : 19;
+ uint64_t __io reserved_15_20 : 6;
+ uint64_t __io sm3 : 1;
+ uint64_t __io sm4 : 1;
+ uint64_t __io reserved_23_33 : 11;
uint64_t __io pdcp_chain : 1;
uint64_t __io sg_ver2 : 1;
uint64_t __io reserved_36_63 : 28;
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index f9b6936267..2662297315 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -757,6 +757,9 @@ roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
case ROC_SE_PDCP_CHAIN:
ctx_len = sizeof(struct roc_se_zuc_snow3g_chain_ctx);
break;
+ case ROC_SE_SM:
+ ctx_len = sizeof(struct roc_se_sm_context);
+ break;
default:
ctx_len = 0;
}
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index 1e7abecf8f..008ab31912 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -17,6 +17,7 @@
#define ROC_SE_MAJOR_OP_PDCP 0x37
#define ROC_SE_MAJOR_OP_KASUMI 0x38
#define ROC_SE_MAJOR_OP_PDCP_CHAIN 0x3C
+#define ROC_SE_MAJOR_OP_SM 0x3D
#define ROC_SE_MAJOR_OP_MISC 0x01ULL
#define ROC_SE_MISC_MINOR_OP_PASSTHROUGH 0x03ULL
@@ -28,6 +29,8 @@
#define ROC_SE_OFF_CTRL_LEN 8
+#define ROC_SE_SM4_KEY_LEN 16
+
#define ROC_SE_ZS_EA 0x1
#define ROC_SE_ZS_IA 0x2
#define ROC_SE_K_F8 0x4
@@ -38,6 +41,7 @@
#define ROC_SE_KASUMI 0x3
#define ROC_SE_HASH_HMAC 0x4
#define ROC_SE_PDCP_CHAIN 0x5
+#define ROC_SE_SM 0x6
#define ROC_SE_OP_CIPHER_ENCRYPT 0x1
#define ROC_SE_OP_CIPHER_DECRYPT 0x2
@@ -125,6 +129,14 @@ typedef enum {
ROC_SE_DES_DOCSISBPI = 0x96,
} roc_se_cipher_type;
+typedef enum {
+ ROC_SM4_ECB = 0x0,
+ ROC_SM4_CBC = 0x1,
+ ROC_SM4_CTR = 0x2,
+ ROC_SM4_CFB = 0x3,
+ ROC_SM4_OFB = 0x4,
+} roc_sm_cipher_type;
+
typedef enum {
/* Microcode errors */
ROC_SE_NO_ERR = 0x00,
@@ -192,6 +204,13 @@ struct roc_se_context {
struct roc_se_hmac_context hmac;
};
+struct roc_se_sm_context {
+ uint64_t rsvd_56_60 : 5;
+ uint64_t enc_cipher : 3;
+ uint64_t rsvd_0_55 : 56;
+ uint8_t encr_key[16];
+};
+
struct roc_se_otk_zuc_ctx {
union {
uint64_t u64;
@@ -325,6 +344,7 @@ struct roc_se_ctx {
struct roc_se_zuc_snow3g_ctx zs_ctx;
struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx;
struct roc_se_kasumi_ctx k_ctx;
+ struct roc_se_sm_context sm_ctx;
};
} se_ctx __plt_aligned(ROC_ALIGN);
uint8_t *auth_key;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index ce45f5d01b..09f5ba0650 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -10,7 +10,7 @@
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 49
+#define CNXK_CPT_MAX_CAPS 54
#define CNXK_SEC_CRYPTO_MAX_CAPS 16
#define CNXK_SEC_MAX_CAPS 9
#define CNXK_AE_EC_ID_MAX 8
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 8a3b0c48d0..4c6357353e 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -1049,6 +1049,109 @@ static const struct rte_cryptodev_capabilities caps_null[] = {
},
};
+static const struct rte_cryptodev_capabilities caps_sm4[] = {
+ { /* SM4 CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SM4_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SM4 ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SM4_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SM4 CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SM4_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SM4 OFB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SM4_OFB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SM4 CFB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SM4_CFB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
static const struct rte_cryptodev_capabilities caps_end[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -1513,9 +1616,13 @@ cn9k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos
}
static void
-cn10k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos)
+cn10k_crypto_caps_add(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps, int *cur_pos)
{
- cpt_caps_add(cnxk_caps, cur_pos, caps_sm3, RTE_DIM(caps_sm3));
+ if (hw_caps->sg_ver2) {
+ CPT_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, sm3);
+ CPT_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, sm4);
+ }
}
static void
@@ -1537,7 +1644,7 @@ crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
cn9k_crypto_caps_add(cnxk_caps, &cur_pos);
if (roc_model_is_cn10k())
- cn10k_crypto_caps_add(cnxk_caps, &cur_pos);
+ cn10k_crypto_caps_add(cnxk_caps, hw_caps, &cur_pos);
cpt_caps_add(cnxk_caps, &cur_pos, caps_null, RTE_DIM(caps_null));
cpt_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index d0c99d37e8..50150d3f06 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -660,7 +660,7 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
/* Set the engine group */
if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 ||
- sess->passthrough)
+ sess->passthrough || sess->is_sm4)
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE];
else
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
@@ -704,6 +704,9 @@ sym_session_configure(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xfor
case ROC_SE_PDCP_CHAIN:
thr_type = CPT_DP_THREAD_TYPE_PDCP_CHAIN;
break;
+ case ROC_SE_SM:
+ thr_type = CPT_DP_THREAD_TYPE_SM;
+ break;
default:
plt_err("Invalid op type");
ret = -ENOTSUP;
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 9f3bff3e68..3444f2d599 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -23,6 +23,7 @@ enum cpt_dp_thread_type {
CPT_DP_THREAD_TYPE_PDCP,
CPT_DP_THREAD_TYPE_PDCP_CHAIN,
CPT_DP_THREAD_TYPE_KASUMI,
+ CPT_DP_THREAD_TYPE_SM,
CPT_DP_THREAD_AUTH_ONLY,
CPT_DP_THREAD_GENERIC,
CPT_DP_THREAD_TYPE_PT,
@@ -49,7 +50,8 @@ struct cnxk_se_sess {
uint8_t short_iv : 1;
uint8_t is_sm3 : 1;
uint8_t passthrough : 1;
- uint8_t rsvd : 4;
+ uint8_t is_sm4 : 1;
+ uint8_t rsvd : 3;
uint8_t mac_len;
uint8_t iv_length;
uint8_t auth_iv_length;
@@ -1059,6 +1061,100 @@ pdcp_chain_sg2_prep(struct roc_se_fc_params *params, struct roc_se_ctx *cpt_ctx,
return ret;
}
+static __rte_always_inline int
+cpt_sm_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, struct roc_se_fc_params *fc_params,
+ struct cpt_inst_s *inst, const bool is_sg_ver2, int decrypt)
+{
+ int32_t inputlen, outputlen, enc_dlen;
+ union cpt_inst_w4 cpt_inst_w4;
+ uint32_t passthrough_len = 0;
+ struct roc_se_ctx *se_ctx;
+ uint32_t encr_data_len;
+ uint32_t encr_offset;
+ uint64_t offset_ctrl;
+ uint8_t iv_len = 16;
+ uint8_t *src = NULL;
+ void *offset_vaddr;
+ int ret;
+
+ encr_offset = ROC_SE_ENCR_OFFSET(d_offs);
+ encr_data_len = ROC_SE_ENCR_DLEN(d_lens);
+
+ se_ctx = fc_params->ctx;
+ cpt_inst_w4.u64 = se_ctx->template_w4.u64;
+
+ if (unlikely(!(flags & ROC_SE_VALID_IV_BUF)))
+ iv_len = 0;
+
+ encr_offset += iv_len;
+ enc_dlen = encr_data_len + encr_offset;
+ enc_dlen = RTE_ALIGN_CEIL(encr_data_len, 8) + encr_offset;
+
+ inputlen = enc_dlen;
+ outputlen = enc_dlen;
+
+ cpt_inst_w4.s.param1 = encr_data_len;
+
+ if (unlikely(encr_offset >> 8)) {
+ plt_dp_err("Offset not supported");
+ plt_dp_err("enc_offset: %d", encr_offset);
+ return -1;
+ }
+
+ offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset);
+
+ /*
+ * In cn9k, cn10k since we have a limitation of
+ * IV & Offset control word not part of instruction
+ * and need to be part of Data Buffer, we check if
+ * head room is there and then only do the Direct mode processing
+ */
+ if (likely((flags & ROC_SE_SINGLE_BUF_INPLACE) && (flags & ROC_SE_SINGLE_BUF_HEADROOM))) {
+ void *dm_vaddr = fc_params->bufs[0].vaddr;
+
+ /* Use Direct mode */
+
+ offset_vaddr = PLT_PTR_SUB(dm_vaddr, ROC_SE_OFF_CTRL_LEN + iv_len);
+ *(uint64_t *)offset_vaddr = offset_ctrl;
+
+ /* DPTR */
+ inst->dptr = (uint64_t)offset_vaddr;
+
+ /* RPTR should just exclude offset control word */
+ inst->rptr = (uint64_t)dm_vaddr - iv_len;
+
+ cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN;
+
+ if (likely(iv_len)) {
+ void *dst = PLT_PTR_ADD(offset_vaddr, ROC_SE_OFF_CTRL_LEN);
+ uint64_t *iv_src = fc_params->iv_buf;
+
+ rte_memcpy(dst, iv_src, iv_len);
+ }
+ inst->w4.u64 = cpt_inst_w4.u64;
+ } else {
+ if (likely(iv_len))
+ src = fc_params->iv_buf;
+
+ inst->w4.u64 = cpt_inst_w4.u64;
+
+ if (is_sg_ver2)
+ ret = sg2_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0,
+ inputlen, outputlen, passthrough_len, flags, 0,
+ decrypt);
+ else
+ ret = sg_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0,
+ inputlen, outputlen, passthrough_len, flags, 0, decrypt);
+
+ if (unlikely(ret)) {
+ plt_dp_err("sg prep failed");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
static __rte_always_inline int
cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens,
struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst,
@@ -1899,6 +1995,71 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
return 0;
}
+static __rte_always_inline int
+fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+{
+ struct roc_se_sm_context *sm_ctx = &sess->roc_se_ctx.se_ctx.sm_ctx;
+ struct rte_crypto_cipher_xform *c_form;
+ roc_sm_cipher_type enc_type = 0;
+
+ c_form = &xform->cipher;
+
+ if (c_form->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
+ sess->cpt_op |= ROC_SE_OP_CIPHER_ENCRYPT;
+ sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_ENCRYPT;
+ } else if (c_form->op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
+ sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
+ sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_DECRYPT;
+ } else {
+ plt_dp_err("Unknown cipher operation\n");
+ return -1;
+ }
+
+ switch (c_form->algo) {
+ case RTE_CRYPTO_CIPHER_SM4_CBC:
+ enc_type = ROC_SM4_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_SM4_ECB:
+ enc_type = ROC_SM4_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_SM4_CTR:
+ enc_type = ROC_SM4_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_SM4_CFB:
+ enc_type = ROC_SM4_CFB;
+ break;
+ case RTE_CRYPTO_CIPHER_SM4_OFB:
+ enc_type = ROC_SM4_OFB;
+ break;
+ default:
+ plt_dp_err("Crypto: Undefined cipher algo %u specified", c_form->algo);
+ return -1;
+ }
+
+ sess->iv_offset = c_form->iv.offset;
+ sess->iv_length = c_form->iv.length;
+
+ if (c_form->key.length != ROC_SE_SM4_KEY_LEN) {
+ plt_dp_err("Invalid cipher params keylen %u", c_form->key.length);
+ return -1;
+ }
+
+ sess->zsk_flag = 0;
+ sess->zs_cipher = 0;
+ sess->aes_gcm = 0;
+ sess->aes_ctr = 0;
+ sess->is_null = 0;
+ sess->is_sm4 = 1;
+ sess->roc_se_ctx.fc_type = ROC_SE_SM;
+
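+ /* SM ops run on the SE engine group; see the matching is_sm4 check
+ * added in cnxk_cpt_inst_w7_get().
+ */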
+ sess->roc_se_ctx.template_w4.s.opcode_major = ROC_SE_MAJOR_OP_SM;
+
+ memcpy(sm_ctx->encr_key, c_form->key.data, ROC_SE_SM4_KEY_LEN);
+ sm_ctx->enc_cipher = enc_type;
+
+ return 0;
+}
+
static __rte_always_inline int
fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
{
@@ -1909,6 +2070,13 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
c_form = &xform->cipher;
+ if ((c_form->algo == RTE_CRYPTO_CIPHER_SM4_CBC) ||
+ (c_form->algo == RTE_CRYPTO_CIPHER_SM4_ECB) ||
+ (c_form->algo == RTE_CRYPTO_CIPHER_SM4_CTR) ||
+ (c_form->algo == RTE_CRYPTO_CIPHER_SM4_CFB) ||
+ (c_form->algo == RTE_CRYPTO_CIPHER_SM4_OFB))
+ return fill_sm_sess_cipher(xform, sess);
+
if (c_form->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
sess->cpt_op |= ROC_SE_OP_CIPHER_ENCRYPT;
else if (c_form->op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
@@ -2379,6 +2547,110 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt,
return;
}
+static __rte_always_inline int
+fill_sm_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct roc_se_fc_params fc_params;
+ struct rte_mbuf *m_src, *m_dst;
+ uint8_t cpt_op = sess->cpt_op;
+ uint64_t d_offs, d_lens;
+ char src[SRC_IOV_SIZE];
+ char dst[SRC_IOV_SIZE];
+ void *mdata = NULL;
+#ifdef CPT_ALWAYS_USE_SG_MODE
+ uint8_t inplace = 0;
+#else
+ uint8_t inplace = 1;
+#endif
+ uint32_t flags = 0;
+ int ret;
+
+ uint32_t ci_data_length = sym_op->cipher.data.length;
+ uint32_t ci_data_offset = sym_op->cipher.data.offset;
+
+ fc_params.cipher_iv_len = sess->iv_length;
+ fc_params.auth_iv_len = 0;
+ fc_params.auth_iv_buf = NULL;
+ fc_params.iv_buf = NULL;
+ fc_params.mac_buf.size = 0;
+ fc_params.mac_buf.vaddr = 0;
+
+ if (likely(sess->iv_length)) {
+ flags |= ROC_SE_VALID_IV_BUF;
+ fc_params.iv_buf = rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset);
+ }
+
+ m_src = sym_op->m_src;
+ m_dst = sym_op->m_dst;
+
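+ /* d_offs and d_lens pack the auth and cipher ranges into one 64-bit
+ * word each: the cipher offset sits in bits 16..31 and the cipher
+ * length in the upper 32 bits. SM4 is cipher-only, so the auth
+ * fields stay zero.
+ */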
+ d_offs = ci_data_offset;
+ d_offs = (d_offs << 16);
+
+ d_lens = ci_data_length;
+ d_lens = (d_lens << 32);
+
+ fc_params.ctx = &sess->roc_se_ctx;
+
+ if (likely(!m_dst && inplace)) {
+ fc_params.dst_iov = fc_params.src_iov = (void *)src;
+
+ prepare_iov_from_pkt_inplace(m_src, &fc_params, &flags);
+
+ } else {
+ /* Out of place processing */
+ fc_params.src_iov = (void *)src;
+ fc_params.dst_iov = (void *)dst;
+
+ /* Build the source SG list from the packet mbuf */
+ if (prepare_iov_from_pkt(m_src, fc_params.src_iov, 0)) {
+ plt_dp_err("Prepare src iov failed");
+ ret = -EINVAL;
+ goto err_exit;
+ }
+
+ if (unlikely(m_dst != NULL)) {
+ if (prepare_iov_from_pkt(m_dst, fc_params.dst_iov, 0)) {
+ plt_dp_err("Prepare dst iov failed for m_dst %p", m_dst);
+ ret = -EINVAL;
+ goto err_exit;
+ }
+ } else {
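+ /* No separate destination mbuf: reuse the source SG list so
+ * the engine writes the result back in place.
+ */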
+ fc_params.dst_iov = (void *)src;
+ }
+ }
+
+ fc_params.meta_buf.vaddr = NULL;
+
+ if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) &&
+ (flags & ROC_SE_SINGLE_BUF_HEADROOM)))) {
+ mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req);
+ if (mdata == NULL) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+ }
+
+ /* Finally prepare the instruction */
+ ret = cpt_sm_prep(flags, d_offs, d_lens, &fc_params, inst, is_sg_ver2,
+ !(cpt_op & ROC_SE_OP_ENCODE));
+
+ if (unlikely(ret)) {
+ plt_dp_err("Preparing request failed due to bad input arg");
+ goto free_mdata_and_exit;
+ }
+
+ return 0;
+
+free_mdata_and_exit:
+ if (infl_req->op_flags & CPT_OP_FLAGS_METABUF)
+ rte_mempool_put(m_info->pool, infl_req->mdata);
+err_exit:
+ return ret;
+}
+
static __rte_always_inline int
fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess,
struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
@@ -3068,6 +3340,10 @@ cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_
ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, true, false,
is_sg_ver2);
break;
+ case CPT_DP_THREAD_TYPE_SM:
+ ret = fill_sm_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
+ break;
case CPT_DP_THREAD_AUTH_ONLY:
ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
break;
--
2.25.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v3 8/8] crypto/cnxk: fix order of ECFPM parameters
2023-06-20 10:20 [PATCH v3 0/8] fixes and improvements to CNXK crypto PMD Tejasree Kondoj
` (6 preceding siblings ...)
2023-06-20 10:21 ` [PATCH v3 7/8] crypto/cnxk: add support for sm4 Tejasree Kondoj
@ 2023-06-20 10:21 ` Tejasree Kondoj
2023-06-20 10:26 ` [PATCH v3 0/8] fixes and improvements to CNXK crypto PMD Akhil Goyal
8 siblings, 0 replies; 11+ messages in thread
From: Tejasree Kondoj @ 2023-06-20 10:21 UTC (permalink / raw)
To: Akhil Goyal
Cc: Gowrishankar Muthukrishnan, Anoob Joseph, Aakash Sasidharan,
Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Fix order of ECFPM parameters.
Fixes: 76618fc4bef ("crypto/cnxk: fix order of ECFPM parameters")
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/cnxk/cnxk_ae.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_ae.h b/drivers/crypto/cnxk/cnxk_ae.h
index 47f000dd5e..7ad259b7f4 100644
--- a/drivers/crypto/cnxk/cnxk_ae.h
+++ b/drivers/crypto/cnxk/cnxk_ae.h
@@ -723,7 +723,8 @@ cnxk_ae_ecfpm_prep(struct rte_crypto_ecpm_op_param *ecpm,
* optionally ROUNDUP8(input point(x and y coordinates)).
* Please note point length is equivalent to prime of the curve
*/
- if (cpt_ver == ROC_CPT_REVISION_ID_96XX_C0) {
+ if (cpt_ver == ROC_CPT_REVISION_ID_96XX_B0 || cpt_ver == ROC_CPT_REVISION_ID_96XX_C0 ||
+ cpt_ver == ROC_CPT_REVISION_ID_98XX) {
dlen = sizeof(fpm_table_iova) + 3 * p_align + scalar_align;
memset(dptr, 0, dlen);
*(uint64_t *)dptr = fpm_table_iova;
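+ /* On these revisions the FPM table IOVA is the first field of the
+ * input buffer (written just above), followed by the aligned scalar
+ * and point data accounted for in dlen.
+ */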
--
2.25.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* RE: [PATCH v3 0/8] fixes and improvements to CNXK crypto PMD
2023-06-20 10:20 [PATCH v3 0/8] fixes and improvements to CNXK crypto PMD Tejasree Kondoj
` (7 preceding siblings ...)
2023-06-20 10:21 ` [PATCH v3 8/8] crypto/cnxk: fix order of ECFPM parameters Tejasree Kondoj
@ 2023-06-20 10:26 ` Akhil Goyal
8 siblings, 0 replies; 11+ messages in thread
From: Akhil Goyal @ 2023-06-20 10:26 UTC (permalink / raw)
To: Tejasree Kondoj
Cc: Anoob Joseph, Aakash Sasidharan, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
> Subject: [PATCH v3 0/8] fixes and improvements to CNXK crypto PMD
>
> This series adds SM4, raw cryptodev API support and
> fixes to CNXK crypto PMD.
>
> v3:
> * Updated feature file for cn10k
Series applied to dpdk-next-crypto
Had just applied your v2 with the doc changes and a few title updates.
Thanks.
^ permalink raw reply [flat|nested] 11+ messages in thread