* [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes
@ 2019-10-11 16:32 Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 01/10] test/crypto: fix PDCP test support Hemant Agrawal
` (10 more replies)
0 siblings, 11 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal
This patch series mainly contains:
1. fixes in the crypto drivers
2. support for ESN-like cases
3. enabling SNOW 3G/ZUC for dpaa_sec
Hemant Agrawal (7):
test/crypto: fix PDCP test support
crypto/dpaa2_sec: fix ipv6 support
test/crypto: increase test cases support for dpaax
test/crypto: add test to test ESN like case
crypto/dpaa_sec: add support for snow3G and ZUC
test/crypto: enable snow3G and zuc cases for dpaa
crypto/dpaa_sec: code reorg for better session mgmt
Vakul Garg (3):
crypto/dpaa_sec: fix to check for aead as well
crypto/dpaa2_sec: enhance gcm descs to not skip aadt
crypto/dpaa2_sec: add support of auth trailer in cipher-auth
app/test/test_cryptodev.c | 483 ++++++++++-
app/test/test_cryptodev_aes_test_vectors.h | 67 ++
doc/guides/cryptodevs/dpaa_sec.rst | 4 +
doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
drivers/crypto/caam_jr/caam_jr.c | 24 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 47 +-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 -
drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++--
drivers/crypto/dpaa_sec/dpaa_sec.c | 844 +++++++++++++-------
drivers/crypto/dpaa_sec/dpaa_sec.h | 109 ++-
10 files changed, 1319 insertions(+), 440 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 01/10] test/crypto: fix PDCP test support
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
` (9 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
Use session_priv_mpool instead of session_mpool when creating the
security session.
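A rough sketch of why the private-data pool is the right argument here
(names mirror the test code; dev_id and xform are placeholders, error
handling omitted):

	/* Symmetric sessions are built in two steps: the session header
	 * comes from session_mpool, the driver-private data from
	 * session_priv_mpool. */
	sess = rte_cryptodev_sym_session_create(ts_params->session_mpool);
	rte_cryptodev_sym_session_init(dev_id, sess, &xform,
			ts_params->session_priv_mpool);

	/* rte_security_session_create() takes a single mempool and the PMD
	 * carves its private session data out of it, so its elements must
	 * be sized for that data -- i.e. pass session_priv_mpool, not the
	 * header pool. */
	ut_params->sec_session = rte_security_session_create(ctx,
			&sess_conf, ts_params->session_priv_mpool);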
Fixes: d883e6e7131b ("test/crypto: add PDCP C-Plane encap cases")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ffed298fd..879b31ceb 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -7144,7 +7144,7 @@ test_pdcp_proto(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_mpool);
+ &sess_conf, ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
@@ -7393,7 +7393,7 @@ test_pdcp_proto_SGL(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_mpool);
+ &sess_conf, ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 02/10] crypto/dpaa2_sec: fix ipv6 support
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 01/10] test/crypto: fix PDCP test support Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
` (8 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
The HW PDB options word was being overwritten: the ESN flag was ORed in
first and then clobbered by the tunnel-type (IPv4/IPv6) assignment. Set
the tunnel-dependent value first and OR in the ESN flag afterwards.
Fixes: 53982ba2805d ("crypto/dpaa2_sec: support IPv6 tunnel for protocol offload")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 2ab34a00f..14f0c523c 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2819,13 +2819,12 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
flc->dhr = SEC_FLC_DHR_INBOUND;
memset(&decap_pdb, 0, sizeof(struct ipsec_decap_pdb));
- decap_pdb.options = sizeof(struct ip) << 16;
- if (ipsec_xform->options.esn)
- decap_pdb.options |= PDBOPTS_ESP_ESN;
decap_pdb.options = (ipsec_xform->tunnel.type ==
RTE_SECURITY_IPSEC_TUNNEL_IPV4) ?
sizeof(struct ip) << 16 :
sizeof(struct rte_ipv6_hdr) << 16;
+ if (ipsec_xform->options.esn)
+ decap_pdb.options |= PDBOPTS_ESP_ESN;
session->dir = DIR_DEC;
bufsize = cnstr_shdsc_ipsec_new_decap(priv->flc_desc[0].desc,
1, 0, SHR_SERIAL,
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 03/10] crypto/dpaa_sec: fix to check for aead as well
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 01/10] test/crypto: fix PDCP test support Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
` (7 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, stable, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
The is_auth_cipher() check should also verify that the session is not an
AEAD session, so AEAD is not treated as an auth-cipher case.
Fixes: 1f14d500bce1 ("crypto/dpaa_sec: support IPsec protocol offload")
Cc: stable@dpdk.org
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 38cfdd378..e89cbcefb 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -266,7 +266,8 @@ static inline int is_auth_cipher(dpaa_sec_session *ses)
return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
(ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
(ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
- (ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC));
+ (ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC) &&
+ (ses->aead_alg == 0));
}
static inline int is_proto_ipsec(dpaa_sec_session *ses)
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (2 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
` (6 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
The GCM descriptors needlessly skip auth_only_len bytes in the output
buffer. This forces workarounds in the dpseci driver code and causes one
cryptodev GCM test case to fail. This patch changes the descriptor
construction and adjusts the dpseci driver accordingly;
test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg now passes.
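For illustration, on the encrypt side with a 400-byte AEAD payload,
16 bytes of AAD (auth_only_len) and a 16-byte ICV, the output FLE/SGE
sizing changes roughly as below (aead_len/aad_len/icv_len are shorthand,
numbers are illustrative only):

	/* before: the descriptor skipped the AAD on output, so the driver
	 * had to account for it in the output lengths and offsets */
	sge->length = aead_len + aad_len;            /* 400 + 16      = 416 */
	fle->length = aead_len + icv_len + aad_len;  /* 400 + 16 + 16 = 432 */

	/* after: the output covers only the ciphertext plus the ICV */
	sge->length = aead_len;                      /* 400                 */
	fle->length = aead_len + icv_len;            /* 400 + 16      = 416 */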
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 25 ++++++++-------------
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 ---------
drivers/crypto/dpaa_sec/dpaa_sec.c | 14 +++++-------
3 files changed, 15 insertions(+), 34 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 14f0c523c..8803e8d3c 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -350,14 +350,13 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
op_fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->aead.data.length + icv_len + auth_only_len) :
- sym_op->aead.data.length + auth_only_len;
+ (sym_op->aead.data.length + icv_len) :
+ sym_op->aead.data.length;
/* Configure Output SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off +
- RTE_ALIGN_CEIL(auth_only_len, 16) - auth_only_len);
- sge->length = mbuf->data_len - sym_op->aead.data.offset + auth_only_len;
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->aead.data.offset);
+ sge->length = mbuf->data_len - sym_op->aead.data.offset;
mbuf = mbuf->next;
/* o/p segs */
@@ -510,24 +509,21 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
if (auth_only_len)
DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->aead.data.length + icv_len + auth_only_len) :
- sym_op->aead.data.length + auth_only_len;
+ (sym_op->aead.data.length + icv_len) :
+ sym_op->aead.data.length;
DPAA2_SET_FLE_SG_EXT(fle);
/* Configure Output SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
- DPAA2_SET_FLE_OFFSET(sge, dst->data_off +
- RTE_ALIGN_CEIL(auth_only_len, 16) - auth_only_len);
- sge->length = sym_op->aead.data.length + auth_only_len;
+ DPAA2_SET_FLE_OFFSET(sge, dst->data_off + sym_op->aead.data.offset);
+ sge->length = sym_op->aead.data.length;
if (sess->dir == DIR_ENC) {
sge++;
DPAA2_SET_FLE_ADDR(sge,
DPAA2_VADDR_TO_IOVA(sym_op->aead.digest.data));
sge->length = sess->digest_length;
- DPAA2_SET_FD_LEN(fd, (sym_op->aead.data.length +
- sess->iv.length + auth_only_len));
}
DPAA2_SET_FLE_FIN(sge);
@@ -566,10 +562,6 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
sge->length = sess->digest_length;
- DPAA2_SET_FD_LEN(fd, (sym_op->aead.data.length +
- sess->digest_length +
- sess->iv.length +
- auth_only_len));
}
DPAA2_SET_FLE_FIN(sge);
@@ -578,6 +570,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
}
+ DPAA2_SET_FD_LEN(fd, fle->length);
return 0;
}
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 32ce787fa..c41cb2292 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -649,11 +649,6 @@ cnstr_shdsc_gcm_encap(uint32_t *descbuf, bool ps, bool swap,
MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, 4, 0);
pzeroassocjump1 = JUMP(p, zeroassocjump1, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, 4, 0);
-
- /* skip assoc data */
- SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
-
/* cryptlen = seqinlen - assoclen */
MATHB(p, SEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
@@ -756,11 +751,6 @@ cnstr_shdsc_gcm_decap(uint32_t *descbuf, bool ps, bool swap,
MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, 4, 0);
pzeroassocjump1 = JUMP(p, zeroassocjump1, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, 4, 0);
-
- /* skip assoc data */
- SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
-
/* read assoc data */
SEQFIFOLOAD(p, AAD1, 0, CLASS1 | VLF | FLUSH1);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index e89cbcefb..c1c6c054a 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1180,10 +1180,9 @@ build_cipher_auth_gcm_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
out_sg = &cf->sg[0];
out_sg->extension = 1;
if (is_encode(ses))
- out_sg->length = sym->aead.data.length + ses->auth_only_len
- + ses->digest_length;
+ out_sg->length = sym->aead.data.length + ses->digest_length;
else
- out_sg->length = sym->aead.data.length + ses->auth_only_len;
+ out_sg->length = sym->aead.data.length;
/* output sg entries */
sg = &cf->sg[2];
@@ -1192,9 +1191,8 @@ build_cipher_auth_gcm_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* 1st seg */
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->aead.data.offset +
- ses->auth_only_len;
- sg->offset = sym->aead.data.offset - ses->auth_only_len;
+ sg->length = mbuf->data_len - sym->aead.data.offset;
+ sg->offset = sym->aead.data.offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1367,8 +1365,8 @@ build_cipher_auth_gcm(struct rte_crypto_op *op, dpaa_sec_session *ses)
sg++;
qm_sg_entry_set64(&cf->sg[0], dpaa_mem_vtop(sg));
qm_sg_entry_set64(sg,
- dst_start_addr + sym->aead.data.offset - ses->auth_only_len);
- sg->length = sym->aead.data.length + ses->auth_only_len;
+ dst_start_addr + sym->aead.data.offset);
+ sg->length = sym->aead.data.length;
length = sg->length;
if (is_encode(ses)) {
cpu_to_hw_sg(sg);
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (3 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
` (5 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for auth-only data trailing the cipher data
(an authentication trailer).
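The regions involved, and how the per-packet lengths are encoded for the
descriptor (sketch mirroring the driver changes below; sym is the
operation's rte_crypto_sym_op):

	/*  |<----------------- auth.data.length ------------------>|
	 *  | auth-only head |  cipher.data.length  | auth-only tail |
	 */
	uint16_t auth_hdr_len  = sym->cipher.data.offset -
				 sym->auth.data.offset;
	uint16_t auth_tail_len = sym->auth.data.length -
				 sym->cipher.data.length - auth_hdr_len;
	/* per-packet override written into the FD command (DPOVRD) */
	uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;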
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/caam_jr/caam_jr.c | 24 +--
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 17 +-
drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++++++++------------
drivers/crypto/dpaa_sec/dpaa_sec.c | 35 +++-
4 files changed, 121 insertions(+), 122 deletions(-)
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 57101d9a6..6ceba18f1 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -450,13 +450,11 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)
&alginfo_c, &alginfo_a);
}
} else {
- /* Auth_only_len is set as 0 here and it will be
- * overwritten in fd for each packet.
- */
+ /* Auth_only_len is overwritten in fd for each job */
shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL,
&alginfo_c, &alginfo_a,
- ses->iv.length, 0,
+ ses->iv.length,
ses->digest_length, ses->dir);
}
}
@@ -1066,10 +1064,11 @@ build_cipher_auth_sg(struct rte_crypto_op *op, struct caam_jr_session *ses)
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
struct sec_job_descriptor_t *jobdescr;
- uint32_t auth_only_len;
-
- auth_only_len = op->sym->auth.data.length -
- op->sym->cipher.data.length;
+ uint16_t auth_hdr_len = sym->cipher.data.offset -
+ sym->auth.data.offset;
+ uint16_t auth_tail_len = sym->auth.data.length -
+ sym->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
if (sym->m_dst) {
mbuf = sym->m_dst;
@@ -1208,10 +1207,11 @@ build_cipher_auth(struct rte_crypto_op *op, struct caam_jr_session *ses)
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
struct sec_job_descriptor_t *jobdescr;
- uint32_t auth_only_len;
-
- auth_only_len = op->sym->auth.data.length -
- op->sym->cipher.data.length;
+ uint16_t auth_hdr_len = sym->cipher.data.offset -
+ sym->auth.data.offset;
+ uint16_t auth_tail_len = sym->auth.data.length -
+ sym->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
src_start_addr = rte_pktmbuf_iova(sym->m_src);
if (sym->m_dst)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 8803e8d3c..23a3fa929 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -583,8 +583,11 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
struct ctxt_priv *priv = sess->ctxt;
struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
struct sec_flow_context *flc;
- uint32_t auth_only_len = sym_op->auth.data.length -
- sym_op->cipher.data.length;
+ uint16_t auth_hdr_len = sym_op->cipher.data.offset -
+ sym_op->auth.data.offset;
+ uint16_t auth_tail_len = sym_op->auth.data.length -
+ sym_op->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
int icv_len = sess->digest_length;
uint8_t *old_icv;
struct rte_mbuf *mbuf;
@@ -727,8 +730,12 @@ build_authenc_fd(dpaa2_sec_session *sess,
struct ctxt_priv *priv = sess->ctxt;
struct qbman_fle *fle, *sge;
struct sec_flow_context *flc;
- uint32_t auth_only_len = sym_op->auth.data.length -
- sym_op->cipher.data.length;
+ uint16_t auth_hdr_len = sym_op->cipher.data.offset -
+ sym_op->auth.data.offset;
+ uint16_t auth_tail_len = sym_op->auth.data.length -
+ sym_op->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
+
int icv_len = sess->digest_length, retval;
uint8_t *old_icv;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -2217,7 +2224,6 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
dpaa2_sec_session *session)
{
- struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
int bufsize;
@@ -2411,7 +2417,6 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
0, SHR_SERIAL,
&cipherdata, &authdata,
session->iv.length,
- ctxt->auth_only_len,
session->digest_length,
session->dir);
if (bufsize < 0) {
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
index d071f46fd..d1ffd7fd2 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -1412,9 +1412,6 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
*
* @ivlen: length of the IV to be read from the input frame, before any data
* to be processed
- * @auth_only_len: length of the data to be authenticated-only (commonly IP
- * header, IV, Sequence number and SPI)
- * Note: Extended Sequence Number processing is NOT supported
*
* @trunc_len: the length of the ICV to be written to the output frame. If 0,
* then the corresponding length of the digest, according to the
@@ -1425,30 +1422,30 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
* will be done correctly:
* For encapsulation:
* Input:
- * +----+----------------+---------------------------------------------+
- * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
- * +----+----------------+---------------------------------------------+
+ * +----+----------------+-----------------------------------------------+
+ * | IV | Auth-only head | Padded data to be auth & Enc | Auth-only tail |
+ * +----+----------------+-----------------------------------------------+
* Output:
* +--------------------------------------+
* | Authenticated & Encrypted data | ICV |
* +--------------------------------+-----+
-
+ *
* For decapsulation:
* Input:
- * +----+----------------+--------------------------------+-----+
- * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
- * +----+----------------+--------------------------------+-----+
+ * +----+----------------+-----------------+----------------------+
+ * | IV | Auth-only head | Auth & Enc data | Auth-only tail | ICV |
+ * +----+----------------+-----------------+----------------------+
* Output:
- * +----+--------------------------+
+ * +----+---------------------------+
* | Decrypted & authenticated data |
- * +----+--------------------------+
+ * +----+---------------------------+
*
* Note: This descriptor can use per-packet commands, encoded as below in the
* DPOVRD register:
- * 32 24 16 0
- * +------+---------------------+
- * | 0x80 | 0x00| auth_only_len |
- * +------+---------------------+
+ * 32 28 16 1
+ * +------+------------------------------+
+ * | 0x8 | auth_tail_len | auth_hdr_len |
+ * +------+------------------------------+
*
* This mechanism is available only for SoCs having SEC ERA >= 3. In other
* words, this will not work for P4080TO2
@@ -1465,7 +1462,7 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
enum rta_share_type share,
struct alginfo *cipherdata,
struct alginfo *authdata,
- uint16_t ivlen, uint16_t auth_only_len,
+ uint16_t ivlen,
uint8_t trunc_len, uint8_t dir)
{
struct program prg;
@@ -1473,16 +1470,16 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
const bool need_dk = (dir == DIR_DEC) &&
(cipherdata->algtype == OP_ALG_ALGSEL_AES) &&
(cipherdata->algmode == OP_ALG_AAI_CBC);
+ int data_type;
- LABEL(skip_patch_len);
LABEL(keyjmp);
LABEL(skipkeys);
- LABEL(aonly_len_offset);
- REFERENCE(pskip_patch_len);
+ LABEL(proc_icv);
+ LABEL(no_auth_tail);
REFERENCE(pkeyjmp);
REFERENCE(pskipkeys);
- REFERENCE(read_len);
- REFERENCE(write_len);
+ REFERENCE(p_proc_icv);
+ REFERENCE(p_no_auth_tail);
PROGRAM_CNTXT_INIT(p, descbuf, 0);
@@ -1500,48 +1497,15 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
SHR_HDR(p, share, 1, SC);
- /*
- * M0 will contain the value provided by the user when creating
- * the shared descriptor. If the user provided an override in
- * DPOVRD, then M0 will contain that value
- */
- MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
-
- if (rta_sec_era >= RTA_SEC_ERA_3) {
- /*
- * Check if the user wants to override the auth-only len
- */
- MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
-
- /*
- * No need to patch the length of the auth-only data read if
- * the user did not override it
- */
- pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
- MATH_N);
-
- /* Get auth-only len in M0 */
- MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
-
- /*
- * Since M0 is used in calculations, don't mangle it, copy
- * its content to M1 and use this for patching.
- */
- MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
-
- read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
- write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
-
- SET_LABEL(p, skip_patch_len);
- }
- /*
- * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
- * value, as provided by the user at descriptor creation time
- */
- if (dir == DIR_ENC)
- MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
- else
- MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+ /* Collect the (auth_tail || auth_hdr) len from DPOVRD */
+ MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+ /* Get auth_hdr len in MATH0 */
+ MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+ /* Get auth_tail len in MATH2 */
+ MATHB(p, MATH2, AND, 0xFFF0000, MATH2, 4, IMMED2);
+ MATHI(p, MATH2, RSHIFT, 16, MATH2, 4, IMMED2);
pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
@@ -1581,61 +1545,70 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
}
+ /* Read IV */
+ if (cipherdata->algmode == OP_ALG_AAI_CTR)
+ SEQLOAD(p, CONTEXT1, 16, ivlen, 0);
+ else
+ SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+ /*
+ * authenticate auth_hdr data
+ */
+ MATHB(p, MATH0, ADD, ZERO, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
/*
* Prepare the length of the data to be both encrypted/decrypted
* and authenticated/checked
*/
- MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ if (dir == DIR_DEC) {
+ MATHB(p, VSEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+ data_type = MSGINSNOOP;
+ } else {
+ data_type = MSGOUTSNOOP;
+ }
- MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+ MATHB(p, VSEQINSZ, ADD, ZERO, VSEQOUTSZ, 4, 0);
/* Prepare for writing the output frame */
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SET_LABEL(p, aonly_len_offset);
- /* Read IV */
- if (cipherdata->algmode == OP_ALG_AAI_CTR)
- SEQLOAD(p, CONTEXT1, 16, ivlen, 0);
- else
- SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+ /* Check if there is no auth-tail */
+ MATHB(p, MATH2, ADD, ZERO, MATH2, 4, 0);
+ p_no_auth_tail = JUMP(p, no_auth_tail, LOCAL_JUMP, ALL_TRUE, MATH_Z);
/*
- * Read data needed only for authentication. This is overwritten above
- * if the user requested it.
+ * Read input plain/cipher text, encrypt/decrypt & auth & write
+ * to output
*/
- SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+ SEQFIFOLOAD(p, data_type, 0, VLF | LAST1 | FLUSH1);
+
+ /* Authenticate auth tail */
+ MATHB(p, MATH2, ADD, ZERO, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+ /* Jump to process icv */
+ p_proc_icv = JUMP(p, proc_icv, LOCAL_JUMP, ALL_FALSE, MATH_Z);
+
+ SET_LABEL(p, no_auth_tail);
- if (dir == DIR_ENC) {
- /*
- * Read input plaintext, encrypt and authenticate & write to
- * output
- */
- SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+ SEQFIFOLOAD(p, data_type, 0, VLF | LAST1 | LAST2 | FLUSH1);
+ SET_LABEL(p, proc_icv);
+
+ if (dir == DIR_ENC)
/* Finally, write the ICV */
SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
- } else {
- /*
- * Read input ciphertext, decrypt and authenticate & write to
- * output
- */
- SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
-
+ else
/* Read the ICV to check */
SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
- }
PATCH_JUMP(p, pkeyjmp, keyjmp);
PATCH_JUMP(p, pskipkeys, skipkeys);
- PATCH_JUMP(p, pskipkeys, skipkeys);
-
- if (rta_sec_era >= RTA_SEC_ERA_3) {
- PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
- PATCH_MOVE(p, read_len, aonly_len_offset);
- PATCH_MOVE(p, write_len, aonly_len_offset);
- }
-
+ PATCH_JUMP(p, p_no_auth_tail, no_auth_tail);
+ PATCH_JUMP(p, p_proc_icv, proc_icv);
return PROGRAM_FINALIZE(p);
}
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index c1c6c054a..019a7119f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -742,7 +742,7 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
*/
shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
- ses->iv.length, 0,
+ ses->iv.length,
ses->digest_length, ses->dir);
}
@@ -1753,7 +1753,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
struct rte_crypto_op *op;
struct dpaa_sec_job *cf;
dpaa_sec_session *ses;
- uint32_t auth_only_len, index, flags[DPAA_SEC_BURST] = {0};
+ uint16_t auth_hdr_len, auth_tail_len;
+ uint32_t index, flags[DPAA_SEC_BURST] = {0};
struct qman_fq *inq[DPAA_SEC_BURST];
while (nb_ops) {
@@ -1809,8 +1810,10 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
goto send_pkts;
}
- auth_only_len = op->sym->auth.data.length -
+ auth_hdr_len = op->sym->auth.data.length -
op->sym->cipher.data.length;
+ auth_tail_len = 0;
+
if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
((op->sym->m_dst == NULL) ||
rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
@@ -1824,8 +1827,15 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
cf = build_cipher_only(op, ses);
} else if (is_aead(ses)) {
cf = build_cipher_auth_gcm(op, ses);
- auth_only_len = ses->auth_only_len;
+ auth_hdr_len = ses->auth_only_len;
} else if (is_auth_cipher(ses)) {
+ auth_hdr_len =
+ op->sym->cipher.data.offset
+ - op->sym->auth.data.offset;
+ auth_tail_len =
+ op->sym->auth.data.length
+ - op->sym->cipher.data.length
+ - auth_hdr_len;
cf = build_cipher_auth(op, ses);
} else {
DPAA_SEC_DP_ERR("not supported ops");
@@ -1842,8 +1852,15 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
cf = build_cipher_only_sg(op, ses);
} else if (is_aead(ses)) {
cf = build_cipher_auth_gcm_sg(op, ses);
- auth_only_len = ses->auth_only_len;
+ auth_hdr_len = ses->auth_only_len;
} else if (is_auth_cipher(ses)) {
+ auth_hdr_len =
+ op->sym->cipher.data.offset
+ - op->sym->auth.data.offset;
+ auth_tail_len =
+ op->sym->auth.data.length
+ - op->sym->cipher.data.length
+ - auth_hdr_len;
cf = build_cipher_auth_sg(op, ses);
} else {
DPAA_SEC_DP_ERR("not supported ops");
@@ -1865,12 +1882,16 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
qm_fd_addr_set64(fd, dpaa_mem_vtop(cf->sg));
fd->_format1 = qm_fd_compound;
fd->length29 = 2 * sizeof(struct qm_sg_entry);
+
/* Auth_only_len is set as 0 in descriptor and it is
* overwritten here in the fd.cmd which will update
* the DPOVRD reg.
*/
- if (auth_only_len)
- fd->cmd = 0x80000000 | auth_only_len;
+ if (auth_hdr_len || auth_tail_len) {
+ fd->cmd = 0x80000000;
+ fd->cmd |=
+ ((auth_tail_len << 16) | auth_hdr_len);
+ }
/* In case of PDCP, per packet HFN is stored in
* mbuf priv after sym_op.
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 06/10] test/crypto: increase test cases support for dpaax
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (4 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
` (4 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 132 ++++++++++++++++++++++++++++++++------
1 file changed, 113 insertions(+), 19 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 879b31ceb..c4c730495 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12294,6 +12294,14 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_PDCP_PROTO_SGL_oop_128B_32B),
#endif
/** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_1),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12308,6 +12316,8 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_AES_GCM_authenticated_encryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
/** AES GCM Authenticated Decryption */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12324,6 +12334,40 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_AES_GCM_authenticated_decryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
+
+ /** AES GCM Authenticated Encryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_7),
+
+ /** AES GCM Authenticated Decryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_7),
/** AES GCM Authenticated Encryption 256 bits key */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12363,17 +12407,31 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
- /** Scatter-Gather */
+ /** Negative tests */
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
-
- /** Negative tests */
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
authentication_verify_HMAC_SHA1_fail_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12431,6 +12489,14 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_PDCP_PROTO_SGL_oop_128B_32B),
#endif
/** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_1),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12445,6 +12511,8 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_AES_GCM_authenticated_encryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
/** AES GCM Authenticated Decryption */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12461,6 +12529,8 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_AES_GCM_authenticated_decryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
/** AES GCM Authenticated Encryption 192 bits key */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12532,16 +12602,6 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
- /** Scatter-Gather */
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
-
/** SNOW 3G encrypt only (UEA2) */
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1),
@@ -12557,9 +12617,9 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_snow3g_decryption_test_case_1_oop),
+ test_snow3g_encryption_test_case_1_oop_sgl),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_snow3g_encryption_test_case_1_oop_sgl),
+ test_snow3g_decryption_test_case_1_oop),
/** SNOW 3G decrypt only (UEA2) */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12606,7 +12666,41 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_zuc_hash_generate_test_case_8),
+ /** HMAC_MD5 Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_2),
+
/** Negative tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
authentication_verify_HMAC_SHA1_fail_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 07/10] test/crypto: add test to test ESN like case
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (5 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
` (3 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal, Vakul Garg
This patch adds support for the case where there is an auth-only header
and an auth-only trailer area, which simulates the IPsec ESN case.
It also enables the new test case for the openssl and dpaaX platforms.
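The new vector exercises the following layout (values taken from
aes128cbc_hmac_sha1_aad_test_vector below; sketch only):

	/* 512-byte buffer: authentication covers all of it, encryption
	 * only bytes 12..507, leaving a 12-byte auth-only header and a
	 * 512 - 12 - 496 = 4-byte auth-only trailer (the ESN-like part). */
	sym_op->auth.data.offset   = 0;    /* reference->auth_offset   */
	sym_op->auth.data.length   = 512;  /* reference->plaintext.len */
	sym_op->cipher.data.offset = 12;   /* reference->cipher_offset */
	sym_op->cipher.data.length = 496;  /* reference->cipher_len    */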
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
app/test/test_cryptodev.c | 287 ++++++++++++++++++++-
app/test/test_cryptodev_aes_test_vectors.h | 67 +++++
2 files changed, 349 insertions(+), 5 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c4c730495..ec0473016 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9741,6 +9741,8 @@ test_AES_GMAC_authentication_verify_test_case_4(void)
struct test_crypto_vector {
enum rte_crypto_cipher_algorithm crypto_algo;
+ unsigned int cipher_offset;
+ unsigned int cipher_len;
struct {
uint8_t data[64];
@@ -9763,6 +9765,7 @@ struct test_crypto_vector {
} ciphertext;
enum rte_crypto_auth_algorithm auth_algo;
+ unsigned int auth_offset;
struct {
uint8_t data[128];
@@ -9838,6 +9841,8 @@ aes128_gmac_test_vector = {
static const struct test_crypto_vector
aes128cbc_hmac_sha1_test_vector = {
.crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .cipher_offset = 0,
+ .cipher_len = 512,
.cipher_key = {
.data = {
0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
@@ -9861,6 +9866,7 @@ aes128cbc_hmac_sha1_test_vector = {
.len = 512
},
.auth_algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .auth_offset = 0,
.auth_key = {
.data = {
0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
@@ -9879,6 +9885,53 @@ aes128cbc_hmac_sha1_test_vector = {
}
};
+static const struct test_crypto_vector
+aes128cbc_hmac_sha1_aad_test_vector = {
+ .crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .cipher_offset = 12,
+ .cipher_len = 496,
+ .cipher_key = {
+ .data = {
+ 0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+ 0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
+ },
+ .len = 16
+ },
+ .plaintext = {
+ .data = plaintext_hash,
+ .len = 512
+ },
+ .ciphertext = {
+ .data = ciphertext512_aes128cbc_aad,
+ .len = 512
+ },
+ .auth_algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .auth_offset = 0,
+ .auth_key = {
+ .data = {
+ 0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+ 0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+ 0xDE, 0xF4, 0xDE, 0xAD
+ },
+ .len = 20
+ },
+ .digest = {
+ .data = {
+ 0x1F, 0x6A, 0xD2, 0x8B, 0x4B, 0xB3, 0xC0, 0x9E,
+ 0x86, 0x9B, 0x3A, 0xF2, 0x00, 0x5B, 0x4F, 0x08,
+ 0x62, 0x8D, 0x62, 0x65
+ },
+ .len = 20
+ }
+};
+
static void
data_corruption(uint8_t *data)
{
@@ -10121,11 +10174,11 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
- sym_op->cipher.data.length = reference->ciphertext.len;
- sym_op->cipher.data.offset = 0;
+ sym_op->cipher.data.length = reference->cipher_len;
+ sym_op->cipher.data.offset = reference->cipher_offset;
- sym_op->auth.data.length = reference->ciphertext.len;
- sym_op->auth.data.offset = 0;
+ sym_op->auth.data.length = reference->plaintext.len;
+ sym_op->auth.data.offset = reference->auth_offset;
return 0;
}
@@ -10336,6 +10389,193 @@ test_authenticated_decryption_fail_when_corruption(
return 0;
}
+static int
+test_authenticated_encryt_with_esn(
+ struct crypto_testsuite_params *ts_params,
+ struct crypto_unittest_params *ut_params,
+ const struct test_crypto_vector *reference)
+{
+ int retval;
+
+ uint8_t *authciphertext, *plaintext, *auth_tag;
+ uint16_t plaintext_pad_len;
+ uint8_t cipher_key[reference->cipher_key.len + 1];
+ uint8_t auth_key[reference->auth_key.len + 1];
+
+ /* Create session */
+ memcpy(cipher_key, reference->cipher_key.data,
+ reference->cipher_key.len);
+ memcpy(auth_key, reference->auth_key.data, reference->auth_key.len);
+
+ /* Setup Cipher Parameters */
+ ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ ut_params->cipher_xform.cipher.algo = reference->crypto_algo;
+ ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ ut_params->cipher_xform.cipher.key.data = cipher_key;
+ ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
+
+ ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+ /* Setup Authentication Parameters */
+ ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+ ut_params->auth_xform.auth.algo = reference->auth_algo;
+ ut_params->auth_xform.auth.key.length = reference->auth_key.len;
+ ut_params->auth_xform.auth.key.data = auth_key;
+ ut_params->auth_xform.auth.digest_length = reference->digest.len;
+ ut_params->auth_xform.next = NULL;
+
+ /* Create Crypto session*/
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess,
+ &ut_params->cipher_xform,
+ ts_params->session_priv_mpool);
+
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ TEST_ASSERT_NOT_NULL(ut_params->ibuf,
+ "Failed to allocate input buffer in mempool");
+
+ /* clear mbuf payload */
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ reference->plaintext.len);
+ TEST_ASSERT_NOT_NULL(plaintext, "no room to append plaintext");
+ memcpy(plaintext, reference->plaintext.data, reference->plaintext.len);
+
+ /* Create operation */
+ retval = create_cipher_auth_operation(ts_params,
+ ut_params,
+ reference, 0);
+
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+
+ TEST_ASSERT_NOT_NULL(ut_params->op, "no crypto operation returned");
+
+ TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+ "crypto op processing failed");
+
+ plaintext_pad_len = RTE_ALIGN_CEIL(reference->plaintext.len, 16);
+
+ authciphertext = rte_pktmbuf_mtod_offset(ut_params->ibuf, uint8_t *,
+ ut_params->op->sym->auth.data.offset);
+ auth_tag = authciphertext + plaintext_pad_len;
+ debug_hexdump(stdout, "ciphertext:", authciphertext,
+ reference->ciphertext.len);
+ debug_hexdump(stdout, "auth tag:", auth_tag, reference->digest.len);
+
+ /* Validate obuf */
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(
+ authciphertext,
+ reference->ciphertext.data,
+ reference->ciphertext.len,
+ "Ciphertext data not as expected");
+
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(
+ auth_tag,
+ reference->digest.data,
+ reference->digest.len,
+ "Generated digest not as expected");
+
+ return TEST_SUCCESS;
+
+}
+
+static int
+test_authenticated_decrypt_with_esn(
+ struct crypto_testsuite_params *ts_params,
+ struct crypto_unittest_params *ut_params,
+ const struct test_crypto_vector *reference)
+{
+ int retval;
+
+ uint8_t *ciphertext;
+ uint8_t cipher_key[reference->cipher_key.len + 1];
+ uint8_t auth_key[reference->auth_key.len + 1];
+
+ /* Create session */
+ memcpy(cipher_key, reference->cipher_key.data,
+ reference->cipher_key.len);
+ memcpy(auth_key, reference->auth_key.data, reference->auth_key.len);
+
+ /* Setup Authentication Parameters */
+ ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+ ut_params->auth_xform.auth.algo = reference->auth_algo;
+ ut_params->auth_xform.auth.key.length = reference->auth_key.len;
+ ut_params->auth_xform.auth.key.data = auth_key;
+ ut_params->auth_xform.auth.digest_length = reference->digest.len;
+ ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+ /* Setup Cipher Parameters */
+ ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ ut_params->cipher_xform.next = NULL;
+ ut_params->cipher_xform.cipher.algo = reference->crypto_algo;
+ ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ ut_params->cipher_xform.cipher.key.data = cipher_key;
+ ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
+
+ /* Create Crypto session*/
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess,
+ &ut_params->auth_xform,
+ ts_params->session_priv_mpool);
+
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ TEST_ASSERT_NOT_NULL(ut_params->ibuf,
+ "Failed to allocate input buffer in mempool");
+
+ /* clear mbuf payload */
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ reference->ciphertext.len);
+ TEST_ASSERT_NOT_NULL(ciphertext, "no room to append ciphertext");
+ memcpy(ciphertext, reference->ciphertext.data,
+ reference->ciphertext.len);
+
+ /* Create operation */
+ retval = create_cipher_auth_verify_operation(ts_params,
+ ut_params,
+ reference);
+
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+
+ TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
+ TEST_ASSERT_EQUAL(ut_params->op->status,
+ RTE_CRYPTO_OP_STATUS_SUCCESS,
+ "crypto op processing passed");
+
+ ut_params->obuf = ut_params->op->sym->m_src;
+ TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+ return 0;
+}
+
static int
create_aead_operation_SGL(enum rte_crypto_aead_operation op,
const struct aead_test_data *tdata,
@@ -10809,6 +11049,24 @@ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt(void)
&aes128cbc_hmac_sha1_test_vector);
}
+static int
+auth_encrypt_AES128CBC_HMAC_SHA1_esn_check(void)
+{
+ return test_authenticated_encryt_with_esn(
+ &testsuite_params,
+ &unittest_params,
+ &aes128cbc_hmac_sha1_aad_test_vector);
+}
+
+static int
+auth_decrypt_AES128CBC_HMAC_SHA1_esn_check(void)
+{
+ return test_authenticated_decrypt_with_esn(
+ &testsuite_params,
+ &unittest_params,
+ &aes128cbc_hmac_sha1_aad_test_vector);
+}
+
#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
/* global AESNI slave IDs for the scheduler test */
@@ -11830,6 +12088,13 @@ static struct unit_test_suite cryptodev_openssl_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
@@ -12441,6 +12706,12 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
@@ -12454,7 +12725,6 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_device_configure_invalid_dev_id),
TEST_CASE_ST(ut_setup, ut_teardown,
test_multi_session),
-
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_chain_dpaa2_sec_all),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12710,6 +12980,13 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index 46239efb7..54a5d75b2 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -356,6 +356,73 @@ static const struct blockcipher_test_data null_test_data_chain_x1_multiple = {
}
};
+static const uint8_t ciphertext512_aes128cbc_aad[] = {
+ 0x57, 0x68, 0x61, 0x74, 0x20, 0x61, 0x20, 0x6C,
+ 0x6F, 0x75, 0x73, 0x79, 0x6D, 0x70, 0xB4, 0xAD,
+ 0x09, 0x7C, 0xD7, 0x52, 0xD6, 0xF2, 0xBF, 0xD1,
+ 0x9D, 0x79, 0xC6, 0xB6, 0x8F, 0x94, 0xEB, 0xD8,
+ 0xBA, 0x5E, 0x01, 0x49, 0x7D, 0xB3, 0xC5, 0xFE,
+ 0x18, 0xF4, 0xE3, 0x60, 0x8C, 0x84, 0x68, 0x13,
+ 0x33, 0x06, 0x85, 0x60, 0xD3, 0xE7, 0x8A, 0xB5,
+ 0x23, 0xA2, 0xDE, 0x52, 0x5C, 0xB6, 0x26, 0x37,
+ 0xBB, 0x23, 0x8A, 0x38, 0x07, 0x85, 0xB6, 0x2E,
+ 0xC3, 0x69, 0x57, 0x79, 0x6B, 0xE4, 0xD7, 0x86,
+ 0x23, 0x72, 0x4C, 0x65, 0x49, 0x08, 0x1E, 0xF3,
+ 0xCC, 0x71, 0x4C, 0x45, 0x97, 0x03, 0xBC, 0xA0,
+ 0x9D, 0xF0, 0x4F, 0x5D, 0xEC, 0x40, 0x6C, 0xC6,
+ 0x52, 0xC0, 0x9D, 0x1C, 0xDC, 0x8B, 0xC2, 0xFA,
+ 0x35, 0xA7, 0x3A, 0x00, 0x04, 0x1C, 0xA6, 0x91,
+ 0x5D, 0xEB, 0x07, 0xA1, 0xB9, 0x3E, 0xD1, 0xB6,
+ 0xCA, 0x96, 0xEC, 0x71, 0xF7, 0x7D, 0xB6, 0x09,
+ 0x3D, 0x19, 0x6E, 0x75, 0x03, 0xC3, 0x1A, 0x4E,
+ 0x5B, 0x4D, 0xEA, 0xD9, 0x92, 0x96, 0x01, 0xFB,
+ 0xA3, 0xC2, 0x6D, 0xC4, 0x17, 0x6B, 0xB4, 0x3B,
+ 0x1E, 0x87, 0x54, 0x26, 0x95, 0x63, 0x07, 0x73,
+ 0xB6, 0xBA, 0x52, 0xD7, 0xA7, 0xD0, 0x9C, 0x75,
+ 0x8A, 0xCF, 0xC4, 0x3C, 0x4A, 0x55, 0x0E, 0x53,
+ 0xEC, 0xE0, 0x31, 0x51, 0xB7, 0xB7, 0xD2, 0xB4,
+ 0xF3, 0x2B, 0x70, 0x6D, 0x15, 0x9E, 0x57, 0x30,
+ 0x72, 0xE5, 0xA4, 0x71, 0x5F, 0xA4, 0xE8, 0x7C,
+ 0x46, 0x58, 0x36, 0x71, 0x91, 0x55, 0xAA, 0x99,
+ 0x3B, 0x3F, 0xF6, 0xA2, 0x9D, 0x27, 0xBF, 0xC2,
+ 0x62, 0x2C, 0x85, 0xB7, 0x51, 0xDD, 0xFD, 0x7B,
+ 0x8B, 0xB5, 0xDD, 0x2A, 0x73, 0xF8, 0x93, 0x9A,
+ 0x3F, 0xAD, 0x1D, 0xF0, 0x46, 0xD1, 0x76, 0x83,
+ 0x71, 0x4E, 0xD3, 0x0D, 0x64, 0x8C, 0xC3, 0xE6,
+ 0x03, 0xED, 0xE8, 0x53, 0x23, 0x1A, 0xC7, 0x86,
+ 0xEB, 0x87, 0xD6, 0x78, 0xF9, 0xFB, 0x9C, 0x1D,
+ 0xE7, 0x4E, 0xC0, 0x70, 0x27, 0x7A, 0x43, 0xE2,
+ 0x5D, 0xA4, 0x10, 0x40, 0xBE, 0x61, 0x0D, 0x2B,
+ 0x25, 0x08, 0x75, 0x91, 0xB5, 0x5A, 0x26, 0xC8,
+ 0x32, 0xA7, 0xC6, 0x88, 0xBF, 0x75, 0x94, 0xCC,
+ 0x58, 0xA4, 0xFE, 0x2F, 0xF7, 0x5C, 0xD2, 0x36,
+ 0x66, 0x55, 0xF0, 0xEA, 0xF5, 0x64, 0x43, 0xE7,
+ 0x6D, 0xE0, 0xED, 0xA1, 0x10, 0x0A, 0x84, 0x07,
+ 0x11, 0x88, 0xFA, 0xA1, 0xD3, 0xA0, 0x00, 0x5D,
+ 0xEB, 0xB5, 0x62, 0x01, 0x72, 0xC1, 0x9B, 0x39,
+ 0x0B, 0xD3, 0xAF, 0x04, 0x19, 0x42, 0xEC, 0xFF,
+ 0x4B, 0xB3, 0x5E, 0x87, 0x27, 0xE4, 0x26, 0x57,
+ 0x76, 0xCD, 0x36, 0x31, 0x5B, 0x94, 0x74, 0xFF,
+ 0x33, 0x91, 0xAA, 0xD1, 0x45, 0x34, 0xC2, 0x11,
+ 0xF0, 0x35, 0x44, 0xC9, 0xD5, 0xA2, 0x5A, 0xC2,
+ 0xE9, 0x9E, 0xCA, 0xE2, 0x6F, 0xD2, 0x40, 0xB4,
+ 0x93, 0x42, 0x78, 0x20, 0x92, 0x88, 0xC7, 0x16,
+ 0xCF, 0x15, 0x54, 0x7B, 0xE1, 0x46, 0x38, 0x69,
+ 0xB8, 0xE4, 0xF1, 0x81, 0xF0, 0x08, 0x6F, 0x92,
+ 0x6D, 0x1A, 0xD9, 0x93, 0xFA, 0xD7, 0x35, 0xFE,
+ 0x7F, 0x59, 0x43, 0x1D, 0x3A, 0x3B, 0xFC, 0xD0,
+ 0x14, 0x95, 0x1E, 0xB2, 0x04, 0x08, 0x4F, 0xC6,
+ 0xEA, 0xE8, 0x22, 0xF3, 0xD7, 0x66, 0x93, 0xAA,
+ 0xFD, 0xA0, 0xFE, 0x03, 0x96, 0x54, 0x78, 0x35,
+ 0x18, 0xED, 0xB7, 0x2F, 0x40, 0xE3, 0x8E, 0x22,
+ 0xC6, 0xDA, 0xB0, 0x8E, 0xA0, 0xA1, 0x62, 0x03,
+ 0x63, 0x34, 0x11, 0xF5, 0x9E, 0xAA, 0x6B, 0xC4,
+ 0x14, 0x75, 0x4C, 0xF4, 0xD8, 0xD9, 0xF1, 0x76,
+ 0xE3, 0xD3, 0x55, 0xCE, 0x22, 0x7D, 0x4A, 0xB7,
+ 0xBB, 0x7F, 0x4F, 0x09, 0x88, 0x70, 0x6E, 0x09,
+ 0x84, 0x6B, 0x24, 0x19, 0x2C, 0x20, 0x73, 0x75
+};
+
/* AES128-CTR-SHA1 test vector */
static const struct blockcipher_test_data aes_test_data_1 = {
.crypto_algo = RTE_CRYPTO_CIPHER_AES_CTR,
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH 08/10] crypto/dpaa_sec: add support for snow3G and ZUC
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (6 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
This patch adds support for ZUC and SNOW 3G in non-PDCP offload mode.
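For example, this enables plain cryptodev sessions such as the following
(no rte_security/PDCP context; key and IV_OFFSET are illustrative
placeholders):

	struct rte_crypto_sym_xform cipher_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.next = NULL,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
			.key = { .data = key, .length = 16 },  /* 128-bit CK */
			.iv = { .offset = IV_OFFSET, .length = 16 },
		},
	};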
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/cryptodevs/dpaa_sec.rst | 4 +
doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
drivers/crypto/dpaa_sec/dpaa_sec.c | 378 ++++++++++++++++----
drivers/crypto/dpaa_sec/dpaa_sec.h | 91 ++++-
4 files changed, 407 insertions(+), 70 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index 0a2600634..7e9fcf625 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -58,6 +58,8 @@ Cipher algorithms:
* ``RTE_CRYPTO_CIPHER_AES128_CTR``
* ``RTE_CRYPTO_CIPHER_AES192_CTR``
* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
+* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
Hash algorithms:
@@ -66,7 +68,9 @@ Hash algorithms:
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
AEAD algorithms:
diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini b/doc/guides/cryptodevs/features/dpaa_sec.ini
index 954a70808..243f3e1d6 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -25,6 +25,8 @@ AES CTR (128) = Y
AES CTR (192) = Y
AES CTR (256) = Y
3DES CBC = Y
+SNOW3G UEA2 = Y
+ZUC EEA3 = Y
;
; Supported authentication algorithms of the 'dpaa_sec' crypto driver.
@@ -36,6 +38,8 @@ SHA224 HMAC = Y
SHA256 HMAC = Y
SHA384 HMAC = Y
SHA512 HMAC = Y
+SNOW3G UIA2 = Y
+ZUC EIA3 = Y
;
; Supported AEAD algorithms of the 'dpaa_sec' crypto driver.
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 019a7119f..970cdf0cc 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -630,39 +630,171 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
} else if (is_proto_pdcp(ses)) {
shared_desc_len = dpaa_sec_prep_pdcp_cdb(ses);
} else if (is_cipher_only(ses)) {
- caam_cipher_alg(ses, &alginfo_c);
- if (alginfo_c.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
-
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- } else if (is_auth_only(ses)) {
- caam_auth_alg(ses, &alginfo_a);
- if (alginfo_a.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
+ switch (ses->cipher_alg) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ alginfo_c.algtype = 0;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ alginfo_c.algtype = OP_ALG_ALGSEL_AES;
+ alginfo_c.algmode = OP_ALG_AAI_CBC;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
+ alginfo_c.algmode = OP_ALG_AAI_CBC;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ alginfo_c.algtype = OP_ALG_ALGSEL_AES;
+ alginfo_c.algmode = OP_ALG_AAI_CTR;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CTR:
+ alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
+ alginfo_c.algmode = OP_ALG_AAI_CTR;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ alginfo_c.algtype = OP_ALG_ALGSEL_SNOW_F8;
+ shared_desc_len = cnstr_shdsc_snow_f8(
+ cdb->sh_desc, true, swap,
+ &alginfo_c,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ alginfo_c.algtype = OP_ALG_ALGSEL_ZUCE;
+ shared_desc_len = cnstr_shdsc_zuce(
+ cdb->sh_desc, true, swap,
+ &alginfo_c,
+ ses->dir);
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported cipher alg %d",
+ ses->cipher_alg);
return -ENOTSUP;
}
-
+ } else if (is_auth_only(ses)) {
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
-
- shared_desc_len = cnstr_shdsc_hmac(cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
+ switch (ses->auth_alg) {
+ case RTE_CRYPTO_AUTH_NULL:
+ alginfo_a.algtype = 0;
+ ses->digest_length = 0;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_MD5;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA1;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA224;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA256;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA384;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA512;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SNOW_F9;
+ alginfo_a.algmode = OP_ALG_AAI_F9;
+ ses->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
+ shared_desc_len = cnstr_shdsc_snow_f9(
+ cdb->sh_desc, true, swap,
+ &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ alginfo_a.algtype = OP_ALG_ALGSEL_ZUCA;
+ alginfo_a.algmode = OP_ALG_AAI_F9;
+ ses->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
+ shared_desc_len = cnstr_shdsc_zuca(
+ cdb->sh_desc, true, swap,
+ &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
+ }
} else if (is_aead(ses)) {
caam_aead_alg(ses, &alginfo);
if (alginfo.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
@@ -849,6 +981,21 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
struct qm_sg_entry *sg, *out_sg, *in_sg;
phys_addr_t start_addr;
uint8_t *old_digest, extra_segs;
+ int data_len, data_offset;
+
+ data_len = sym->auth.data.length;
+ data_offset = sym->auth.data.offset;
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("AUTH: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
if (is_decode(ses))
extra_segs = 3;
@@ -879,23 +1026,52 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* need to extend the input to a compound frame */
in_sg->extension = 1;
in_sg->final = 1;
- in_sg->length = sym->auth.data.length;
+ in_sg->length = data_len;
qm_sg_entry_set64(in_sg, dpaa_mem_vtop(&cf->sg[2]));
/* 1st seg */
sg = in_sg + 1;
- qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->auth.data.offset;
- sg->offset = sym->auth.data.offset;
- /* Successive segs */
- mbuf = mbuf->next;
- while (mbuf) {
+ if (ses->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ ses->iv.offset);
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sg->length = 12;
+ } else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sg->length = 8;
+ } else {
+ sg->length = ses->iv.length;
+ }
+ qm_sg_entry_set64(sg, dpaa_mem_vtop(iv_ptr));
+ in_sg->length += sg->length;
cpu_to_hw_sg(sg);
sg++;
- qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len;
- mbuf = mbuf->next;
+ }
+
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = data_offset;
+
+ if (data_len <= (mbuf->data_len - data_offset)) {
+ sg->length = data_len;
+ } else {
+ sg->length = mbuf->data_len - data_offset;
+
+ /* remaining i/p segs */
+ while ((data_len = data_len - sg->length) &&
+ (mbuf = mbuf->next)) {
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ if (data_len > mbuf->data_len)
+ sg->length = mbuf->data_len;
+ else
+ sg->length = data_len;
+ }
}
if (is_decode(ses)) {
@@ -908,9 +1084,6 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
qm_sg_entry_set64(sg, start_addr);
sg->length = ses->digest_length;
in_sg->length += ses->digest_length;
- } else {
- /* Digest calculation case */
- sg->length -= ses->digest_length;
}
sg->final = 1;
cpu_to_hw_sg(sg);
@@ -934,9 +1107,24 @@ build_auth_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
struct rte_mbuf *mbuf = sym->m_src;
struct dpaa_sec_job *cf;
struct dpaa_sec_op_ctx *ctx;
- struct qm_sg_entry *sg;
+ struct qm_sg_entry *sg, *in_sg;
rte_iova_t start_addr;
uint8_t *old_digest;
+ int data_len, data_offset;
+
+ data_len = sym->auth.data.length;
+ data_offset = sym->auth.data.offset;
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("AUTH: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
@@ -954,36 +1142,55 @@ build_auth_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
cpu_to_hw_sg(sg);
/* input */
- sg = &cf->sg[1];
- if (is_decode(ses)) {
- /* need to extend the input to a compound frame */
- sg->extension = 1;
- qm_sg_entry_set64(sg, dpaa_mem_vtop(&cf->sg[2]));
- sg->length = sym->auth.data.length + ses->digest_length;
- sg->final = 1;
+ in_sg = &cf->sg[1];
+ /* need to extend the input to a compound frame */
+ in_sg->extension = 1;
+ in_sg->final = 1;
+ in_sg->length = data_len;
+ qm_sg_entry_set64(in_sg, dpaa_mem_vtop(&cf->sg[2]));
+ sg = &cf->sg[2];
+
+ if (ses->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ ses->iv.offset);
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sg->length = 12;
+ } else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sg->length = 8;
+ } else {
+ sg->length = ses->iv.length;
+ }
+ qm_sg_entry_set64(sg, dpaa_mem_vtop(iv_ptr));
+ in_sg->length += sg->length;
cpu_to_hw_sg(sg);
+ sg++;
+ }
- sg = &cf->sg[2];
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = data_offset;
+ sg->length = data_len;
+
+ if (is_decode(ses)) {
+ /* Digest verification case */
+ cpu_to_hw_sg(sg);
/* hash result or digest, save digest first */
rte_memcpy(old_digest, sym->auth.digest.data,
- ses->digest_length);
- qm_sg_entry_set64(sg, start_addr + sym->auth.data.offset);
- sg->length = sym->auth.data.length;
- cpu_to_hw_sg(sg);
-
+ ses->digest_length);
/* let's check digest by hw */
start_addr = dpaa_mem_vtop(old_digest);
sg++;
qm_sg_entry_set64(sg, start_addr);
sg->length = ses->digest_length;
- sg->final = 1;
- cpu_to_hw_sg(sg);
- } else {
- qm_sg_entry_set64(sg, start_addr + sym->auth.data.offset);
- sg->length = sym->auth.data.length;
- sg->final = 1;
- cpu_to_hw_sg(sg);
+ in_sg->length += ses->digest_length;
}
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+ cpu_to_hw_sg(in_sg);
return cf;
}
@@ -999,6 +1206,21 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
uint8_t req_segs;
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
+ int data_len, data_offset;
+
+ data_len = sym->cipher.data.length;
+ data_offset = sym->cipher.data.offset;
+
+ if (ses->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ ses->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
if (sym->m_dst) {
mbuf = sym->m_dst;
@@ -1007,7 +1229,6 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
mbuf = sym->m_src;
req_segs = mbuf->nb_segs * 2 + 3;
}
-
if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Cipher: Max sec segs supported is %d",
MAX_SG_ENTRIES);
@@ -1024,15 +1245,15 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* output */
out_sg = &cf->sg[0];
out_sg->extension = 1;
- out_sg->length = sym->cipher.data.length;
+ out_sg->length = data_len;
qm_sg_entry_set64(out_sg, dpaa_mem_vtop(&cf->sg[2]));
cpu_to_hw_sg(out_sg);
/* 1st seg */
sg = &cf->sg[2];
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->cipher.data.offset;
- sg->offset = sym->cipher.data.offset;
+ sg->length = mbuf->data_len - data_offset;
+ sg->offset = data_offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1051,7 +1272,7 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
in_sg = &cf->sg[1];
in_sg->extension = 1;
in_sg->final = 1;
- in_sg->length = sym->cipher.data.length + ses->iv.length;
+ in_sg->length = data_len + ses->iv.length;
sg++;
qm_sg_entry_set64(in_sg, dpaa_mem_vtop(sg));
@@ -1065,8 +1286,8 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* 1st seg */
sg++;
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->cipher.data.offset;
- sg->offset = sym->cipher.data.offset;
+ sg->length = mbuf->data_len - data_offset;
+ sg->offset = data_offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1093,6 +1314,21 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
rte_iova_t src_start_addr, dst_start_addr;
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
+ int data_len, data_offset;
+
+ data_len = sym->cipher.data.length;
+ data_offset = sym->cipher.data.offset;
+
+ if (ses->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ ses->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
@@ -1110,8 +1346,8 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* output */
sg = &cf->sg[0];
- qm_sg_entry_set64(sg, dst_start_addr + sym->cipher.data.offset);
- sg->length = sym->cipher.data.length + ses->iv.length;
+ qm_sg_entry_set64(sg, dst_start_addr + data_offset);
+ sg->length = data_len + ses->iv.length;
cpu_to_hw_sg(sg);
/* input */
@@ -1120,7 +1356,7 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* need to extend the input to a compound frame */
sg->extension = 1;
sg->final = 1;
- sg->length = sym->cipher.data.length + ses->iv.length;
+ sg->length = data_len + ses->iv.length;
qm_sg_entry_set64(sg, dpaa_mem_vtop(&cf->sg[2]));
cpu_to_hw_sg(sg);
@@ -1130,8 +1366,8 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
cpu_to_hw_sg(sg);
sg++;
- qm_sg_entry_set64(sg, src_start_addr + sym->cipher.data.offset);
- sg->length = sym->cipher.data.length;
+ qm_sg_entry_set64(sg, src_start_addr + data_offset);
+ sg->length = data_len;
sg->final = 1;
cpu_to_hw_sg(sg);
@@ -2066,6 +2302,10 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev __rte_unused,
}
session->auth_key.length = xform->auth.key.length;
session->digest_length = xform->auth.digest_length;
+ if (session->cipher_alg == RTE_CRYPTO_CIPHER_NULL) {
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ }
memcpy(session->auth_key.data, xform->auth.key.data,
xform->auth.key.length);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 009ab7536..149923aa1 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -416,7 +416,96 @@ static const struct rte_cryptodev_capabilities dpaa_sec_capabilities[] = {
}, }
}, }
},
-
+ { /* SNOW 3G (UIA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SNOW 3G (UEA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EEA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EIA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.17.1
* [dpdk-dev] [PATCH 09/10] test/crypto: enable snow3G and zuc cases for dpaa
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (7 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
This patch adds the SNOW 3G and ZUC cipher-only and auth-only test cases to the dpaa_sec test suite.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 64 +++++++++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ec0473016..a3ae2e2f5 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12672,6 +12672,70 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
+ /** SNOW 3G encrypt only (UEA2) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_5),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1_oop),
+
+ /** SNOW 3G decrypt only (UEA2) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_5),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_3),
+
+ /** ZUC encrypt only (EEA3) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_5),
+
+ /** ZUC authenticate (EIA3) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_8),
+
/** Negative tests */
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_auth_encryption_fail_iv_corrupt),
--
2.17.1
* [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (8 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
@ 2019-10-11 16:32 ` Hemant Agrawal
2019-10-11 19:03 ` Aaron Conole
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
10 siblings, 1 reply; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-11 16:32 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
The session-related parameters are now populated only at session
create time. At runtime, on the first packet, the CDB preparation
simply references the pre-resolved session data instead of
re-interpreting the transforms again.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
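[Editor's note, not part of the patch: a self-contained sketch of the pattern described above, using simplified stand-in types and placeholder selector values rather than the driver's real structures. The transform is interpreted once at session create; the descriptor-preparation path only reads the cached result.]

#include <stdint.h>
#include <stdio.h>

enum cipher_algo { CIPHER_AES_CBC, CIPHER_SNOW3G_UEA2 };

struct sec_session {
	uint32_t alg;		/* hardware algorithm selector, fixed at create */
	uint32_t algmode;	/* hardware algorithm mode, fixed at create */
};

/* Session create path: interpret the transform exactly once. */
static int session_create(struct sec_session *s, enum cipher_algo algo)
{
	switch (algo) {
	case CIPHER_AES_CBC:
		s->alg = 0x10; s->algmode = 0x1;	/* placeholder values */
		return 0;
	case CIPHER_SNOW3G_UEA2:
		s->alg = 0x20; s->algmode = 0x0;	/* placeholder values */
		return 0;
	}
	return -1;	/* unsupported algorithm */
}

/* Per-packet descriptor prep: only read what the session already resolved. */
static void prep_descriptor(const struct sec_session *s)
{
	printf("building CDB with algtype=0x%x algmode=0x%x\n",
	       s->alg, s->algmode);
}

int main(void)
{
	struct sec_session s;

	if (session_create(&s, CIPHER_SNOW3G_UEA2) == 0)
		prep_descriptor(&s);
	return 0;
}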
drivers/crypto/dpaa_sec/dpaa_sec.c | 612 ++++++++++++++++-------------
drivers/crypto/dpaa_sec/dpaa_sec.h | 18 +-
2 files changed, 345 insertions(+), 285 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 970cdf0cc..b932bf1cb 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -290,102 +290,6 @@ static inline int is_decode(dpaa_sec_session *ses)
return ses->dir == DIR_DEC;
}
-static inline void
-caam_auth_alg(dpaa_sec_session *ses, struct alginfo *alginfo_a)
-{
- switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_NULL:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_NULL : 0;
- ses->digest_length = 0;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_MD5_96 : OP_ALG_ALGSEL_MD5;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA1_96 : OP_ALG_ALGSEL_SHA1;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA1_160 : OP_ALG_ALGSEL_SHA224;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_256_128 : OP_ALG_ALGSEL_SHA256;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_384_192 : OP_ALG_ALGSEL_SHA384;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_512_256 : OP_ALG_ALGSEL_SHA512;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- default:
- DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
- }
-}
-
-static inline void
-caam_cipher_alg(dpaa_sec_session *ses, struct alginfo *alginfo_c)
-{
- switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_NULL:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_NULL : 0;
- break;
- case RTE_CRYPTO_CIPHER_AES_CBC:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_AES_CBC : OP_ALG_ALGSEL_AES;
- alginfo_c->algmode = OP_ALG_AAI_CBC;
- break;
- case RTE_CRYPTO_CIPHER_3DES_CBC:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_3DES : OP_ALG_ALGSEL_3DES;
- alginfo_c->algmode = OP_ALG_AAI_CBC;
- break;
- case RTE_CRYPTO_CIPHER_AES_CTR:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_AES_CTR : OP_ALG_ALGSEL_AES;
- alginfo_c->algmode = OP_ALG_AAI_CTR;
- break;
- default:
- DPAA_SEC_ERR("unsupported cipher alg %d", ses->cipher_alg);
- }
-}
-
-static inline void
-caam_aead_alg(dpaa_sec_session *ses, struct alginfo *alginfo)
-{
- switch (ses->aead_alg) {
- case RTE_CRYPTO_AEAD_AES_GCM:
- alginfo->algtype = OP_ALG_ALGSEL_AES;
- alginfo->algmode = OP_ALG_AAI_GCM;
- break;
- default:
- DPAA_SEC_ERR("unsupported AEAD alg %d", ses->aead_alg);
- }
-}
-
static int
dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
{
@@ -400,58 +304,24 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
int swap = true;
#endif
- switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- cipherdata.algtype = PDCP_CIPHER_TYPE_SNOW;
- break;
- case RTE_CRYPTO_CIPHER_ZUC_EEA3:
- cipherdata.algtype = PDCP_CIPHER_TYPE_ZUC;
- break;
- case RTE_CRYPTO_CIPHER_AES_CTR:
- cipherdata.algtype = PDCP_CIPHER_TYPE_AES;
- break;
- case RTE_CRYPTO_CIPHER_NULL:
- cipherdata.algtype = PDCP_CIPHER_TYPE_NULL;
- break;
- default:
- DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
- ses->cipher_alg);
- return -1;
- }
-
cipherdata.key = (size_t)ses->cipher_key.data;
cipherdata.keylen = ses->cipher_key.length;
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
+ cipherdata.algtype = ses->cipher_key.alg;
+ cipherdata.algmode = ses->cipher_key.algmode;
cdb->sh_desc[0] = cipherdata.keylen;
cdb->sh_desc[1] = 0;
cdb->sh_desc[2] = 0;
if (ses->auth_alg) {
- switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
- authdata.algtype = PDCP_AUTH_TYPE_SNOW;
- break;
- case RTE_CRYPTO_AUTH_ZUC_EIA3:
- authdata.algtype = PDCP_AUTH_TYPE_ZUC;
- break;
- case RTE_CRYPTO_AUTH_AES_CMAC:
- authdata.algtype = PDCP_AUTH_TYPE_AES;
- break;
- case RTE_CRYPTO_AUTH_NULL:
- authdata.algtype = PDCP_AUTH_TYPE_NULL;
- break;
- default:
- DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
- ses->auth_alg);
- return -1;
- }
-
authdata.key = (size_t)ses->auth_key.data;
authdata.keylen = ses->auth_key.length;
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ authdata.algtype = ses->auth_key.alg;
+ authdata.algmode = ses->auth_key.algmode;
p_authdata = &authdata;
@@ -541,27 +411,19 @@ dpaa_sec_prep_ipsec_cdb(dpaa_sec_session *ses)
int swap = true;
#endif
- caam_cipher_alg(ses, &cipherdata);
- if (cipherdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
cipherdata.key = (size_t)ses->cipher_key.data;
cipherdata.keylen = ses->cipher_key.length;
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
-
- caam_auth_alg(ses, &authdata);
- if (authdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
- return -ENOTSUP;
- }
+ cipherdata.algtype = ses->cipher_key.alg;
+ cipherdata.algmode = ses->cipher_key.algmode;
authdata.key = (size_t)ses->auth_key.data;
authdata.keylen = ses->auth_key.length;
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ authdata.algtype = ses->auth_key.alg;
+ authdata.algmode = ses->auth_key.algmode;
cdb->sh_desc[0] = cipherdata.keylen;
cdb->sh_desc[1] = authdata.keylen;
@@ -625,58 +487,26 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
memset(cdb, 0, sizeof(struct sec_cdb));
- if (is_proto_ipsec(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_IPSEC:
shared_desc_len = dpaa_sec_prep_ipsec_cdb(ses);
- } else if (is_proto_pdcp(ses)) {
+ break;
+ case DPAA_SEC_PDCP:
shared_desc_len = dpaa_sec_prep_pdcp_cdb(ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
+ alginfo_c.algtype = ses->cipher_key.alg;
+ alginfo_c.algmode = ses->cipher_key.algmode;
+
switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_NULL:
- alginfo_c.algtype = 0;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_AES_CBC:
- alginfo_c.algtype = OP_ALG_ALGSEL_AES;
- alginfo_c.algmode = OP_ALG_AAI_CBC;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
- alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
- alginfo_c.algmode = OP_ALG_AAI_CBC;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_AES_CTR:
- alginfo_c.algtype = OP_ALG_ALGSEL_AES;
- alginfo_c.algmode = OP_ALG_AAI_CTR;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_3DES_CTR:
- alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
- alginfo_c.algmode = OP_ALG_AAI_CTR;
shared_desc_len = cnstr_shdsc_blkcipher(
cdb->sh_desc, true,
swap, SHR_NEVER, &alginfo_c,
@@ -685,14 +515,12 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->dir);
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- alginfo_c.algtype = OP_ALG_ALGSEL_SNOW_F8;
shared_desc_len = cnstr_shdsc_snow_f8(
cdb->sh_desc, true, swap,
&alginfo_c,
ses->dir);
break;
case RTE_CRYPTO_CIPHER_ZUC_EEA3:
- alginfo_c.algtype = OP_ALG_ALGSEL_ZUCE;
shared_desc_len = cnstr_shdsc_zuce(
cdb->sh_desc, true, swap,
&alginfo_c,
@@ -703,69 +531,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->cipher_alg);
return -ENOTSUP;
}
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
+ alginfo_a.algtype = ses->auth_key.alg;
+ alginfo_a.algmode = ses->auth_key.algmode;
switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_NULL:
- alginfo_a.algtype = 0;
- ses->digest_length = 0;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_MD5;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA1;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA224;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA256;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA384;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA512;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
shared_desc_len = cnstr_shdsc_hmac(
cdb->sh_desc, true,
swap, SHR_NEVER, &alginfo_a,
@@ -773,9 +553,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->digest_length);
break;
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
- alginfo_a.algtype = OP_ALG_ALGSEL_SNOW_F9;
- alginfo_a.algmode = OP_ALG_AAI_F9;
- ses->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
shared_desc_len = cnstr_shdsc_snow_f9(
cdb->sh_desc, true, swap,
&alginfo_a,
@@ -783,9 +560,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->digest_length);
break;
case RTE_CRYPTO_AUTH_ZUC_EIA3:
- alginfo_a.algtype = OP_ALG_ALGSEL_ZUCA;
- alginfo_a.algmode = OP_ALG_AAI_F9;
- ses->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
shared_desc_len = cnstr_shdsc_zuca(
cdb->sh_desc, true, swap,
&alginfo_a,
@@ -795,8 +569,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
default:
DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
}
- } else if (is_aead(ses)) {
- caam_aead_alg(ses, &alginfo);
+ break;
+ case DPAA_SEC_AEAD:
if (alginfo.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
DPAA_SEC_ERR("not supported aead alg");
return -ENOTSUP;
@@ -805,6 +579,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
alginfo.keylen = ses->aead_key.length;
alginfo.key_enc_flags = 0;
alginfo.key_type = RTA_DATA_IMM;
+ alginfo.algtype = ses->aead_key.alg;
+ alginfo.algmode = ses->aead_key.algmode;
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_gcm_encap(
@@ -818,28 +594,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
&alginfo,
ses->iv.length,
ses->digest_length);
- } else {
- caam_cipher_alg(ses, &alginfo_c);
- if (alginfo_c.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
+ break;
+ case DPAA_SEC_CIPHER_HASH:
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
-
- caam_auth_alg(ses, &alginfo_a);
- if (alginfo_a.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
- return -ENOTSUP;
- }
+ alginfo_c.algtype = ses->cipher_key.alg;
+ alginfo_c.algmode = ses->cipher_key.algmode;
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
+ alginfo_a.algtype = ses->auth_key.alg;
+ alginfo_a.algmode = ses->auth_key.algmode;
cdb->sh_desc[0] = alginfo_c.keylen;
cdb->sh_desc[1] = alginfo_a.keylen;
@@ -876,6 +645,11 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
ses->iv.length,
ses->digest_length, ses->dir);
+ break;
+ case DPAA_SEC_HASH_CIPHER:
+ default:
+ DPAA_SEC_ERR("error: Unsupported session");
+ return -ENOTSUP;
}
if (shared_desc_len < 0) {
@@ -2053,18 +1827,22 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
((op->sym->m_dst == NULL) ||
rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
- if (is_proto_ipsec(ses)) {
- cf = build_proto(op, ses);
- } else if (is_proto_pdcp(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_PDCP:
+ case DPAA_SEC_IPSEC:
cf = build_proto(op, ses);
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
cf = build_auth_only(op, ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
cf = build_cipher_only(op, ses);
- } else if (is_aead(ses)) {
+ break;
+ case DPAA_SEC_AEAD:
cf = build_cipher_auth_gcm(op, ses);
auth_hdr_len = ses->auth_only_len;
- } else if (is_auth_cipher(ses)) {
+ break;
+ case DPAA_SEC_CIPHER_HASH:
auth_hdr_len =
op->sym->cipher.data.offset
- op->sym->auth.data.offset;
@@ -2073,23 +1851,30 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
- op->sym->cipher.data.length
- auth_hdr_len;
cf = build_cipher_auth(op, ses);
- } else {
+ break;
+ default:
DPAA_SEC_DP_ERR("not supported ops");
frames_to_send = loop;
nb_ops = loop;
goto send_pkts;
}
} else {
- if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_PDCP:
+ case DPAA_SEC_IPSEC:
cf = build_proto_sg(op, ses);
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
cf = build_auth_only_sg(op, ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
cf = build_cipher_only_sg(op, ses);
- } else if (is_aead(ses)) {
+ break;
+ case DPAA_SEC_AEAD:
cf = build_cipher_auth_gcm_sg(op, ses);
auth_hdr_len = ses->auth_only_len;
- } else if (is_auth_cipher(ses)) {
+ break;
+ case DPAA_SEC_CIPHER_HASH:
auth_hdr_len =
op->sym->cipher.data.offset
- op->sym->auth.data.offset;
@@ -2098,7 +1883,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
- op->sym->cipher.data.length
- auth_hdr_len;
cf = build_cipher_auth_sg(op, ses);
- } else {
+ break;
+ default:
DPAA_SEC_DP_ERR("not supported ops");
frames_to_send = loop;
nb_ops = loop;
@@ -2282,6 +2068,31 @@ dpaa_sec_cipher_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->cipher_key.data, xform->cipher.key.data,
xform->cipher.key.length);
+ switch (xform->cipher.algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ session->cipher_key.alg = OP_ALG_ALGSEL_SNOW_F8;
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ session->cipher_key.alg = OP_ALG_ALGSEL_ZUCE;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ xform->cipher.algo);
+ rte_free(session->cipher_key.data);
+ return -1;
+ }
session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
DIR_ENC : DIR_DEC;
@@ -2309,18 +2120,165 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->auth_key.data, xform->auth.key.data,
xform->auth.key.length);
+
+ switch (xform->auth.algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_MD5;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ session->auth_key.alg = OP_ALG_ALGSEL_SNOW_F9;
+ session->auth_key.algmode = OP_ALG_AAI_F9;
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ session->auth_key.alg = OP_ALG_ALGSEL_ZUCA;
+ session->auth_key.algmode = OP_ALG_AAI_F9;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
+ xform->auth.algo);
+ rte_free(session->auth_key.data);
+ return -1;
+ }
+
session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
DIR_ENC : DIR_DEC;
return 0;
}
+static int
+dpaa_sec_chain_init(struct rte_cryptodev *dev __rte_unused,
+ struct rte_crypto_sym_xform *xform,
+ dpaa_sec_session *session)
+{
+
+ struct rte_crypto_cipher_xform *cipher_xform;
+ struct rte_crypto_auth_xform *auth_xform;
+
+ if (session->auth_cipher_text) {
+ cipher_xform = &xform->cipher;
+ auth_xform = &xform->next->auth;
+ } else {
+ cipher_xform = &xform->next->cipher;
+ auth_xform = &xform->auth;
+ }
+
+ /* Set IV parameters */
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
+ session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->cipher_key.data == NULL && cipher_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for cipher key");
+ return -1;
+ }
+ session->cipher_key.length = cipher_xform->key.length;
+ session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->auth_key.data == NULL && auth_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ return -ENOMEM;
+ }
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->cipher_key.data, cipher_xform->key.data,
+ cipher_xform->key.length);
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+
+ session->digest_length = auth_xform->digest_length;
+ session->auth_alg = auth_xform->algo;
+
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_MD5;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
+ auth_xform->algo);
+ goto error_out;
+ }
+
+ session->cipher_alg = cipher_xform->algo;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ cipher_xform->algo);
+ goto error_out;
+ }
+ session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ DIR_ENC : DIR_DEC;
+ return 0;
+
+error_out:
+ rte_free(session->cipher_key.data);
+ rte_free(session->auth_key.data);
+ return -1;
+}
+
static int
dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_sym_xform *xform,
dpaa_sec_session *session)
{
session->aead_alg = xform->aead.algo;
+ session->ctxt = DPAA_SEC_AEAD;
session->iv.length = xform->aead.iv.length;
session->iv.offset = xform->aead.iv.offset;
session->auth_only_len = xform->aead.aad_length;
@@ -2335,6 +2293,18 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->aead_key.data, xform->aead.key.data,
xform->aead.key.length);
+
+ switch (session->aead_alg) {
+ case RTE_CRYPTO_AEAD_AES_GCM:
+ session->aead_key.alg = OP_ALG_ALGSEL_AES;
+ session->aead_key.algmode = OP_ALG_AAI_GCM;
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported AEAD alg %d", session->aead_alg);
+ rte_free(session->aead_key.data);
+ return -ENOMEM;
+ }
+
session->dir = (xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
DIR_ENC : DIR_DEC;
@@ -2422,31 +2392,34 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
/* Cipher Only */
if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ session->ctxt = DPAA_SEC_CIPHER;
dpaa_sec_cipher_init(dev, xform, session);
/* Authentication Only */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
xform->next == NULL) {
session->cipher_alg = RTE_CRYPTO_CIPHER_NULL;
+ session->ctxt = DPAA_SEC_AUTH;
dpaa_sec_auth_init(dev, xform, session);
/* Cipher then Authenticate */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
if (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
- dpaa_sec_cipher_init(dev, xform, session);
- dpaa_sec_auth_init(dev, xform->next, session);
+ session->ctxt = DPAA_SEC_CIPHER_HASH;
+ session->auth_cipher_text = 1;
+ dpaa_sec_chain_init(dev, xform, session);
} else {
DPAA_SEC_ERR("Not supported: Auth then Cipher");
return -EINVAL;
}
-
/* Authenticate then Cipher */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
if (xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
- dpaa_sec_auth_init(dev, xform, session);
- dpaa_sec_cipher_init(dev, xform->next, session);
+ session->ctxt = DPAA_SEC_CIPHER_HASH;
+ session->auth_cipher_text = 0;
+ dpaa_sec_chain_init(dev, xform, session);
} else {
DPAA_SEC_ERR("Not supported: Auth then Cipher");
return -EINVAL;
@@ -2574,6 +2547,7 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
cipher_xform = &conf->crypto_xform->next->cipher;
}
session->proto_alg = conf->protocol;
+ session->ctxt = DPAA_SEC_IPSEC;
if (cipher_xform && cipher_xform->algo != RTE_CRYPTO_CIPHER_NULL) {
session->cipher_key.data = rte_zmalloc(NULL,
@@ -2589,9 +2563,20 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
session->cipher_key.length = cipher_xform->key.length;
switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ session->cipher_key.alg = OP_PCL_IPSEC_NULL;
+ break;
case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_PCL_IPSEC_AES_CBC;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_PCL_IPSEC_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_PCL_IPSEC_AES_CTR;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Cipher alg %u",
@@ -2620,12 +2605,33 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
session->auth_key.length = auth_xform->key.length;
switch (auth_xform->algo) {
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ case RTE_CRYPTO_AUTH_NULL:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_NULL;
+ session->digest_length = 0;
+ break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_MD5_96;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_96;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_160;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_256_128;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_384_192;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
- case RTE_CRYPTO_AUTH_AES_CMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_512_256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
default:
DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
@@ -2766,7 +2772,28 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
session->proto_alg = conf->protocol;
+ session->ctxt = DPAA_SEC_PDCP;
+
if (cipher_xform) {
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_SNOW;
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_ZUC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_AES;
+ break;
+ case RTE_CRYPTO_CIPHER_NULL:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_NULL;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ session->cipher_alg);
+ return -1;
+ }
+
session->cipher_key.data = rte_zmalloc(NULL,
cipher_xform->key.length,
RTE_CACHE_LINE_SIZE);
@@ -2798,6 +2825,25 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
if (auth_xform) {
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ session->auth_key.alg = PDCP_AUTH_TYPE_SNOW;
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ session->auth_key.alg = PDCP_AUTH_TYPE_ZUC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ session->auth_key.alg = PDCP_AUTH_TYPE_AES;
+ break;
+ case RTE_CRYPTO_AUTH_NULL:
+ session->auth_key.alg = PDCP_AUTH_TYPE_NULL;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
+ session->auth_alg);
+ rte_free(session->cipher_key.data);
+ return -1;
+ }
session->auth_key.data = rte_zmalloc(NULL,
auth_xform->key.length,
RTE_CACHE_LINE_SIZE);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 149923aa1..a661d5a56 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -38,14 +38,19 @@ enum dpaa_sec_op_type {
DPAA_SEC_NONE, /*!< No Cipher operations*/
DPAA_SEC_CIPHER,/*!< CIPHER operations */
DPAA_SEC_AUTH, /*!< Authentication Operations */
- DPAA_SEC_AEAD, /*!< Authenticated Encryption with associated data */
+ DPAA_SEC_AEAD, /*!< AEAD (AES-GCM/CCM) type operations */
+ DPAA_SEC_CIPHER_HASH, /*!< Authenticated Encryption with
+ * associated data
+ */
+ DPAA_SEC_HASH_CIPHER, /*!< Encryption with Authenticated
+ * associated data
+ */
DPAA_SEC_IPSEC, /*!< IPSEC protocol operations*/
DPAA_SEC_PDCP, /*!< PDCP protocol operations*/
DPAA_SEC_PKC, /*!< Public Key Cryptographic Operations */
DPAA_SEC_MAX
};
-
#define DPAA_SEC_MAX_DESC_SIZE 64
/* code or cmd block to caam */
struct sec_cdb {
@@ -113,6 +118,7 @@ struct sec_pdcp_ctxt {
typedef struct dpaa_sec_session_entry {
uint8_t dir; /*!< Operation Direction */
+ uint8_t ctxt; /*!< Session Context Type */
enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
enum rte_crypto_aead_algorithm aead_alg; /*!< AEAD Algorithm*/
@@ -121,15 +127,21 @@ typedef struct dpaa_sec_session_entry {
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} aead_key;
struct {
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} cipher_key;
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} auth_key;
};
};
@@ -148,6 +160,8 @@ typedef struct dpaa_sec_session_entry {
struct ip ip4_hdr;
struct rte_ipv6_hdr ip6_hdr;
};
+ uint8_t auth_cipher_text;
+ /**< Authenticate/cipher ordering */
};
struct sec_pdcp_ctxt pdcp;
};
--
2.17.1
* Re: [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt
2019-10-11 16:32 ` [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
@ 2019-10-11 19:03 ` Aaron Conole
2019-10-14 4:57 ` Hemant Agrawal
0 siblings, 1 reply; 27+ messages in thread
From: Aaron Conole @ 2019-10-11 19:03 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev, akhil.goyal
Hemant Agrawal <hemant.agrawal@nxp.com> writes:
> The session-related parameters are now populated only at session
> create time. At runtime, on the first packet, the CDB preparation
> simply references the pre-resolved session data instead of
> re-interpreting the transforms again.
>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
As a part of this patch, a number of static functions are no longer
used, and should be removed (for example is_auth_only, is_cipher_only,
is_aead, is_auth_cipher, and is_proto_ipsec).
You will see this if you choose to build with clang; gcc sees that the
functions are marked static inline and doesn't seem to warn.
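[Editor's note, for illustration only and not from the thread: a minimal reproducer of the behaviour described above. With a recent clang, "clang -Wall -c unused.c" typically emits an -Wunused-function warning for the helper below; the exact behaviour may vary between compiler versions.]

/* unused.c: the static inline helper is defined but never called. */
static inline int is_never_called(int x)
{
	return x == 0;
}

/* A non-static function keeps the translation unit from being empty. */
int used(int x)
{
	return x + 1;
}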
> drivers/crypto/dpaa_sec/dpaa_sec.c | 612 ++++++++++++++++-------------
> drivers/crypto/dpaa_sec/dpaa_sec.h | 18 +-
> 2 files changed, 345 insertions(+), 285 deletions(-)
>
> diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
> index 970cdf0cc..b932bf1cb 100644
> --- a/drivers/crypto/dpaa_sec/dpaa_sec.c
> +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
> @@ -290,102 +290,6 @@ static inline int is_decode(dpaa_sec_session *ses)
> return ses->dir == DIR_DEC;
> }
>
> -static inline void
> -caam_auth_alg(dpaa_sec_session *ses, struct alginfo *alginfo_a)
> -{
> - switch (ses->auth_alg) {
> - case RTE_CRYPTO_AUTH_NULL:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_NULL : 0;
> - ses->digest_length = 0;
> - break;
> - case RTE_CRYPTO_AUTH_MD5_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_MD5_96 : OP_ALG_ALGSEL_MD5;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - case RTE_CRYPTO_AUTH_SHA1_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_SHA1_96 : OP_ALG_ALGSEL_SHA1;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - case RTE_CRYPTO_AUTH_SHA224_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_SHA1_160 : OP_ALG_ALGSEL_SHA224;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - case RTE_CRYPTO_AUTH_SHA256_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_SHA2_256_128 : OP_ALG_ALGSEL_SHA256;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - case RTE_CRYPTO_AUTH_SHA384_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_SHA2_384_192 : OP_ALG_ALGSEL_SHA384;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - case RTE_CRYPTO_AUTH_SHA512_HMAC:
> - alginfo_a->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_HMAC_SHA2_512_256 : OP_ALG_ALGSEL_SHA512;
> - alginfo_a->algmode = OP_ALG_AAI_HMAC;
> - break;
> - default:
> - DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
> - }
> -}
> -
> -static inline void
> -caam_cipher_alg(dpaa_sec_session *ses, struct alginfo *alginfo_c)
> -{
> - switch (ses->cipher_alg) {
> - case RTE_CRYPTO_CIPHER_NULL:
> - alginfo_c->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_NULL : 0;
> - break;
> - case RTE_CRYPTO_CIPHER_AES_CBC:
> - alginfo_c->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_AES_CBC : OP_ALG_ALGSEL_AES;
> - alginfo_c->algmode = OP_ALG_AAI_CBC;
> - break;
> - case RTE_CRYPTO_CIPHER_3DES_CBC:
> - alginfo_c->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_3DES : OP_ALG_ALGSEL_3DES;
> - alginfo_c->algmode = OP_ALG_AAI_CBC;
> - break;
> - case RTE_CRYPTO_CIPHER_AES_CTR:
> - alginfo_c->algtype =
> - (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
> - OP_PCL_IPSEC_AES_CTR : OP_ALG_ALGSEL_AES;
> - alginfo_c->algmode = OP_ALG_AAI_CTR;
> - break;
> - default:
> - DPAA_SEC_ERR("unsupported cipher alg %d", ses->cipher_alg);
> - }
> -}
> -
> -static inline void
> -caam_aead_alg(dpaa_sec_session *ses, struct alginfo *alginfo)
> -{
> - switch (ses->aead_alg) {
> - case RTE_CRYPTO_AEAD_AES_GCM:
> - alginfo->algtype = OP_ALG_ALGSEL_AES;
> - alginfo->algmode = OP_ALG_AAI_GCM;
> - break;
> - default:
> - DPAA_SEC_ERR("unsupported AEAD alg %d", ses->aead_alg);
> - }
> -}
> -
> static int
> dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
> {
> @@ -400,58 +304,24 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
> int swap = true;
> #endif
>
> - switch (ses->cipher_alg) {
> - case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
> - cipherdata.algtype = PDCP_CIPHER_TYPE_SNOW;
> - break;
> - case RTE_CRYPTO_CIPHER_ZUC_EEA3:
> - cipherdata.algtype = PDCP_CIPHER_TYPE_ZUC;
> - break;
> - case RTE_CRYPTO_CIPHER_AES_CTR:
> - cipherdata.algtype = PDCP_CIPHER_TYPE_AES;
> - break;
> - case RTE_CRYPTO_CIPHER_NULL:
> - cipherdata.algtype = PDCP_CIPHER_TYPE_NULL;
> - break;
> - default:
> - DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
> - ses->cipher_alg);
> - return -1;
> - }
> -
> cipherdata.key = (size_t)ses->cipher_key.data;
> cipherdata.keylen = ses->cipher_key.length;
> cipherdata.key_enc_flags = 0;
> cipherdata.key_type = RTA_DATA_IMM;
> + cipherdata.algtype = ses->cipher_key.alg;
> + cipherdata.algmode = ses->cipher_key.algmode;
>
> cdb->sh_desc[0] = cipherdata.keylen;
> cdb->sh_desc[1] = 0;
> cdb->sh_desc[2] = 0;
>
> if (ses->auth_alg) {
> - switch (ses->auth_alg) {
> - case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
> - authdata.algtype = PDCP_AUTH_TYPE_SNOW;
> - break;
> - case RTE_CRYPTO_AUTH_ZUC_EIA3:
> - authdata.algtype = PDCP_AUTH_TYPE_ZUC;
> - break;
> - case RTE_CRYPTO_AUTH_AES_CMAC:
> - authdata.algtype = PDCP_AUTH_TYPE_AES;
> - break;
> - case RTE_CRYPTO_AUTH_NULL:
> - authdata.algtype = PDCP_AUTH_TYPE_NULL;
> - break;
> - default:
> - DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
> - ses->auth_alg);
> - return -1;
> - }
> -
> authdata.key = (size_t)ses->auth_key.data;
> authdata.keylen = ses->auth_key.length;
> authdata.key_enc_flags = 0;
> authdata.key_type = RTA_DATA_IMM;
> + authdata.algtype = ses->auth_key.alg;
> + authdata.algmode = ses->auth_key.algmode;
>
> p_authdata = &authdata;
>
> @@ -541,27 +411,19 @@ dpaa_sec_prep_ipsec_cdb(dpaa_sec_session *ses)
> int swap = true;
> #endif
>
> - caam_cipher_alg(ses, &cipherdata);
> - if (cipherdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
> - DPAA_SEC_ERR("not supported cipher alg");
> - return -ENOTSUP;
> - }
> -
> cipherdata.key = (size_t)ses->cipher_key.data;
> cipherdata.keylen = ses->cipher_key.length;
> cipherdata.key_enc_flags = 0;
> cipherdata.key_type = RTA_DATA_IMM;
> -
> - caam_auth_alg(ses, &authdata);
> - if (authdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
> - DPAA_SEC_ERR("not supported auth alg");
> - return -ENOTSUP;
> - }
> + cipherdata.algtype = ses->cipher_key.alg;
> + cipherdata.algmode = ses->cipher_key.algmode;
>
> authdata.key = (size_t)ses->auth_key.data;
> authdata.keylen = ses->auth_key.length;
> authdata.key_enc_flags = 0;
> authdata.key_type = RTA_DATA_IMM;
> + authdata.algtype = ses->auth_key.alg;
> + authdata.algmode = ses->auth_key.algmode;
>
> cdb->sh_desc[0] = cipherdata.keylen;
> cdb->sh_desc[1] = authdata.keylen;
> @@ -625,58 +487,26 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
>
> memset(cdb, 0, sizeof(struct sec_cdb));
>
> - if (is_proto_ipsec(ses)) {
> + switch (ses->ctxt) {
> + case DPAA_SEC_IPSEC:
> shared_desc_len = dpaa_sec_prep_ipsec_cdb(ses);
> - } else if (is_proto_pdcp(ses)) {
> + break;
> + case DPAA_SEC_PDCP:
> shared_desc_len = dpaa_sec_prep_pdcp_cdb(ses);
> - } else if (is_cipher_only(ses)) {
> + break;
> + case DPAA_SEC_CIPHER:
> alginfo_c.key = (size_t)ses->cipher_key.data;
> alginfo_c.keylen = ses->cipher_key.length;
> alginfo_c.key_enc_flags = 0;
> alginfo_c.key_type = RTA_DATA_IMM;
> + alginfo_c.algtype = ses->cipher_key.alg;
> + alginfo_c.algmode = ses->cipher_key.algmode;
> +
> switch (ses->cipher_alg) {
> - case RTE_CRYPTO_CIPHER_NULL:
> - alginfo_c.algtype = 0;
> - shared_desc_len = cnstr_shdsc_blkcipher(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_c,
> - NULL,
> - ses->iv.length,
> - ses->dir);
> - break;
> case RTE_CRYPTO_CIPHER_AES_CBC:
> - alginfo_c.algtype = OP_ALG_ALGSEL_AES;
> - alginfo_c.algmode = OP_ALG_AAI_CBC;
> - shared_desc_len = cnstr_shdsc_blkcipher(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_c,
> - NULL,
> - ses->iv.length,
> - ses->dir);
> - break;
> case RTE_CRYPTO_CIPHER_3DES_CBC:
> - alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
> - alginfo_c.algmode = OP_ALG_AAI_CBC;
> - shared_desc_len = cnstr_shdsc_blkcipher(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_c,
> - NULL,
> - ses->iv.length,
> - ses->dir);
> - break;
> case RTE_CRYPTO_CIPHER_AES_CTR:
> - alginfo_c.algtype = OP_ALG_ALGSEL_AES;
> - alginfo_c.algmode = OP_ALG_AAI_CTR;
> - shared_desc_len = cnstr_shdsc_blkcipher(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_c,
> - NULL,
> - ses->iv.length,
> - ses->dir);
> - break;
> case RTE_CRYPTO_CIPHER_3DES_CTR:
> - alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
> - alginfo_c.algmode = OP_ALG_AAI_CTR;
> shared_desc_len = cnstr_shdsc_blkcipher(
> cdb->sh_desc, true,
> swap, SHR_NEVER, &alginfo_c,
> @@ -685,14 +515,12 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> ses->dir);
> break;
> case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
> - alginfo_c.algtype = OP_ALG_ALGSEL_SNOW_F8;
> shared_desc_len = cnstr_shdsc_snow_f8(
> cdb->sh_desc, true, swap,
> &alginfo_c,
> ses->dir);
> break;
> case RTE_CRYPTO_CIPHER_ZUC_EEA3:
> - alginfo_c.algtype = OP_ALG_ALGSEL_ZUCE;
> shared_desc_len = cnstr_shdsc_zuce(
> cdb->sh_desc, true, swap,
> &alginfo_c,
> @@ -703,69 +531,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> ses->cipher_alg);
> return -ENOTSUP;
> }
> - } else if (is_auth_only(ses)) {
> + break;
> + case DPAA_SEC_AUTH:
> alginfo_a.key = (size_t)ses->auth_key.data;
> alginfo_a.keylen = ses->auth_key.length;
> alginfo_a.key_enc_flags = 0;
> alginfo_a.key_type = RTA_DATA_IMM;
> + alginfo_a.algtype = ses->auth_key.alg;
> + alginfo_a.algmode = ses->auth_key.algmode;
> switch (ses->auth_alg) {
> - case RTE_CRYPTO_AUTH_NULL:
> - alginfo_a.algtype = 0;
> - ses->digest_length = 0;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_MD5_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_MD5;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_SHA1_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SHA1;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_SHA224_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SHA224;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_SHA256_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SHA256;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_SHA384_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SHA384;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> - shared_desc_len = cnstr_shdsc_hmac(
> - cdb->sh_desc, true,
> - swap, SHR_NEVER, &alginfo_a,
> - !ses->dir,
> - ses->digest_length);
> - break;
> case RTE_CRYPTO_AUTH_SHA512_HMAC:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SHA512;
> - alginfo_a.algmode = OP_ALG_AAI_HMAC;
> shared_desc_len = cnstr_shdsc_hmac(
> cdb->sh_desc, true,
> swap, SHR_NEVER, &alginfo_a,
> @@ -773,9 +553,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> ses->digest_length);
> break;
> case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
> - alginfo_a.algtype = OP_ALG_ALGSEL_SNOW_F9;
> - alginfo_a.algmode = OP_ALG_AAI_F9;
> - ses->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
> shared_desc_len = cnstr_shdsc_snow_f9(
> cdb->sh_desc, true, swap,
> &alginfo_a,
> @@ -783,9 +560,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> ses->digest_length);
> break;
> case RTE_CRYPTO_AUTH_ZUC_EIA3:
> - alginfo_a.algtype = OP_ALG_ALGSEL_ZUCA;
> - alginfo_a.algmode = OP_ALG_AAI_F9;
> - ses->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
> shared_desc_len = cnstr_shdsc_zuca(
> cdb->sh_desc, true, swap,
> &alginfo_a,
> @@ -795,8 +569,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> default:
> DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
> }
> - } else if (is_aead(ses)) {
> - caam_aead_alg(ses, &alginfo);
> + break;
> + case DPAA_SEC_AEAD:
> if (alginfo.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
> DPAA_SEC_ERR("not supported aead alg");
> return -ENOTSUP;
> @@ -805,6 +579,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> alginfo.keylen = ses->aead_key.length;
> alginfo.key_enc_flags = 0;
> alginfo.key_type = RTA_DATA_IMM;
> + alginfo.algtype = ses->aead_key.alg;
> + alginfo.algmode = ses->aead_key.algmode;
>
> if (ses->dir == DIR_ENC)
> shared_desc_len = cnstr_shdsc_gcm_encap(
> @@ -818,28 +594,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> &alginfo,
> ses->iv.length,
> ses->digest_length);
> - } else {
> - caam_cipher_alg(ses, &alginfo_c);
> - if (alginfo_c.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
> - DPAA_SEC_ERR("not supported cipher alg");
> - return -ENOTSUP;
> - }
> -
> + break;
> + case DPAA_SEC_CIPHER_HASH:
> alginfo_c.key = (size_t)ses->cipher_key.data;
> alginfo_c.keylen = ses->cipher_key.length;
> alginfo_c.key_enc_flags = 0;
> alginfo_c.key_type = RTA_DATA_IMM;
> -
> - caam_auth_alg(ses, &alginfo_a);
> - if (alginfo_a.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
> - DPAA_SEC_ERR("not supported auth alg");
> - return -ENOTSUP;
> - }
> + alginfo_c.algtype = ses->cipher_key.alg;
> + alginfo_c.algmode = ses->cipher_key.algmode;
>
> alginfo_a.key = (size_t)ses->auth_key.data;
> alginfo_a.keylen = ses->auth_key.length;
> alginfo_a.key_enc_flags = 0;
> alginfo_a.key_type = RTA_DATA_IMM;
> + alginfo_a.algtype = ses->auth_key.alg;
> + alginfo_a.algmode = ses->auth_key.algmode;
>
> cdb->sh_desc[0] = alginfo_c.keylen;
> cdb->sh_desc[1] = alginfo_a.keylen;
> @@ -876,6 +645,11 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
> true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
> ses->iv.length,
> ses->digest_length, ses->dir);
> + break;
> + case DPAA_SEC_HASH_CIPHER:
> + default:
> + DPAA_SEC_ERR("error: Unsupported session");
> + return -ENOTSUP;
> }
>
> if (shared_desc_len < 0) {
> @@ -2053,18 +1827,22 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
> if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
> ((op->sym->m_dst == NULL) ||
> rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
> - if (is_proto_ipsec(ses)) {
> - cf = build_proto(op, ses);
> - } else if (is_proto_pdcp(ses)) {
> + switch (ses->ctxt) {
> + case DPAA_SEC_PDCP:
> + case DPAA_SEC_IPSEC:
> cf = build_proto(op, ses);
> - } else if (is_auth_only(ses)) {
> + break;
> + case DPAA_SEC_AUTH:
> cf = build_auth_only(op, ses);
> - } else if (is_cipher_only(ses)) {
> + break;
> + case DPAA_SEC_CIPHER:
> cf = build_cipher_only(op, ses);
> - } else if (is_aead(ses)) {
> + break;
> + case DPAA_SEC_AEAD:
> cf = build_cipher_auth_gcm(op, ses);
> auth_hdr_len = ses->auth_only_len;
> - } else if (is_auth_cipher(ses)) {
> + break;
> + case DPAA_SEC_CIPHER_HASH:
> auth_hdr_len =
> op->sym->cipher.data.offset
> - op->sym->auth.data.offset;
> @@ -2073,23 +1851,30 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
> - op->sym->cipher.data.length
> - auth_hdr_len;
> cf = build_cipher_auth(op, ses);
> - } else {
> + break;
> + default:
> DPAA_SEC_DP_ERR("not supported ops");
> frames_to_send = loop;
> nb_ops = loop;
> goto send_pkts;
> }
> } else {
> - if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
> + switch (ses->ctxt) {
> + case DPAA_SEC_PDCP:
> + case DPAA_SEC_IPSEC:
> cf = build_proto_sg(op, ses);
> - } else if (is_auth_only(ses)) {
> + break;
> + case DPAA_SEC_AUTH:
> cf = build_auth_only_sg(op, ses);
> - } else if (is_cipher_only(ses)) {
> + break;
> + case DPAA_SEC_CIPHER:
> cf = build_cipher_only_sg(op, ses);
> - } else if (is_aead(ses)) {
> + break;
> + case DPAA_SEC_AEAD:
> cf = build_cipher_auth_gcm_sg(op, ses);
> auth_hdr_len = ses->auth_only_len;
> - } else if (is_auth_cipher(ses)) {
> + break;
> + case DPAA_SEC_CIPHER_HASH:
> auth_hdr_len =
> op->sym->cipher.data.offset
> - op->sym->auth.data.offset;
> @@ -2098,7 +1883,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
> - op->sym->cipher.data.length
> - auth_hdr_len;
> cf = build_cipher_auth_sg(op, ses);
> - } else {
> + break;
> + default:
> DPAA_SEC_DP_ERR("not supported ops");
> frames_to_send = loop;
> nb_ops = loop;
> @@ -2282,6 +2068,31 @@ dpaa_sec_cipher_init(struct rte_cryptodev *dev __rte_unused,
>
> memcpy(session->cipher_key.data, xform->cipher.key.data,
> xform->cipher.key.length);
> + switch (xform->cipher.algo) {
> + case RTE_CRYPTO_CIPHER_AES_CBC:
> + session->cipher_key.alg = OP_ALG_ALGSEL_AES;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> + case RTE_CRYPTO_CIPHER_3DES_CBC:
> + session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> + case RTE_CRYPTO_CIPHER_AES_CTR:
> + session->cipher_key.alg = OP_ALG_ALGSEL_AES;
> + session->cipher_key.algmode = OP_ALG_AAI_CTR;
> + break;
> + case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
> + session->cipher_key.alg = OP_ALG_ALGSEL_SNOW_F8;
> + break;
> + case RTE_CRYPTO_CIPHER_ZUC_EEA3:
> + session->cipher_key.alg = OP_ALG_ALGSEL_ZUCE;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
> + xform->cipher.algo);
> + rte_free(session->cipher_key.data);
> + return -1;
> + }
> session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
> DIR_ENC : DIR_DEC;
>
> @@ -2309,18 +2120,165 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev __rte_unused,
>
> memcpy(session->auth_key.data, xform->auth.key.data,
> xform->auth.key.length);
> +
> + switch (xform->auth.algo) {
> + case RTE_CRYPTO_AUTH_SHA1_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_MD5_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_MD5;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA224_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA256_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA384_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA512_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
> + session->auth_key.alg = OP_ALG_ALGSEL_SNOW_F9;
> + session->auth_key.algmode = OP_ALG_AAI_F9;
> + break;
> + case RTE_CRYPTO_AUTH_ZUC_EIA3:
> + session->auth_key.alg = OP_ALG_ALGSEL_ZUCA;
> + session->auth_key.algmode = OP_ALG_AAI_F9;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
> + xform->auth.algo);
> + rte_free(session->auth_key.data);
> + return -1;
> + }
> +
> session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
> DIR_ENC : DIR_DEC;
>
> return 0;
> }
>
> +static int
> +dpaa_sec_chain_init(struct rte_cryptodev *dev __rte_unused,
> + struct rte_crypto_sym_xform *xform,
> + dpaa_sec_session *session)
> +{
> +
> + struct rte_crypto_cipher_xform *cipher_xform;
> + struct rte_crypto_auth_xform *auth_xform;
> +
> + if (session->auth_cipher_text) {
> + cipher_xform = &xform->cipher;
> + auth_xform = &xform->next->auth;
> + } else {
> + cipher_xform = &xform->next->cipher;
> + auth_xform = &xform->auth;
> + }
> +
> + /* Set IV parameters */
> + session->iv.offset = cipher_xform->iv.offset;
> + session->iv.length = cipher_xform->iv.length;
> +
> + session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
> + RTE_CACHE_LINE_SIZE);
> + if (session->cipher_key.data == NULL && cipher_xform->key.length > 0) {
> + DPAA_SEC_ERR("No Memory for cipher key");
> + return -1;
> + }
> + session->cipher_key.length = cipher_xform->key.length;
> + session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
> + RTE_CACHE_LINE_SIZE);
> + if (session->auth_key.data == NULL && auth_xform->key.length > 0) {
> + DPAA_SEC_ERR("No Memory for auth key");
> + rte_free(session->cipher_key.data);
> + return -ENOMEM;
> + }
> + session->auth_key.length = auth_xform->key.length;
> + memcpy(session->cipher_key.data, cipher_xform->key.data,
> + cipher_xform->key.length);
> + memcpy(session->auth_key.data, auth_xform->key.data,
> + auth_xform->key.length);
> +
> + session->digest_length = auth_xform->digest_length;
> + session->auth_alg = auth_xform->algo;
> +
> + switch (auth_xform->algo) {
> + case RTE_CRYPTO_AUTH_SHA1_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_MD5_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_MD5;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA224_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA256_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA384_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA512_HMAC:
> + session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
> + auth_xform->algo);
> + goto error_out;
> + }
> +
> + session->cipher_alg = cipher_xform->algo;
> +
> + switch (cipher_xform->algo) {
> + case RTE_CRYPTO_CIPHER_AES_CBC:
> + session->cipher_key.alg = OP_ALG_ALGSEL_AES;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> + case RTE_CRYPTO_CIPHER_3DES_CBC:
> + session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> + case RTE_CRYPTO_CIPHER_AES_CTR:
> + session->cipher_key.alg = OP_ALG_ALGSEL_AES;
> + session->cipher_key.algmode = OP_ALG_AAI_CTR;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
> + cipher_xform->algo);
> + goto error_out;
> + }
> + session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
> + DIR_ENC : DIR_DEC;
> + return 0;
> +
> +error_out:
> + rte_free(session->cipher_key.data);
> + rte_free(session->auth_key.data);
> + return -1;
> +}
> +
> static int
> dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
> struct rte_crypto_sym_xform *xform,
> dpaa_sec_session *session)
> {
> session->aead_alg = xform->aead.algo;
> + session->ctxt = DPAA_SEC_AEAD;
> session->iv.length = xform->aead.iv.length;
> session->iv.offset = xform->aead.iv.offset;
> session->auth_only_len = xform->aead.aad_length;
> @@ -2335,6 +2293,18 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
>
> memcpy(session->aead_key.data, xform->aead.key.data,
> xform->aead.key.length);
> +
> + switch (session->aead_alg) {
> + case RTE_CRYPTO_AEAD_AES_GCM:
> + session->aead_key.alg = OP_ALG_ALGSEL_AES;
> + session->aead_key.algmode = OP_ALG_AAI_GCM;
> + break;
> + default:
> + DPAA_SEC_ERR("unsupported AEAD alg %d", session->aead_alg);
> + rte_free(session->aead_key.data);
> + return -ENOMEM;
> + }
> +
> session->dir = (xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
> DIR_ENC : DIR_DEC;
>
> @@ -2422,31 +2392,34 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
> /* Cipher Only */
> if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
> session->auth_alg = RTE_CRYPTO_AUTH_NULL;
> + session->ctxt = DPAA_SEC_CIPHER;
> dpaa_sec_cipher_init(dev, xform, session);
>
> /* Authentication Only */
> } else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
> xform->next == NULL) {
> session->cipher_alg = RTE_CRYPTO_CIPHER_NULL;
> + session->ctxt = DPAA_SEC_AUTH;
> dpaa_sec_auth_init(dev, xform, session);
>
> /* Cipher then Authenticate */
> } else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
> xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> if (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
> - dpaa_sec_cipher_init(dev, xform, session);
> - dpaa_sec_auth_init(dev, xform->next, session);
> + session->ctxt = DPAA_SEC_CIPHER_HASH;
> + session->auth_cipher_text = 1;
> + dpaa_sec_chain_init(dev, xform, session);
> } else {
> DPAA_SEC_ERR("Not supported: Auth then Cipher");
> return -EINVAL;
> }
> -
> /* Authenticate then Cipher */
> } else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
> xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> if (xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
> - dpaa_sec_auth_init(dev, xform, session);
> - dpaa_sec_cipher_init(dev, xform->next, session);
> + session->ctxt = DPAA_SEC_CIPHER_HASH;
> + session->auth_cipher_text = 0;
> + dpaa_sec_chain_init(dev, xform, session);
> } else {
> DPAA_SEC_ERR("Not supported: Auth then Cipher");
> return -EINVAL;
> @@ -2574,6 +2547,7 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
> cipher_xform = &conf->crypto_xform->next->cipher;
> }
> session->proto_alg = conf->protocol;
> + session->ctxt = DPAA_SEC_IPSEC;
>
> if (cipher_xform && cipher_xform->algo != RTE_CRYPTO_CIPHER_NULL) {
> session->cipher_key.data = rte_zmalloc(NULL,
> @@ -2589,9 +2563,20 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
> session->cipher_key.length = cipher_xform->key.length;
>
> switch (cipher_xform->algo) {
> + case RTE_CRYPTO_CIPHER_NULL:
> + session->cipher_key.alg = OP_PCL_IPSEC_NULL;
> + break;
> case RTE_CRYPTO_CIPHER_AES_CBC:
> + session->cipher_key.alg = OP_PCL_IPSEC_AES_CBC;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> case RTE_CRYPTO_CIPHER_3DES_CBC:
> + session->cipher_key.alg = OP_PCL_IPSEC_3DES;
> + session->cipher_key.algmode = OP_ALG_AAI_CBC;
> + break;
> case RTE_CRYPTO_CIPHER_AES_CTR:
> + session->cipher_key.alg = OP_PCL_IPSEC_AES_CTR;
> + session->cipher_key.algmode = OP_ALG_AAI_CTR;
> break;
> default:
> DPAA_SEC_ERR("Crypto: Unsupported Cipher alg %u",
> @@ -2620,12 +2605,33 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
> session->auth_key.length = auth_xform->key.length;
>
> switch (auth_xform->algo) {
> - case RTE_CRYPTO_AUTH_SHA1_HMAC:
> + case RTE_CRYPTO_AUTH_NULL:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_NULL;
> + session->digest_length = 0;
> + break;
> case RTE_CRYPTO_AUTH_MD5_HMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_MD5_96;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA1_HMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_96;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> + case RTE_CRYPTO_AUTH_SHA224_HMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_160;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> case RTE_CRYPTO_AUTH_SHA256_HMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_256_128;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> case RTE_CRYPTO_AUTH_SHA384_HMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_384_192;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> + break;
> case RTE_CRYPTO_AUTH_SHA512_HMAC:
> - case RTE_CRYPTO_AUTH_AES_CMAC:
> + session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_512_256;
> + session->auth_key.algmode = OP_ALG_AAI_HMAC;
> break;
> default:
> DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
> @@ -2766,7 +2772,28 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
> }
>
> session->proto_alg = conf->protocol;
> + session->ctxt = DPAA_SEC_PDCP;
> +
> if (cipher_xform) {
> + switch (cipher_xform->algo) {
> + case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
> + session->cipher_key.alg = PDCP_CIPHER_TYPE_SNOW;
> + break;
> + case RTE_CRYPTO_CIPHER_ZUC_EEA3:
> + session->cipher_key.alg = PDCP_CIPHER_TYPE_ZUC;
> + break;
> + case RTE_CRYPTO_CIPHER_AES_CTR:
> + session->cipher_key.alg = PDCP_CIPHER_TYPE_AES;
> + break;
> + case RTE_CRYPTO_CIPHER_NULL:
> + session->cipher_key.alg = PDCP_CIPHER_TYPE_NULL;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
> + session->cipher_alg);
> + return -1;
> + }
> +
> session->cipher_key.data = rte_zmalloc(NULL,
> cipher_xform->key.length,
> RTE_CACHE_LINE_SIZE);
> @@ -2798,6 +2825,25 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
> }
>
> if (auth_xform) {
> + switch (auth_xform->algo) {
> + case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
> + session->auth_key.alg = PDCP_AUTH_TYPE_SNOW;
> + break;
> + case RTE_CRYPTO_AUTH_ZUC_EIA3:
> + session->auth_key.alg = PDCP_AUTH_TYPE_ZUC;
> + break;
> + case RTE_CRYPTO_AUTH_AES_CMAC:
> + session->auth_key.alg = PDCP_AUTH_TYPE_AES;
> + break;
> + case RTE_CRYPTO_AUTH_NULL:
> + session->auth_key.alg = PDCP_AUTH_TYPE_NULL;
> + break;
> + default:
> + DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
> + session->auth_alg);
> + rte_free(session->cipher_key.data);
> + return -1;
> + }
> session->auth_key.data = rte_zmalloc(NULL,
> auth_xform->key.length,
> RTE_CACHE_LINE_SIZE);
> diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
> index 149923aa1..a661d5a56 100644
> --- a/drivers/crypto/dpaa_sec/dpaa_sec.h
> +++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
> @@ -38,14 +38,19 @@ enum dpaa_sec_op_type {
> DPAA_SEC_NONE, /*!< No Cipher operations*/
> DPAA_SEC_CIPHER,/*!< CIPHER operations */
> DPAA_SEC_AUTH, /*!< Authentication Operations */
> - DPAA_SEC_AEAD, /*!< Authenticated Encryption with associated data */
> + DPAA_SEC_AEAD, /*!< AEAD (AES-GCM/CCM) type operations */
> + DPAA_SEC_CIPHER_HASH, /*!< Authenticated Encryption with
> + * associated data
> + */
> + DPAA_SEC_HASH_CIPHER, /*!< Encryption with Authenticated
> + * associated data
> + */
> DPAA_SEC_IPSEC, /*!< IPSEC protocol operations*/
> DPAA_SEC_PDCP, /*!< PDCP protocol operations*/
> DPAA_SEC_PKC, /*!< Public Key Cryptographic Operations */
> DPAA_SEC_MAX
> };
>
> -
> #define DPAA_SEC_MAX_DESC_SIZE 64
> /* code or cmd block to caam */
> struct sec_cdb {
> @@ -113,6 +118,7 @@ struct sec_pdcp_ctxt {
>
> typedef struct dpaa_sec_session_entry {
> uint8_t dir; /*!< Operation Direction */
> + uint8_t ctxt; /*!< Session Context Type */
> enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
> enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
> enum rte_crypto_aead_algorithm aead_alg; /*!< AEAD Algorithm*/
> @@ -121,15 +127,21 @@ typedef struct dpaa_sec_session_entry {
> struct {
> uint8_t *data; /**< pointer to key data */
> size_t length; /**< key length in bytes */
> + uint32_t alg;
> + uint32_t algmode;
> } aead_key;
> struct {
> struct {
> uint8_t *data; /**< pointer to key data */
> size_t length; /**< key length in bytes */
> + uint32_t alg;
> + uint32_t algmode;
> } cipher_key;
> struct {
> uint8_t *data; /**< pointer to key data */
> size_t length; /**< key length in bytes */
> + uint32_t alg;
> + uint32_t algmode;
> } auth_key;
> };
> };
> @@ -148,6 +160,8 @@ typedef struct dpaa_sec_session_entry {
> struct ip ip4_hdr;
> struct rte_ipv6_hdr ip6_hdr;
> };
> + uint8_t auth_cipher_text;
> + /**< Authenticate/cipher ordering */
> };
> struct sec_pdcp_ctxt pdcp;
> };
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt
2019-10-11 19:03 ` Aaron Conole
@ 2019-10-14 4:57 ` Hemant Agrawal
0 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 4:57 UTC (permalink / raw)
To: Aaron Conole; +Cc: dev, Akhil Goyal
Hi Aaron,
Thanks!
I will address these comments.
Regards,
Hemant
> -----Original Message-----
> From: Aaron Conole <aconole@redhat.com>
> Sent: Saturday, October 12, 2019 12:34 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>
> Cc: dev@dpdk.org; Akhil Goyal <akhil.goyal@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for
> better session mgmt
>
> Hemant Agrawal <hemant.agrawal@nxp.com> writes:
>
> > The session-related parameters shall be populated at session create
> > time only.
> > At runtime, on the first packet, the CDB should just reference the
> > session data instead of re-interpreting it again.
> >
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
>
> As a part of this patch, a number of static functions are no longer used, and
> should be removed (for example is_auth_only, is_cipher_only, is_aead,
> is_auth_cipher, and is_proto_ipsec).
>
> You will see this if you choose to build with clang. gcc sees the functions
> marked as static inline, and doesn't seem to warn.
>
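For reference, a minimal sketch of the behaviour described above (the function names are illustrative, not dpaa_sec symbols): with -Wall, clang's -Wunused-function flags an unused static inline helper defined in a .c file, whereas gcc only applies that warning to non-inline statics and so stays silent.

/* unused_example.c (hypothetical file)
 * clang -Wall -c unused_example.c -> warning: unused function 'is_example_only'
 * gcc   -Wall -c unused_example.c -> no warning (inline statics are exempt)
 */
static inline int
is_example_only(int x)		/* never referenced anywhere below */
{
	return x == 0;
}

int
some_referenced_symbol(void)	/* keeps the translation unit non-trivial */
{
	return 1;
}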
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (9 preceding siblings ...)
2019-10-11 16:32 ` [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 01/10] test/crypto: fix PDCP test support Hemant Agrawal
` (10 more replies)
10 siblings, 11 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal
This patch series largely contains:
1. fixes in crypto drivers
2. support for ESN-like cases
3. enabling SNOW 3G/ZUC for dpaa_sec
v2: fix the clang compilation errors in 10/10
Hemant Agrawal (7):
test/crypto: fix PDCP test support
crypto/dpaa2_sec: fix ipv6 support
test/crypto: increase test cases support for dpaax
test/crypto: add test to test ESN like case
crypto/dpaa_sec: add support for snow3G and ZUC
test/crypto: enable snow3G and zuc cases for dpaa
crypto/dpaa_sec: code reorg for better session mgmt
Vakul Garg (3):
crypto/dpaa_sec: fix to check for aead as well
crypto/dpaa2_sec: enhance gcm descs to not skip aadt
crypto/dpaa2_sec: add support of auth trailer in cipher-auth
app/test/test_cryptodev.c | 483 ++++++++++-
app/test/test_cryptodev_aes_test_vectors.h | 67 ++
doc/guides/cryptodevs/dpaa_sec.rst | 4 +
doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
drivers/crypto/caam_jr/caam_jr.c | 24 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 47 +-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 -
drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++--
drivers/crypto/dpaa_sec/dpaa_sec.c | 885 +++++++++++++-------
drivers/crypto/dpaa_sec/dpaa_sec.h | 109 ++-
10 files changed, 1320 insertions(+), 480 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 01/10] test/crypto: fix PDCP test support
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
` (9 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
Use session_priv_mpool instead of session_mpool.
Fixes: d883e6e7131b ("test/crypto: add PDCP C-Plane encap cases")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ffed298fd..879b31ceb 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -7144,7 +7144,7 @@ test_pdcp_proto(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_mpool);
+ &sess_conf, ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
@@ -7393,7 +7393,7 @@ test_pdcp_proto_SGL(int i, int oop,
/* Create security session */
ut_params->sec_session = rte_security_session_create(ctx,
- &sess_conf, ts_params->session_mpool);
+ &sess_conf, ts_params->session_priv_mpool);
if (!ut_params->sec_session) {
printf("TestCase %s()-%d line %d failed %s: ",
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 02/10] crypto/dpaa2_sec: fix ipv6 support
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 01/10] test/crypto: fix PDCP test support Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
` (8 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
The HW decap PDB options word was being overwritten: the tunnel-type
assignment came after the ESN flag had been set, dropping
PDBOPTS_ESP_ESN.
Fixes: 53982ba2805d ("crypto/dpaa2_sec: support IPv6 tunnel for protocol offload")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
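For reference, a minimal standalone sketch of the ordering problem fixed below; the flag value is a placeholder for illustration only, the real PDBOPTS_ESP_ESN comes from the SEC descriptor headers. Reassigning the options word after OR-ing in the ESN flag silently drops the flag, which is why the ESN check has to follow the tunnel-type assignment.

#include <stdint.h>
#include <stdio.h>

#define FAKE_PDBOPTS_ESP_ESN 0x10u	/* placeholder value, illustration only */

int
main(void)
{
	uint32_t options;

	options = 20u << 16;			/* sizeof(struct ip) << 16 */
	options |= FAKE_PDBOPTS_ESP_ESN;	/* ESN requested */
	options = 40u << 16;			/* tunnel-type assignment clobbers the flag */

	printf("ESN kept? %s\n",
	       (options & FAKE_PDBOPTS_ESP_ESN) ? "yes" : "no");	/* prints "no" */
	return 0;
}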
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 2ab34a00f..14f0c523c 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2819,13 +2819,12 @@ dpaa2_sec_set_ipsec_session(struct rte_cryptodev *dev,
RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
flc->dhr = SEC_FLC_DHR_INBOUND;
memset(&decap_pdb, 0, sizeof(struct ipsec_decap_pdb));
- decap_pdb.options = sizeof(struct ip) << 16;
- if (ipsec_xform->options.esn)
- decap_pdb.options |= PDBOPTS_ESP_ESN;
decap_pdb.options = (ipsec_xform->tunnel.type ==
RTE_SECURITY_IPSEC_TUNNEL_IPV4) ?
sizeof(struct ip) << 16 :
sizeof(struct rte_ipv6_hdr) << 16;
+ if (ipsec_xform->options.esn)
+ decap_pdb.options |= PDBOPTS_ESP_ESN;
session->dir = DIR_DEC;
bufsize = cnstr_shdsc_ipsec_new_decap(priv->flc_desc[0].desc,
1, 0, SHR_SERIAL,
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 03/10] crypto/dpaa_sec: fix to check for aead as well
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 01/10] test/crypto: fix PDCP test support Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
` (7 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, stable, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
The auth-cipher check shall also verify that no AEAD algorithm is set,
so that AEAD sessions are not treated as an auth-cipher case.
Fixes: 1f14d500bce1 ("crypto/dpaa_sec: support IPsec protocol offload")
Cc: stable@dpdk.org
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 38cfdd378..e89cbcefb 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -266,7 +266,8 @@ static inline int is_auth_cipher(dpaa_sec_session *ses)
return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
(ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
(ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
- (ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC));
+ (ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC) &&
+ (ses->aead_alg == 0));
}
static inline int is_proto_ipsec(dpaa_sec_session *ses)
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (2 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
` (6 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
The GCM descriptors needlessly skip auth_only_len bytes in the output
buffer. This forces workarounds in the dpseci driver code and causes
one cryptodev GCM test case to fail. This patch changes the descriptor
construction and adjusts the dpseci driver accordingly; the
test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg case now passes.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 25 ++++++++-------------
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 ---------
drivers/crypto/dpaa_sec/dpaa_sec.c | 14 +++++-------
3 files changed, 15 insertions(+), 34 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 14f0c523c..8803e8d3c 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -350,14 +350,13 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
op_fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->aead.data.length + icv_len + auth_only_len) :
- sym_op->aead.data.length + auth_only_len;
+ (sym_op->aead.data.length + icv_len) :
+ sym_op->aead.data.length;
/* Configure Output SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off +
- RTE_ALIGN_CEIL(auth_only_len, 16) - auth_only_len);
- sge->length = mbuf->data_len - sym_op->aead.data.offset + auth_only_len;
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off + sym_op->aead.data.offset);
+ sge->length = mbuf->data_len - sym_op->aead.data.offset;
mbuf = mbuf->next;
/* o/p segs */
@@ -510,24 +509,21 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
if (auth_only_len)
DPAA2_SET_FLE_INTERNAL_JD(fle, auth_only_len);
fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->aead.data.length + icv_len + auth_only_len) :
- sym_op->aead.data.length + auth_only_len;
+ (sym_op->aead.data.length + icv_len) :
+ sym_op->aead.data.length;
DPAA2_SET_FLE_SG_EXT(fle);
/* Configure Output SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(dst));
- DPAA2_SET_FLE_OFFSET(sge, dst->data_off +
- RTE_ALIGN_CEIL(auth_only_len, 16) - auth_only_len);
- sge->length = sym_op->aead.data.length + auth_only_len;
+ DPAA2_SET_FLE_OFFSET(sge, dst->data_off + sym_op->aead.data.offset);
+ sge->length = sym_op->aead.data.length;
if (sess->dir == DIR_ENC) {
sge++;
DPAA2_SET_FLE_ADDR(sge,
DPAA2_VADDR_TO_IOVA(sym_op->aead.digest.data));
sge->length = sess->digest_length;
- DPAA2_SET_FD_LEN(fd, (sym_op->aead.data.length +
- sess->iv.length + auth_only_len));
}
DPAA2_SET_FLE_FIN(sge);
@@ -566,10 +562,6 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
sge->length = sess->digest_length;
- DPAA2_SET_FD_LEN(fd, (sym_op->aead.data.length +
- sess->digest_length +
- sess->iv.length +
- auth_only_len));
}
DPAA2_SET_FLE_FIN(sge);
@@ -578,6 +570,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
}
+ DPAA2_SET_FD_LEN(fd, fle->length);
return 0;
}
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 32ce787fa..c41cb2292 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -649,11 +649,6 @@ cnstr_shdsc_gcm_encap(uint32_t *descbuf, bool ps, bool swap,
MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, 4, 0);
pzeroassocjump1 = JUMP(p, zeroassocjump1, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, 4, 0);
-
- /* skip assoc data */
- SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
-
/* cryptlen = seqinlen - assoclen */
MATHB(p, SEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
@@ -756,11 +751,6 @@ cnstr_shdsc_gcm_decap(uint32_t *descbuf, bool ps, bool swap,
MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, 4, 0);
pzeroassocjump1 = JUMP(p, zeroassocjump1, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, 4, 0);
-
- /* skip assoc data */
- SEQFIFOSTORE(p, SKIP, 0, 0, VLF);
-
/* read assoc data */
SEQFIFOLOAD(p, AAD1, 0, CLASS1 | VLF | FLUSH1);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index e89cbcefb..c1c6c054a 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1180,10 +1180,9 @@ build_cipher_auth_gcm_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
out_sg = &cf->sg[0];
out_sg->extension = 1;
if (is_encode(ses))
- out_sg->length = sym->aead.data.length + ses->auth_only_len
- + ses->digest_length;
+ out_sg->length = sym->aead.data.length + ses->digest_length;
else
- out_sg->length = sym->aead.data.length + ses->auth_only_len;
+ out_sg->length = sym->aead.data.length;
/* output sg entries */
sg = &cf->sg[2];
@@ -1192,9 +1191,8 @@ build_cipher_auth_gcm_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* 1st seg */
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->aead.data.offset +
- ses->auth_only_len;
- sg->offset = sym->aead.data.offset - ses->auth_only_len;
+ sg->length = mbuf->data_len - sym->aead.data.offset;
+ sg->offset = sym->aead.data.offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1367,8 +1365,8 @@ build_cipher_auth_gcm(struct rte_crypto_op *op, dpaa_sec_session *ses)
sg++;
qm_sg_entry_set64(&cf->sg[0], dpaa_mem_vtop(sg));
qm_sg_entry_set64(sg,
- dst_start_addr + sym->aead.data.offset - ses->auth_only_len);
- sg->length = sym->aead.data.length + ses->auth_only_len;
+ dst_start_addr + sym->aead.data.offset);
+ sg->length = sym->aead.data.length;
length = sg->length;
if (is_encode(ses)) {
cpu_to_hw_sg(sg);
--
2.17.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (3 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
` (5 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for auth-only data trailing after the cipher data.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/caam_jr/caam_jr.c | 24 +--
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 17 +-
drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++++++++------------
drivers/crypto/dpaa_sec/dpaa_sec.c | 35 +++-
4 files changed, 121 insertions(+), 122 deletions(-)
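As a worked example of the (auth_tail_len << 16) | auth_hdr_len encoding introduced below, with made-up offsets rather than a real test vector: if authentication covers bytes [0, 100) and cipher covers bytes [16, 80), the head is 16 bytes, the tail is 20 bytes, and the per-job command word becomes 0x80140010.

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* illustrative op layout: auth [0, 100), cipher [16, 80) */
	uint32_t auth_off = 0, auth_len = 100;
	uint32_t cipher_off = 16, cipher_len = 64;

	uint16_t auth_hdr_len = cipher_off - auth_off;			/* 16 */
	uint16_t auth_tail_len = auth_len - cipher_len - auth_hdr_len;	/* 20 */
	uint32_t cmd = 0x80000000u |
		       ((uint32_t)auth_tail_len << 16) | auth_hdr_len;

	printf("fd->cmd = 0x%08x\n", cmd);	/* 0x80140010 */
	return 0;
}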
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 57101d9a6..6ceba18f1 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -450,13 +450,11 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)
&alginfo_c, &alginfo_a);
}
} else {
- /* Auth_only_len is set as 0 here and it will be
- * overwritten in fd for each packet.
- */
+ /* Auth_only_len is overwritten in fd for each job */
shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL,
&alginfo_c, &alginfo_a,
- ses->iv.length, 0,
+ ses->iv.length,
ses->digest_length, ses->dir);
}
}
@@ -1066,10 +1064,11 @@ build_cipher_auth_sg(struct rte_crypto_op *op, struct caam_jr_session *ses)
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
struct sec_job_descriptor_t *jobdescr;
- uint32_t auth_only_len;
-
- auth_only_len = op->sym->auth.data.length -
- op->sym->cipher.data.length;
+ uint16_t auth_hdr_len = sym->cipher.data.offset -
+ sym->auth.data.offset;
+ uint16_t auth_tail_len = sym->auth.data.length -
+ sym->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
if (sym->m_dst) {
mbuf = sym->m_dst;
@@ -1208,10 +1207,11 @@ build_cipher_auth(struct rte_crypto_op *op, struct caam_jr_session *ses)
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
struct sec_job_descriptor_t *jobdescr;
- uint32_t auth_only_len;
-
- auth_only_len = op->sym->auth.data.length -
- op->sym->cipher.data.length;
+ uint16_t auth_hdr_len = sym->cipher.data.offset -
+ sym->auth.data.offset;
+ uint16_t auth_tail_len = sym->auth.data.length -
+ sym->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
src_start_addr = rte_pktmbuf_iova(sym->m_src);
if (sym->m_dst)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 8803e8d3c..23a3fa929 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -583,8 +583,11 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
struct ctxt_priv *priv = sess->ctxt;
struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
struct sec_flow_context *flc;
- uint32_t auth_only_len = sym_op->auth.data.length -
- sym_op->cipher.data.length;
+ uint16_t auth_hdr_len = sym_op->cipher.data.offset -
+ sym_op->auth.data.offset;
+ uint16_t auth_tail_len = sym_op->auth.data.length -
+ sym_op->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
int icv_len = sess->digest_length;
uint8_t *old_icv;
struct rte_mbuf *mbuf;
@@ -727,8 +730,12 @@ build_authenc_fd(dpaa2_sec_session *sess,
struct ctxt_priv *priv = sess->ctxt;
struct qbman_fle *fle, *sge;
struct sec_flow_context *flc;
- uint32_t auth_only_len = sym_op->auth.data.length -
- sym_op->cipher.data.length;
+ uint16_t auth_hdr_len = sym_op->cipher.data.offset -
+ sym_op->auth.data.offset;
+ uint16_t auth_tail_len = sym_op->auth.data.length -
+ sym_op->cipher.data.length - auth_hdr_len;
+ uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
+
int icv_len = sess->digest_length, retval;
uint8_t *old_icv;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -2217,7 +2224,6 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
dpaa2_sec_session *session)
{
- struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
int bufsize;
@@ -2411,7 +2417,6 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
0, SHR_SERIAL,
&cipherdata, &authdata,
session->iv.length,
- ctxt->auth_only_len,
session->digest_length,
session->dir);
if (bufsize < 0) {
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
index d071f46fd..d1ffd7fd2 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/ipsec.h
@@ -1412,9 +1412,6 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
*
* @ivlen: length of the IV to be read from the input frame, before any data
* to be processed
- * @auth_only_len: length of the data to be authenticated-only (commonly IP
- * header, IV, Sequence number and SPI)
- * Note: Extended Sequence Number processing is NOT supported
*
* @trunc_len: the length of the ICV to be written to the output frame. If 0,
* then the corresponding length of the digest, according to the
@@ -1425,30 +1422,30 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
* will be done correctly:
* For encapsulation:
* Input:
- * +----+----------------+---------------------------------------------+
- * | IV | Auth-only data | Padded data to be authenticated & Encrypted |
- * +----+----------------+---------------------------------------------+
+ * +----+----------------+-----------------------------------------------+
+ * | IV | Auth-only head | Padded data to be auth & Enc | Auth-only tail |
+ * +----+----------------+-----------------------------------------------+
* Output:
* +--------------------------------------+
* | Authenticated & Encrypted data | ICV |
* +--------------------------------+-----+
-
+ *
* For decapsulation:
* Input:
- * +----+----------------+--------------------------------+-----+
- * | IV | Auth-only data | Authenticated & Encrypted data | ICV |
- * +----+----------------+--------------------------------+-----+
+ * +----+----------------+-----------------+----------------------+
+ * | IV | Auth-only head | Auth & Enc data | Auth-only tail | ICV |
+ * +----+----------------+-----------------+----------------------+
* Output:
- * +----+--------------------------+
+ * +----+---------------------------+
* | Decrypted & authenticated data |
- * +----+--------------------------+
+ * +----+---------------------------+
*
* Note: This descriptor can use per-packet commands, encoded as below in the
* DPOVRD register:
- * 32 24 16 0
- * +------+---------------------+
- * | 0x80 | 0x00| auth_only_len |
- * +------+---------------------+
+ * 32 28 16 1
+ * +------+------------------------------+
+ * | 0x8 | auth_tail_len | auth_hdr_len |
+ * +------+------------------------------+
*
* This mechanism is available only for SoCs having SEC ERA >= 3. In other
* words, this will not work for P4080TO2
@@ -1465,7 +1462,7 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
enum rta_share_type share,
struct alginfo *cipherdata,
struct alginfo *authdata,
- uint16_t ivlen, uint16_t auth_only_len,
+ uint16_t ivlen,
uint8_t trunc_len, uint8_t dir)
{
struct program prg;
@@ -1473,16 +1470,16 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
const bool need_dk = (dir == DIR_DEC) &&
(cipherdata->algtype == OP_ALG_ALGSEL_AES) &&
(cipherdata->algmode == OP_ALG_AAI_CBC);
+ int data_type;
- LABEL(skip_patch_len);
LABEL(keyjmp);
LABEL(skipkeys);
- LABEL(aonly_len_offset);
- REFERENCE(pskip_patch_len);
+ LABEL(proc_icv);
+ LABEL(no_auth_tail);
REFERENCE(pkeyjmp);
REFERENCE(pskipkeys);
- REFERENCE(read_len);
- REFERENCE(write_len);
+ REFERENCE(p_proc_icv);
+ REFERENCE(p_no_auth_tail);
PROGRAM_CNTXT_INIT(p, descbuf, 0);
@@ -1500,48 +1497,15 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
SHR_HDR(p, share, 1, SC);
- /*
- * M0 will contain the value provided by the user when creating
- * the shared descriptor. If the user provided an override in
- * DPOVRD, then M0 will contain that value
- */
- MATHB(p, MATH0, ADD, auth_only_len, MATH0, 4, IMMED2);
-
- if (rta_sec_era >= RTA_SEC_ERA_3) {
- /*
- * Check if the user wants to override the auth-only len
- */
- MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
-
- /*
- * No need to patch the length of the auth-only data read if
- * the user did not override it
- */
- pskip_patch_len = JUMP(p, skip_patch_len, LOCAL_JUMP, ALL_TRUE,
- MATH_N);
-
- /* Get auth-only len in M0 */
- MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
-
- /*
- * Since M0 is used in calculations, don't mangle it, copy
- * its content to M1 and use this for patching.
- */
- MATHB(p, MATH0, ADD, MATH1, MATH1, 4, 0);
-
- read_len = MOVE(p, DESCBUF, 0, MATH1, 0, 6, WAITCOMP | IMMED);
- write_len = MOVE(p, MATH1, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
-
- SET_LABEL(p, skip_patch_len);
- }
- /*
- * MATH0 contains the value in DPOVRD w/o the MSB, or the initial
- * value, as provided by the user at descriptor creation time
- */
- if (dir == DIR_ENC)
- MATHB(p, MATH0, ADD, ivlen, MATH0, 4, IMMED2);
- else
- MATHB(p, MATH0, ADD, ivlen + trunc_len, MATH0, 4, IMMED2);
+ /* Collect the (auth_tail || auth_hdr) len from DPOVRD */
+ MATHB(p, DPOVRD, ADD, 0x80000000, MATH2, 4, IMMED2);
+
+ /* Get auth_hdr len in MATH0 */
+ MATHB(p, MATH2, AND, 0xFFFF, MATH0, 4, IMMED2);
+
+ /* Get auth_tail len in MATH2 */
+ MATHB(p, MATH2, AND, 0xFFF0000, MATH2, 4, IMMED2);
+ MATHI(p, MATH2, RSHIFT, 16, MATH2, 4, IMMED2);
pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
@@ -1581,61 +1545,70 @@ cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
}
+ /* Read IV */
+ if (cipherdata->algmode == OP_ALG_AAI_CTR)
+ SEQLOAD(p, CONTEXT1, 16, ivlen, 0);
+ else
+ SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+ /*
+ * authenticate auth_hdr data
+ */
+ MATHB(p, MATH0, ADD, ZERO, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
/*
* Prepare the length of the data to be both encrypted/decrypted
* and authenticated/checked
*/
- MATHB(p, SEQINSZ, SUB, MATH0, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ if (dir == DIR_DEC) {
+ MATHB(p, VSEQINSZ, SUB, trunc_len, VSEQINSZ, 4, IMMED2);
+ data_type = MSGINSNOOP;
+ } else {
+ data_type = MSGOUTSNOOP;
+ }
- MATHB(p, VSEQINSZ, SUB, MATH3, VSEQOUTSZ, 4, 0);
+ MATHB(p, VSEQINSZ, ADD, ZERO, VSEQOUTSZ, 4, 0);
/* Prepare for writing the output frame */
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SET_LABEL(p, aonly_len_offset);
- /* Read IV */
- if (cipherdata->algmode == OP_ALG_AAI_CTR)
- SEQLOAD(p, CONTEXT1, 16, ivlen, 0);
- else
- SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+ /* Check if there is no auth-tail */
+ MATHB(p, MATH2, ADD, ZERO, MATH2, 4, 0);
+ p_no_auth_tail = JUMP(p, no_auth_tail, LOCAL_JUMP, ALL_TRUE, MATH_Z);
/*
- * Read data needed only for authentication. This is overwritten above
- * if the user requested it.
+ * Read input plain/cipher text, encrypt/decrypt & auth & write
+ * to output
*/
- SEQFIFOLOAD(p, MSG2, auth_only_len, 0);
+ SEQFIFOLOAD(p, data_type, 0, VLF | LAST1 | FLUSH1);
+
+ /* Authenticate auth tail */
+ MATHB(p, MATH2, ADD, ZERO, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+
+ /* Jump to process icv */
+ p_proc_icv = JUMP(p, proc_icv, LOCAL_JUMP, ALL_FALSE, MATH_Z);
+
+ SET_LABEL(p, no_auth_tail);
- if (dir == DIR_ENC) {
- /*
- * Read input plaintext, encrypt and authenticate & write to
- * output
- */
- SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+ SEQFIFOLOAD(p, data_type, 0, VLF | LAST1 | LAST2 | FLUSH1);
+ SET_LABEL(p, proc_icv);
+
+ if (dir == DIR_ENC)
/* Finally, write the ICV */
SEQSTORE(p, CONTEXT2, 0, trunc_len, 0);
- } else {
- /*
- * Read input ciphertext, decrypt and authenticate & write to
- * output
- */
- SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
-
+ else
/* Read the ICV to check */
SEQFIFOLOAD(p, ICV2, trunc_len, LAST2);
- }
PATCH_JUMP(p, pkeyjmp, keyjmp);
PATCH_JUMP(p, pskipkeys, skipkeys);
- PATCH_JUMP(p, pskipkeys, skipkeys);
-
- if (rta_sec_era >= RTA_SEC_ERA_3) {
- PATCH_JUMP(p, pskip_patch_len, skip_patch_len);
- PATCH_MOVE(p, read_len, aonly_len_offset);
- PATCH_MOVE(p, write_len, aonly_len_offset);
- }
-
+ PATCH_JUMP(p, p_no_auth_tail, no_auth_tail);
+ PATCH_JUMP(p, p_proc_icv, proc_icv);
return PROGRAM_FINALIZE(p);
}
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index c1c6c054a..019a7119f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -742,7 +742,7 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
*/
shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
- ses->iv.length, 0,
+ ses->iv.length,
ses->digest_length, ses->dir);
}
@@ -1753,7 +1753,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
struct rte_crypto_op *op;
struct dpaa_sec_job *cf;
dpaa_sec_session *ses;
- uint32_t auth_only_len, index, flags[DPAA_SEC_BURST] = {0};
+ uint16_t auth_hdr_len, auth_tail_len;
+ uint32_t index, flags[DPAA_SEC_BURST] = {0};
struct qman_fq *inq[DPAA_SEC_BURST];
while (nb_ops) {
@@ -1809,8 +1810,10 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
goto send_pkts;
}
- auth_only_len = op->sym->auth.data.length -
+ auth_hdr_len = op->sym->auth.data.length -
op->sym->cipher.data.length;
+ auth_tail_len = 0;
+
if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
((op->sym->m_dst == NULL) ||
rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
@@ -1824,8 +1827,15 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
cf = build_cipher_only(op, ses);
} else if (is_aead(ses)) {
cf = build_cipher_auth_gcm(op, ses);
- auth_only_len = ses->auth_only_len;
+ auth_hdr_len = ses->auth_only_len;
} else if (is_auth_cipher(ses)) {
+ auth_hdr_len =
+ op->sym->cipher.data.offset
+ - op->sym->auth.data.offset;
+ auth_tail_len =
+ op->sym->auth.data.length
+ - op->sym->cipher.data.length
+ - auth_hdr_len;
cf = build_cipher_auth(op, ses);
} else {
DPAA_SEC_DP_ERR("not supported ops");
@@ -1842,8 +1852,15 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
cf = build_cipher_only_sg(op, ses);
} else if (is_aead(ses)) {
cf = build_cipher_auth_gcm_sg(op, ses);
- auth_only_len = ses->auth_only_len;
+ auth_hdr_len = ses->auth_only_len;
} else if (is_auth_cipher(ses)) {
+ auth_hdr_len =
+ op->sym->cipher.data.offset
+ - op->sym->auth.data.offset;
+ auth_tail_len =
+ op->sym->auth.data.length
+ - op->sym->cipher.data.length
+ - auth_hdr_len;
cf = build_cipher_auth_sg(op, ses);
} else {
DPAA_SEC_DP_ERR("not supported ops");
@@ -1865,12 +1882,16 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
qm_fd_addr_set64(fd, dpaa_mem_vtop(cf->sg));
fd->_format1 = qm_fd_compound;
fd->length29 = 2 * sizeof(struct qm_sg_entry);
+
/* Auth_only_len is set as 0 in descriptor and it is
* overwritten here in the fd.cmd which will update
* the DPOVRD reg.
*/
- if (auth_only_len)
- fd->cmd = 0x80000000 | auth_only_len;
+ if (auth_hdr_len || auth_tail_len) {
+ fd->cmd = 0x80000000;
+ fd->cmd |=
+ ((auth_tail_len << 16) | auth_hdr_len);
+ }
/* In case of PDCP, per packet HFN is stored in
* mbuf priv after sym_op.
--
2.17.1
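A minimal illustrative sketch (not part of the submitted patch; the helper name is hypothetical) of how the DPOVRD override word written to fd->cmd in the hunk above is assembled: bit 31 enables the override, the auth trailer length occupies bits 16-30 and the auth header length bits 0-15.

#include <stdint.h>

/* Assemble the DPOVRD override word as in the hunk above:
 * bit 31 = override enable, bits 16-30 = auth trailer length,
 * bits 0-15 = auth header length.
 */
static inline uint32_t
dpovrd_word(uint16_t auth_hdr_len, uint16_t auth_tail_len)
{
        if (!auth_hdr_len && !auth_tail_len)
                return 0; /* no override needed, leave fd->cmd untouched */
        return 0x80000000u | ((uint32_t)auth_tail_len << 16) | auth_hdr_len;
}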
* [dpdk-dev] [PATCH v2 06/10] test/crypto: increase test cases support for dpaax
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (4 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
` (4 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 132 ++++++++++++++++++++++++++++++++------
1 file changed, 113 insertions(+), 19 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 879b31ceb..c4c730495 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12294,6 +12294,14 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_PDCP_PROTO_SGL_oop_128B_32B),
#endif
/** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_1),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12308,6 +12316,8 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_AES_GCM_authenticated_encryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
/** AES GCM Authenticated Decryption */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12324,6 +12334,40 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
test_AES_GCM_authenticated_decryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
+
+ /** AES GCM Authenticated Encryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_7),
+
+ /** AES GCM Authenticated Decryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_7),
/** AES GCM Authenticated Encryption 256 bits key */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12363,17 +12407,31 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
- /** Scatter-Gather */
+ /** Negative tests */
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
-
- /** Negative tests */
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
authentication_verify_HMAC_SHA1_fail_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12431,6 +12489,14 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_PDCP_PROTO_SGL_oop_128B_32B),
#endif
/** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_1),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12445,6 +12511,8 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_AES_GCM_authenticated_encryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
/** AES GCM Authenticated Decryption */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12461,6 +12529,8 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_AES_GCM_authenticated_decryption_test_case_6),
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
/** AES GCM Authenticated Encryption 192 bits key */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12532,16 +12602,6 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
- /** Scatter-Gather */
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_400B),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_400B_1seg),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_AES_GCM_auth_encrypt_SGL_out_of_place_1500B_2000B),
-
/** SNOW 3G encrypt only (UEA2) */
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1),
@@ -12557,9 +12617,9 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_snow3g_decryption_test_case_1_oop),
+ test_snow3g_encryption_test_case_1_oop_sgl),
TEST_CASE_ST(ut_setup, ut_teardown,
- test_snow3g_encryption_test_case_1_oop_sgl),
+ test_snow3g_decryption_test_case_1_oop),
/** SNOW 3G decrypt only (UEA2) */
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12606,7 +12666,41 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_zuc_hash_generate_test_case_8),
+ /** HMAC_MD5 Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_2),
+
/** Negative tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
authentication_verify_HMAC_SHA1_fail_data_corrupt),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.17.1
* [dpdk-dev] [PATCH v2 07/10] test/crypto: add test to test ESN like case
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (5 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
` (3 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal, Vakul Garg
This patch adds support for the case when there is an auth-only header and
an auth-only tailroom area, which simulates IPsec ESN-like cases.
It also enables the new test case for the openssl and dpaaX platforms.
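As a minimal sketch (illustration only; the helper name is hypothetical), the auth-only header and trailer lengths follow directly from the cipher and auth regions of the symmetric op, assuming the auth region fully encloses the cipher region as in the new test vector:

#include <stdint.h>
#include <rte_crypto_sym.h>

/* Derive the auth-only header and trailer lengths from a symmetric op
 * whose auth region encloses the cipher region (ESN-like layout).
 */
static inline void
esn_auth_lens(const struct rte_crypto_sym_op *sym,
              uint16_t *auth_hdr_len, uint16_t *auth_tail_len)
{
        *auth_hdr_len = sym->cipher.data.offset - sym->auth.data.offset;
        *auth_tail_len = sym->auth.data.length -
                         sym->cipher.data.length - *auth_hdr_len;
}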
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
app/test/test_cryptodev.c | 287 ++++++++++++++++++++-
app/test/test_cryptodev_aes_test_vectors.h | 67 +++++
2 files changed, 349 insertions(+), 5 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c4c730495..ec0473016 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9741,6 +9741,8 @@ test_AES_GMAC_authentication_verify_test_case_4(void)
struct test_crypto_vector {
enum rte_crypto_cipher_algorithm crypto_algo;
+ unsigned int cipher_offset;
+ unsigned int cipher_len;
struct {
uint8_t data[64];
@@ -9763,6 +9765,7 @@ struct test_crypto_vector {
} ciphertext;
enum rte_crypto_auth_algorithm auth_algo;
+ unsigned int auth_offset;
struct {
uint8_t data[128];
@@ -9838,6 +9841,8 @@ aes128_gmac_test_vector = {
static const struct test_crypto_vector
aes128cbc_hmac_sha1_test_vector = {
.crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .cipher_offset = 0,
+ .cipher_len = 512,
.cipher_key = {
.data = {
0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
@@ -9861,6 +9866,7 @@ aes128cbc_hmac_sha1_test_vector = {
.len = 512
},
.auth_algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .auth_offset = 0,
.auth_key = {
.data = {
0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
@@ -9879,6 +9885,53 @@ aes128cbc_hmac_sha1_test_vector = {
}
};
+static const struct test_crypto_vector
+aes128cbc_hmac_sha1_aad_test_vector = {
+ .crypto_algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .cipher_offset = 12,
+ .cipher_len = 496,
+ .cipher_key = {
+ .data = {
+ 0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+ 0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
+ },
+ .len = 16
+ },
+ .plaintext = {
+ .data = plaintext_hash,
+ .len = 512
+ },
+ .ciphertext = {
+ .data = ciphertext512_aes128cbc_aad,
+ .len = 512
+ },
+ .auth_algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .auth_offset = 0,
+ .auth_key = {
+ .data = {
+ 0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+ 0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+ 0xDE, 0xF4, 0xDE, 0xAD
+ },
+ .len = 20
+ },
+ .digest = {
+ .data = {
+ 0x1F, 0x6A, 0xD2, 0x8B, 0x4B, 0xB3, 0xC0, 0x9E,
+ 0x86, 0x9B, 0x3A, 0xF2, 0x00, 0x5B, 0x4F, 0x08,
+ 0x62, 0x8D, 0x62, 0x65
+ },
+ .len = 20
+ }
+};
+
static void
data_corruption(uint8_t *data)
{
@@ -10121,11 +10174,11 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
- sym_op->cipher.data.length = reference->ciphertext.len;
- sym_op->cipher.data.offset = 0;
+ sym_op->cipher.data.length = reference->cipher_len;
+ sym_op->cipher.data.offset = reference->cipher_offset;
- sym_op->auth.data.length = reference->ciphertext.len;
- sym_op->auth.data.offset = 0;
+ sym_op->auth.data.length = reference->plaintext.len;
+ sym_op->auth.data.offset = reference->auth_offset;
return 0;
}
@@ -10336,6 +10389,193 @@ test_authenticated_decryption_fail_when_corruption(
return 0;
}
+static int
+test_authenticated_encryt_with_esn(
+ struct crypto_testsuite_params *ts_params,
+ struct crypto_unittest_params *ut_params,
+ const struct test_crypto_vector *reference)
+{
+ int retval;
+
+ uint8_t *authciphertext, *plaintext, *auth_tag;
+ uint16_t plaintext_pad_len;
+ uint8_t cipher_key[reference->cipher_key.len + 1];
+ uint8_t auth_key[reference->auth_key.len + 1];
+
+ /* Create session */
+ memcpy(cipher_key, reference->cipher_key.data,
+ reference->cipher_key.len);
+ memcpy(auth_key, reference->auth_key.data, reference->auth_key.len);
+
+ /* Setup Cipher Parameters */
+ ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ ut_params->cipher_xform.cipher.algo = reference->crypto_algo;
+ ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ ut_params->cipher_xform.cipher.key.data = cipher_key;
+ ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
+
+ ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+ /* Setup Authentication Parameters */
+ ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+ ut_params->auth_xform.auth.algo = reference->auth_algo;
+ ut_params->auth_xform.auth.key.length = reference->auth_key.len;
+ ut_params->auth_xform.auth.key.data = auth_key;
+ ut_params->auth_xform.auth.digest_length = reference->digest.len;
+ ut_params->auth_xform.next = NULL;
+
+ /* Create Crypto session*/
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess,
+ &ut_params->cipher_xform,
+ ts_params->session_priv_mpool);
+
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ TEST_ASSERT_NOT_NULL(ut_params->ibuf,
+ "Failed to allocate input buffer in mempool");
+
+ /* clear mbuf payload */
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ reference->plaintext.len);
+ TEST_ASSERT_NOT_NULL(plaintext, "no room to append plaintext");
+ memcpy(plaintext, reference->plaintext.data, reference->plaintext.len);
+
+ /* Create operation */
+ retval = create_cipher_auth_operation(ts_params,
+ ut_params,
+ reference, 0);
+
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+
+ TEST_ASSERT_NOT_NULL(ut_params->op, "no crypto operation returned");
+
+ TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+ "crypto op processing failed");
+
+ plaintext_pad_len = RTE_ALIGN_CEIL(reference->plaintext.len, 16);
+
+ authciphertext = rte_pktmbuf_mtod_offset(ut_params->ibuf, uint8_t *,
+ ut_params->op->sym->auth.data.offset);
+ auth_tag = authciphertext + plaintext_pad_len;
+ debug_hexdump(stdout, "ciphertext:", authciphertext,
+ reference->ciphertext.len);
+ debug_hexdump(stdout, "auth tag:", auth_tag, reference->digest.len);
+
+ /* Validate obuf */
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(
+ authciphertext,
+ reference->ciphertext.data,
+ reference->ciphertext.len,
+ "Ciphertext data not as expected");
+
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(
+ auth_tag,
+ reference->digest.data,
+ reference->digest.len,
+ "Generated digest not as expected");
+
+ return TEST_SUCCESS;
+
+}
+
+static int
+test_authenticated_decrypt_with_esn(
+ struct crypto_testsuite_params *ts_params,
+ struct crypto_unittest_params *ut_params,
+ const struct test_crypto_vector *reference)
+{
+ int retval;
+
+ uint8_t *ciphertext;
+ uint8_t cipher_key[reference->cipher_key.len + 1];
+ uint8_t auth_key[reference->auth_key.len + 1];
+
+ /* Create session */
+ memcpy(cipher_key, reference->cipher_key.data,
+ reference->cipher_key.len);
+ memcpy(auth_key, reference->auth_key.data, reference->auth_key.len);
+
+ /* Setup Authentication Parameters */
+ ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+ ut_params->auth_xform.auth.algo = reference->auth_algo;
+ ut_params->auth_xform.auth.key.length = reference->auth_key.len;
+ ut_params->auth_xform.auth.key.data = auth_key;
+ ut_params->auth_xform.auth.digest_length = reference->digest.len;
+ ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+ /* Setup Cipher Parameters */
+ ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ ut_params->cipher_xform.next = NULL;
+ ut_params->cipher_xform.cipher.algo = reference->crypto_algo;
+ ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ ut_params->cipher_xform.cipher.key.data = cipher_key;
+ ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
+
+ /* Create Crypto session*/
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess,
+ &ut_params->auth_xform,
+ ts_params->session_priv_mpool);
+
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ TEST_ASSERT_NOT_NULL(ut_params->ibuf,
+ "Failed to allocate input buffer in mempool");
+
+ /* clear mbuf payload */
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ ciphertext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ reference->ciphertext.len);
+ TEST_ASSERT_NOT_NULL(ciphertext, "no room to append ciphertext");
+ memcpy(ciphertext, reference->ciphertext.data,
+ reference->ciphertext.len);
+
+ /* Create operation */
+ retval = create_cipher_auth_verify_operation(ts_params,
+ ut_params,
+ reference);
+
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+
+ TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
+ TEST_ASSERT_EQUAL(ut_params->op->status,
+ RTE_CRYPTO_OP_STATUS_SUCCESS,
+ "crypto op processing passed");
+
+ ut_params->obuf = ut_params->op->sym->m_src;
+ TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+ return 0;
+}
+
static int
create_aead_operation_SGL(enum rte_crypto_aead_operation op,
const struct aead_test_data *tdata,
@@ -10809,6 +11049,24 @@ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt(void)
&aes128cbc_hmac_sha1_test_vector);
}
+static int
+auth_encrypt_AES128CBC_HMAC_SHA1_esn_check(void)
+{
+ return test_authenticated_encryt_with_esn(
+ &testsuite_params,
+ &unittest_params,
+ &aes128cbc_hmac_sha1_aad_test_vector);
+}
+
+static int
+auth_decrypt_AES128CBC_HMAC_SHA1_esn_check(void)
+{
+ return test_authenticated_decrypt_with_esn(
+ &testsuite_params,
+ &unittest_params,
+ &aes128cbc_hmac_sha1_aad_test_vector);
+}
+
#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
/* global AESNI slave IDs for the scheduler test */
@@ -11830,6 +12088,13 @@ static struct unit_test_suite cryptodev_openssl_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
@@ -12441,6 +12706,12 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
@@ -12454,7 +12725,6 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
test_device_configure_invalid_dev_id),
TEST_CASE_ST(ut_setup, ut_teardown,
test_multi_session),
-
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_chain_dpaa2_sec_all),
TEST_CASE_ST(ut_setup, ut_teardown,
@@ -12710,6 +12980,13 @@ static struct unit_test_suite cryptodev_dpaa2_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+ /* ESN Testcase */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_encrypt_AES128CBC_HMAC_SHA1_esn_check),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decrypt_AES128CBC_HMAC_SHA1_esn_check),
+
TEST_CASES_END() /**< NULL terminate unit test array */
}
};
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index 46239efb7..54a5d75b2 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -356,6 +356,73 @@ static const struct blockcipher_test_data null_test_data_chain_x1_multiple = {
}
};
+static const uint8_t ciphertext512_aes128cbc_aad[] = {
+ 0x57, 0x68, 0x61, 0x74, 0x20, 0x61, 0x20, 0x6C,
+ 0x6F, 0x75, 0x73, 0x79, 0x6D, 0x70, 0xB4, 0xAD,
+ 0x09, 0x7C, 0xD7, 0x52, 0xD6, 0xF2, 0xBF, 0xD1,
+ 0x9D, 0x79, 0xC6, 0xB6, 0x8F, 0x94, 0xEB, 0xD8,
+ 0xBA, 0x5E, 0x01, 0x49, 0x7D, 0xB3, 0xC5, 0xFE,
+ 0x18, 0xF4, 0xE3, 0x60, 0x8C, 0x84, 0x68, 0x13,
+ 0x33, 0x06, 0x85, 0x60, 0xD3, 0xE7, 0x8A, 0xB5,
+ 0x23, 0xA2, 0xDE, 0x52, 0x5C, 0xB6, 0x26, 0x37,
+ 0xBB, 0x23, 0x8A, 0x38, 0x07, 0x85, 0xB6, 0x2E,
+ 0xC3, 0x69, 0x57, 0x79, 0x6B, 0xE4, 0xD7, 0x86,
+ 0x23, 0x72, 0x4C, 0x65, 0x49, 0x08, 0x1E, 0xF3,
+ 0xCC, 0x71, 0x4C, 0x45, 0x97, 0x03, 0xBC, 0xA0,
+ 0x9D, 0xF0, 0x4F, 0x5D, 0xEC, 0x40, 0x6C, 0xC6,
+ 0x52, 0xC0, 0x9D, 0x1C, 0xDC, 0x8B, 0xC2, 0xFA,
+ 0x35, 0xA7, 0x3A, 0x00, 0x04, 0x1C, 0xA6, 0x91,
+ 0x5D, 0xEB, 0x07, 0xA1, 0xB9, 0x3E, 0xD1, 0xB6,
+ 0xCA, 0x96, 0xEC, 0x71, 0xF7, 0x7D, 0xB6, 0x09,
+ 0x3D, 0x19, 0x6E, 0x75, 0x03, 0xC3, 0x1A, 0x4E,
+ 0x5B, 0x4D, 0xEA, 0xD9, 0x92, 0x96, 0x01, 0xFB,
+ 0xA3, 0xC2, 0x6D, 0xC4, 0x17, 0x6B, 0xB4, 0x3B,
+ 0x1E, 0x87, 0x54, 0x26, 0x95, 0x63, 0x07, 0x73,
+ 0xB6, 0xBA, 0x52, 0xD7, 0xA7, 0xD0, 0x9C, 0x75,
+ 0x8A, 0xCF, 0xC4, 0x3C, 0x4A, 0x55, 0x0E, 0x53,
+ 0xEC, 0xE0, 0x31, 0x51, 0xB7, 0xB7, 0xD2, 0xB4,
+ 0xF3, 0x2B, 0x70, 0x6D, 0x15, 0x9E, 0x57, 0x30,
+ 0x72, 0xE5, 0xA4, 0x71, 0x5F, 0xA4, 0xE8, 0x7C,
+ 0x46, 0x58, 0x36, 0x71, 0x91, 0x55, 0xAA, 0x99,
+ 0x3B, 0x3F, 0xF6, 0xA2, 0x9D, 0x27, 0xBF, 0xC2,
+ 0x62, 0x2C, 0x85, 0xB7, 0x51, 0xDD, 0xFD, 0x7B,
+ 0x8B, 0xB5, 0xDD, 0x2A, 0x73, 0xF8, 0x93, 0x9A,
+ 0x3F, 0xAD, 0x1D, 0xF0, 0x46, 0xD1, 0x76, 0x83,
+ 0x71, 0x4E, 0xD3, 0x0D, 0x64, 0x8C, 0xC3, 0xE6,
+ 0x03, 0xED, 0xE8, 0x53, 0x23, 0x1A, 0xC7, 0x86,
+ 0xEB, 0x87, 0xD6, 0x78, 0xF9, 0xFB, 0x9C, 0x1D,
+ 0xE7, 0x4E, 0xC0, 0x70, 0x27, 0x7A, 0x43, 0xE2,
+ 0x5D, 0xA4, 0x10, 0x40, 0xBE, 0x61, 0x0D, 0x2B,
+ 0x25, 0x08, 0x75, 0x91, 0xB5, 0x5A, 0x26, 0xC8,
+ 0x32, 0xA7, 0xC6, 0x88, 0xBF, 0x75, 0x94, 0xCC,
+ 0x58, 0xA4, 0xFE, 0x2F, 0xF7, 0x5C, 0xD2, 0x36,
+ 0x66, 0x55, 0xF0, 0xEA, 0xF5, 0x64, 0x43, 0xE7,
+ 0x6D, 0xE0, 0xED, 0xA1, 0x10, 0x0A, 0x84, 0x07,
+ 0x11, 0x88, 0xFA, 0xA1, 0xD3, 0xA0, 0x00, 0x5D,
+ 0xEB, 0xB5, 0x62, 0x01, 0x72, 0xC1, 0x9B, 0x39,
+ 0x0B, 0xD3, 0xAF, 0x04, 0x19, 0x42, 0xEC, 0xFF,
+ 0x4B, 0xB3, 0x5E, 0x87, 0x27, 0xE4, 0x26, 0x57,
+ 0x76, 0xCD, 0x36, 0x31, 0x5B, 0x94, 0x74, 0xFF,
+ 0x33, 0x91, 0xAA, 0xD1, 0x45, 0x34, 0xC2, 0x11,
+ 0xF0, 0x35, 0x44, 0xC9, 0xD5, 0xA2, 0x5A, 0xC2,
+ 0xE9, 0x9E, 0xCA, 0xE2, 0x6F, 0xD2, 0x40, 0xB4,
+ 0x93, 0x42, 0x78, 0x20, 0x92, 0x88, 0xC7, 0x16,
+ 0xCF, 0x15, 0x54, 0x7B, 0xE1, 0x46, 0x38, 0x69,
+ 0xB8, 0xE4, 0xF1, 0x81, 0xF0, 0x08, 0x6F, 0x92,
+ 0x6D, 0x1A, 0xD9, 0x93, 0xFA, 0xD7, 0x35, 0xFE,
+ 0x7F, 0x59, 0x43, 0x1D, 0x3A, 0x3B, 0xFC, 0xD0,
+ 0x14, 0x95, 0x1E, 0xB2, 0x04, 0x08, 0x4F, 0xC6,
+ 0xEA, 0xE8, 0x22, 0xF3, 0xD7, 0x66, 0x93, 0xAA,
+ 0xFD, 0xA0, 0xFE, 0x03, 0x96, 0x54, 0x78, 0x35,
+ 0x18, 0xED, 0xB7, 0x2F, 0x40, 0xE3, 0x8E, 0x22,
+ 0xC6, 0xDA, 0xB0, 0x8E, 0xA0, 0xA1, 0x62, 0x03,
+ 0x63, 0x34, 0x11, 0xF5, 0x9E, 0xAA, 0x6B, 0xC4,
+ 0x14, 0x75, 0x4C, 0xF4, 0xD8, 0xD9, 0xF1, 0x76,
+ 0xE3, 0xD3, 0x55, 0xCE, 0x22, 0x7D, 0x4A, 0xB7,
+ 0xBB, 0x7F, 0x4F, 0x09, 0x88, 0x70, 0x6E, 0x09,
+ 0x84, 0x6B, 0x24, 0x19, 0x2C, 0x20, 0x73, 0x75
+};
+
/* AES128-CTR-SHA1 test vector */
static const struct blockcipher_test_data aes_test_data_1 = {
.crypto_algo = RTE_CRYPTO_CIPHER_AES_CTR,
--
2.17.1
* [dpdk-dev] [PATCH v2 08/10] crypto/dpaa_sec: add support for snow3G and ZUC
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (6 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
This patch adds support for ZUC and SNOW 3G in non-PDCP offload mode.
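Non-PDCP offload mode here means the algorithms are exposed through plain cryptodev cipher/auth transforms rather than rte_security PDCP sessions. A minimal sketch of a SNOW 3G (UEA2) cipher-only transform setup (the helper name is illustrative; key bytes are placeholders, and the 16-byte IV matches the capability entry added below):

#include <string.h>
#include <rte_cryptodev.h>
#include <rte_crypto_sym.h>

/* Fill a cipher-only SNOW 3G UEA2 transform; the IV itself is placed in
 * the crypto op at iv_offset by the application.
 */
static void
setup_snow3g_cipher_xform(struct rte_crypto_sym_xform *xform,
                          uint8_t *key, uint16_t key_len, uint16_t iv_offset)
{
        memset(xform, 0, sizeof(*xform));
        xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        xform->next = NULL;
        xform->cipher.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
        xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        xform->cipher.key.data = key;
        xform->cipher.key.length = key_len;     /* 16 bytes for SNOW 3G */
        xform->cipher.iv.offset = iv_offset;
        xform->cipher.iv.length = 16;           /* per the capability table below */
}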
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/cryptodevs/dpaa_sec.rst | 4 +
doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
drivers/crypto/dpaa_sec/dpaa_sec.c | 378 ++++++++++++++++----
drivers/crypto/dpaa_sec/dpaa_sec.h | 91 ++++-
4 files changed, 407 insertions(+), 70 deletions(-)
diff --git a/doc/guides/cryptodevs/dpaa_sec.rst b/doc/guides/cryptodevs/dpaa_sec.rst
index 0a2600634..7e9fcf625 100644
--- a/doc/guides/cryptodevs/dpaa_sec.rst
+++ b/doc/guides/cryptodevs/dpaa_sec.rst
@@ -58,6 +58,8 @@ Cipher algorithms:
* ``RTE_CRYPTO_CIPHER_AES128_CTR``
* ``RTE_CRYPTO_CIPHER_AES192_CTR``
* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
+* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
Hash algorithms:
@@ -66,7 +68,9 @@ Hash algorithms:
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
AEAD algorithms:
diff --git a/doc/guides/cryptodevs/features/dpaa_sec.ini b/doc/guides/cryptodevs/features/dpaa_sec.ini
index 954a70808..243f3e1d6 100644
--- a/doc/guides/cryptodevs/features/dpaa_sec.ini
+++ b/doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -25,6 +25,8 @@ AES CTR (128) = Y
AES CTR (192) = Y
AES CTR (256) = Y
3DES CBC = Y
+SNOW3G UEA2 = Y
+ZUC EEA3 = Y
;
; Supported authentication algorithms of the 'dpaa_sec' crypto driver.
@@ -36,6 +38,8 @@ SHA224 HMAC = Y
SHA256 HMAC = Y
SHA384 HMAC = Y
SHA512 HMAC = Y
+SNOW3G UIA2 = Y
+ZUC EIA3 = Y
;
; Supported AEAD algorithms of the 'dpaa_sec' crypto driver.
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 019a7119f..970cdf0cc 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -630,39 +630,171 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
} else if (is_proto_pdcp(ses)) {
shared_desc_len = dpaa_sec_prep_pdcp_cdb(ses);
} else if (is_cipher_only(ses)) {
- caam_cipher_alg(ses, &alginfo_c);
- if (alginfo_c.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
-
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- } else if (is_auth_only(ses)) {
- caam_auth_alg(ses, &alginfo_a);
- if (alginfo_a.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
+ switch (ses->cipher_alg) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ alginfo_c.algtype = 0;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ alginfo_c.algtype = OP_ALG_ALGSEL_AES;
+ alginfo_c.algmode = OP_ALG_AAI_CBC;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
+ alginfo_c.algmode = OP_ALG_AAI_CBC;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ alginfo_c.algtype = OP_ALG_ALGSEL_AES;
+ alginfo_c.algmode = OP_ALG_AAI_CTR;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CTR:
+ alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
+ alginfo_c.algmode = OP_ALG_AAI_CTR;
+ shared_desc_len = cnstr_shdsc_blkcipher(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_c,
+ NULL,
+ ses->iv.length,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ alginfo_c.algtype = OP_ALG_ALGSEL_SNOW_F8;
+ shared_desc_len = cnstr_shdsc_snow_f8(
+ cdb->sh_desc, true, swap,
+ &alginfo_c,
+ ses->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ alginfo_c.algtype = OP_ALG_ALGSEL_ZUCE;
+ shared_desc_len = cnstr_shdsc_zuce(
+ cdb->sh_desc, true, swap,
+ &alginfo_c,
+ ses->dir);
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported cipher alg %d",
+ ses->cipher_alg);
return -ENOTSUP;
}
-
+ } else if (is_auth_only(ses)) {
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
-
- shared_desc_len = cnstr_shdsc_hmac(cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
+ switch (ses->auth_alg) {
+ case RTE_CRYPTO_AUTH_NULL:
+ alginfo_a.algtype = 0;
+ ses->digest_length = 0;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_MD5;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA1;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA224;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA256;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA384;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SHA512;
+ alginfo_a.algmode = OP_ALG_AAI_HMAC;
+ shared_desc_len = cnstr_shdsc_hmac(
+ cdb->sh_desc, true,
+ swap, SHR_NEVER, &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ alginfo_a.algtype = OP_ALG_ALGSEL_SNOW_F9;
+ alginfo_a.algmode = OP_ALG_AAI_F9;
+ ses->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
+ shared_desc_len = cnstr_shdsc_snow_f9(
+ cdb->sh_desc, true, swap,
+ &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ alginfo_a.algtype = OP_ALG_ALGSEL_ZUCA;
+ alginfo_a.algmode = OP_ALG_AAI_F9;
+ ses->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
+ shared_desc_len = cnstr_shdsc_zuca(
+ cdb->sh_desc, true, swap,
+ &alginfo_a,
+ !ses->dir,
+ ses->digest_length);
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
+ }
} else if (is_aead(ses)) {
caam_aead_alg(ses, &alginfo);
if (alginfo.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
@@ -849,6 +981,21 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
struct qm_sg_entry *sg, *out_sg, *in_sg;
phys_addr_t start_addr;
uint8_t *old_digest, extra_segs;
+ int data_len, data_offset;
+
+ data_len = sym->auth.data.length;
+ data_offset = sym->auth.data.offset;
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("AUTH: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
if (is_decode(ses))
extra_segs = 3;
@@ -879,23 +1026,52 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* need to extend the input to a compound frame */
in_sg->extension = 1;
in_sg->final = 1;
- in_sg->length = sym->auth.data.length;
+ in_sg->length = data_len;
qm_sg_entry_set64(in_sg, dpaa_mem_vtop(&cf->sg[2]));
/* 1st seg */
sg = in_sg + 1;
- qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->auth.data.offset;
- sg->offset = sym->auth.data.offset;
- /* Successive segs */
- mbuf = mbuf->next;
- while (mbuf) {
+ if (ses->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ ses->iv.offset);
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sg->length = 12;
+ } else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sg->length = 8;
+ } else {
+ sg->length = ses->iv.length;
+ }
+ qm_sg_entry_set64(sg, dpaa_mem_vtop(iv_ptr));
+ in_sg->length += sg->length;
cpu_to_hw_sg(sg);
sg++;
- qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len;
- mbuf = mbuf->next;
+ }
+
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = data_offset;
+
+ if (data_len <= (mbuf->data_len - data_offset)) {
+ sg->length = data_len;
+ } else {
+ sg->length = mbuf->data_len - data_offset;
+
+ /* remaining i/p segs */
+ while ((data_len = data_len - sg->length) &&
+ (mbuf = mbuf->next)) {
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ if (data_len > mbuf->data_len)
+ sg->length = mbuf->data_len;
+ else
+ sg->length = data_len;
+ }
}
if (is_decode(ses)) {
@@ -908,9 +1084,6 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
qm_sg_entry_set64(sg, start_addr);
sg->length = ses->digest_length;
in_sg->length += ses->digest_length;
- } else {
- /* Digest calculation case */
- sg->length -= ses->digest_length;
}
sg->final = 1;
cpu_to_hw_sg(sg);
@@ -934,9 +1107,24 @@ build_auth_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
struct rte_mbuf *mbuf = sym->m_src;
struct dpaa_sec_job *cf;
struct dpaa_sec_op_ctx *ctx;
- struct qm_sg_entry *sg;
+ struct qm_sg_entry *sg, *in_sg;
rte_iova_t start_addr;
uint8_t *old_digest;
+ int data_len, data_offset;
+
+ data_len = sym->auth.data.length;
+ data_offset = sym->auth.data.offset;
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("AUTH: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
@@ -954,36 +1142,55 @@ build_auth_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
cpu_to_hw_sg(sg);
/* input */
- sg = &cf->sg[1];
- if (is_decode(ses)) {
- /* need to extend the input to a compound frame */
- sg->extension = 1;
- qm_sg_entry_set64(sg, dpaa_mem_vtop(&cf->sg[2]));
- sg->length = sym->auth.data.length + ses->digest_length;
- sg->final = 1;
+ in_sg = &cf->sg[1];
+ /* need to extend the input to a compound frame */
+ in_sg->extension = 1;
+ in_sg->final = 1;
+ in_sg->length = data_len;
+ qm_sg_entry_set64(in_sg, dpaa_mem_vtop(&cf->sg[2]));
+ sg = &cf->sg[2];
+
+ if (ses->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ ses->iv.offset);
+
+ if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sg->length = 12;
+ } else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sg->length = 8;
+ } else {
+ sg->length = ses->iv.length;
+ }
+ qm_sg_entry_set64(sg, dpaa_mem_vtop(iv_ptr));
+ in_sg->length += sg->length;
cpu_to_hw_sg(sg);
+ sg++;
+ }
- sg = &cf->sg[2];
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = data_offset;
+ sg->length = data_len;
+
+ if (is_decode(ses)) {
+ /* Digest verification case */
+ cpu_to_hw_sg(sg);
/* hash result or digest, save digest first */
rte_memcpy(old_digest, sym->auth.digest.data,
- ses->digest_length);
- qm_sg_entry_set64(sg, start_addr + sym->auth.data.offset);
- sg->length = sym->auth.data.length;
- cpu_to_hw_sg(sg);
-
+ ses->digest_length);
/* let's check digest by hw */
start_addr = dpaa_mem_vtop(old_digest);
sg++;
qm_sg_entry_set64(sg, start_addr);
sg->length = ses->digest_length;
- sg->final = 1;
- cpu_to_hw_sg(sg);
- } else {
- qm_sg_entry_set64(sg, start_addr + sym->auth.data.offset);
- sg->length = sym->auth.data.length;
- sg->final = 1;
- cpu_to_hw_sg(sg);
+ in_sg->length += ses->digest_length;
}
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+ cpu_to_hw_sg(in_sg);
return cf;
}
@@ -999,6 +1206,21 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
uint8_t req_segs;
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
+ int data_len, data_offset;
+
+ data_len = sym->cipher.data.length;
+ data_offset = sym->cipher.data.offset;
+
+ if (ses->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ ses->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
if (sym->m_dst) {
mbuf = sym->m_dst;
@@ -1007,7 +1229,6 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
mbuf = sym->m_src;
req_segs = mbuf->nb_segs * 2 + 3;
}
-
if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Cipher: Max sec segs supported is %d",
MAX_SG_ENTRIES);
@@ -1024,15 +1245,15 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* output */
out_sg = &cf->sg[0];
out_sg->extension = 1;
- out_sg->length = sym->cipher.data.length;
+ out_sg->length = data_len;
qm_sg_entry_set64(out_sg, dpaa_mem_vtop(&cf->sg[2]));
cpu_to_hw_sg(out_sg);
/* 1st seg */
sg = &cf->sg[2];
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->cipher.data.offset;
- sg->offset = sym->cipher.data.offset;
+ sg->length = mbuf->data_len - data_offset;
+ sg->offset = data_offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1051,7 +1272,7 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
in_sg = &cf->sg[1];
in_sg->extension = 1;
in_sg->final = 1;
- in_sg->length = sym->cipher.data.length + ses->iv.length;
+ in_sg->length = data_len + ses->iv.length;
sg++;
qm_sg_entry_set64(in_sg, dpaa_mem_vtop(sg));
@@ -1065,8 +1286,8 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* 1st seg */
sg++;
qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
- sg->length = mbuf->data_len - sym->cipher.data.offset;
- sg->offset = sym->cipher.data.offset;
+ sg->length = mbuf->data_len - data_offset;
+ sg->offset = data_offset;
/* Successive segs */
mbuf = mbuf->next;
@@ -1093,6 +1314,21 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
rte_iova_t src_start_addr, dst_start_addr;
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
+ int data_len, data_offset;
+
+ data_len = sym->cipher.data.length;
+ data_offset = sym->cipher.data.offset;
+
+ if (ses->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ ses->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return NULL;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
@@ -1110,8 +1346,8 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* output */
sg = &cf->sg[0];
- qm_sg_entry_set64(sg, dst_start_addr + sym->cipher.data.offset);
- sg->length = sym->cipher.data.length + ses->iv.length;
+ qm_sg_entry_set64(sg, dst_start_addr + data_offset);
+ sg->length = data_len + ses->iv.length;
cpu_to_hw_sg(sg);
/* input */
@@ -1120,7 +1356,7 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
/* need to extend the input to a compound frame */
sg->extension = 1;
sg->final = 1;
- sg->length = sym->cipher.data.length + ses->iv.length;
+ sg->length = data_len + ses->iv.length;
qm_sg_entry_set64(sg, dpaa_mem_vtop(&cf->sg[2]));
cpu_to_hw_sg(sg);
@@ -1130,8 +1366,8 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
cpu_to_hw_sg(sg);
sg++;
- qm_sg_entry_set64(sg, src_start_addr + sym->cipher.data.offset);
- sg->length = sym->cipher.data.length;
+ qm_sg_entry_set64(sg, src_start_addr + data_offset);
+ sg->length = data_len;
sg->final = 1;
cpu_to_hw_sg(sg);
@@ -2066,6 +2302,10 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev __rte_unused,
}
session->auth_key.length = xform->auth.key.length;
session->digest_length = xform->auth.digest_length;
+ if (session->cipher_alg == RTE_CRYPTO_CIPHER_NULL) {
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ }
memcpy(session->auth_key.data, xform->auth.key.data,
xform->auth.key.length);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 009ab7536..149923aa1 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -416,7 +416,96 @@ static const struct rte_cryptodev_capabilities dpaa_sec_capabilities[] = {
}, }
}, }
},
-
+ { /* SNOW 3G (UIA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SNOW 3G (UEA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EEA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EIA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.17.1
* [dpdk-dev] [PATCH v2 09/10] test/crypto: enable snow3G and zuc cases for dpaa
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (7 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
2019-10-15 12:50 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Akhil Goyal
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
This patch adds the SNOW 3G and ZUC cipher-only and auth-only test cases
for dpaa_sec.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test/test_cryptodev.c | 64 +++++++++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ec0473016..a3ae2e2f5 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12672,6 +12672,70 @@ static struct unit_test_suite cryptodev_dpaa_sec_testsuite = {
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_authenticated_decryption_oop_test_case_1),
+ /** SNOW 3G encrypt only (UEA2) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_5),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1_oop),
+
+ /** SNOW 3G decrypt only (UEA2) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_5),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_generate_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_hash_verify_test_case_3),
+
+ /** ZUC encrypt only (EEA3) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_encryption_test_case_5),
+
+ /** ZUC authenticate (EIA3) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_zuc_hash_generate_test_case_8),
+
/** Negative tests */
TEST_CASE_ST(ut_setup, ut_teardown,
test_AES_GCM_auth_encryption_fail_iv_corrupt),
--
2.17.1
* [dpdk-dev] [PATCH v2 10/10] crypto/dpaa_sec: code reorg for better session mgmt
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (8 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
@ 2019-10-14 6:53 ` Hemant Agrawal
2019-10-15 12:50 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Akhil Goyal
10 siblings, 0 replies; 27+ messages in thread
From: Hemant Agrawal @ 2019-10-14 6:53 UTC (permalink / raw)
To: dev; +Cc: akhil.goyal, Hemant Agrawal
The session-related parameters shall be populated at session creation
time only.
At runtime, on the first packet, the CDB preparation should simply
reference the session data instead of re-interpreting it again.
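A minimal sketch of that pattern (the helper below is hypothetical; it mirrors the ses->cipher_key.alg/algmode fields used in the diff and assumes the driver-internal dpaa_sec.h, the RTA descriptor headers and <errno.h> are included): resolve the CAAM algorithm selectors once at session setup so that dpaa_sec_prep_cdb() only copies them.

/* Resolve CAAM selectors at session-create time; CDB preparation then
 * just copies ses->cipher_key.alg/algmode, as in the hunks below.
 */
static int
sketch_cipher_session_init(dpaa_sec_session *ses,
                           const struct rte_crypto_cipher_xform *xform)
{
        switch (xform->algo) {
        case RTE_CRYPTO_CIPHER_AES_CBC:
                ses->cipher_key.alg = OP_ALG_ALGSEL_AES;
                ses->cipher_key.algmode = OP_ALG_AAI_CBC;
                break;
        case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
                ses->cipher_key.alg = OP_ALG_ALGSEL_SNOW_F8;
                break;
        case RTE_CRYPTO_CIPHER_ZUC_EEA3:
                ses->cipher_key.alg = OP_ALG_ALGSEL_ZUCE;
                break;
        default:
                return -ENOTSUP;
        }
        ses->cipher_alg = xform->algo;
        return 0;
}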
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 657 +++++++++++++++--------------
drivers/crypto/dpaa_sec/dpaa_sec.h | 18 +-
2 files changed, 348 insertions(+), 327 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 970cdf0cc..61bd2501d 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -242,44 +242,6 @@ dpaa_sec_init_tx(struct qman_fq *fq)
return ret;
}
-static inline int is_cipher_only(dpaa_sec_session *ses)
-{
- return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
- (ses->auth_alg == RTE_CRYPTO_AUTH_NULL));
-}
-
-static inline int is_auth_only(dpaa_sec_session *ses)
-{
- return ((ses->cipher_alg == RTE_CRYPTO_CIPHER_NULL) &&
- (ses->auth_alg != RTE_CRYPTO_AUTH_NULL));
-}
-
-static inline int is_aead(dpaa_sec_session *ses)
-{
- return ((ses->cipher_alg == 0) &&
- (ses->auth_alg == 0) &&
- (ses->aead_alg != 0));
-}
-
-static inline int is_auth_cipher(dpaa_sec_session *ses)
-{
- return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
- (ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
- (ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
- (ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC) &&
- (ses->aead_alg == 0));
-}
-
-static inline int is_proto_ipsec(dpaa_sec_session *ses)
-{
- return (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC);
-}
-
-static inline int is_proto_pdcp(dpaa_sec_session *ses)
-{
- return (ses->proto_alg == RTE_SECURITY_PROTOCOL_PDCP);
-}
-
static inline int is_encode(dpaa_sec_session *ses)
{
return ses->dir == DIR_ENC;
@@ -290,102 +252,6 @@ static inline int is_decode(dpaa_sec_session *ses)
return ses->dir == DIR_DEC;
}
-static inline void
-caam_auth_alg(dpaa_sec_session *ses, struct alginfo *alginfo_a)
-{
- switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_NULL:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_NULL : 0;
- ses->digest_length = 0;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_MD5_96 : OP_ALG_ALGSEL_MD5;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA1_96 : OP_ALG_ALGSEL_SHA1;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA1_160 : OP_ALG_ALGSEL_SHA224;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_256_128 : OP_ALG_ALGSEL_SHA256;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_384_192 : OP_ALG_ALGSEL_SHA384;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- alginfo_a->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_HMAC_SHA2_512_256 : OP_ALG_ALGSEL_SHA512;
- alginfo_a->algmode = OP_ALG_AAI_HMAC;
- break;
- default:
- DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
- }
-}
-
-static inline void
-caam_cipher_alg(dpaa_sec_session *ses, struct alginfo *alginfo_c)
-{
- switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_NULL:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_NULL : 0;
- break;
- case RTE_CRYPTO_CIPHER_AES_CBC:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_AES_CBC : OP_ALG_ALGSEL_AES;
- alginfo_c->algmode = OP_ALG_AAI_CBC;
- break;
- case RTE_CRYPTO_CIPHER_3DES_CBC:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_3DES : OP_ALG_ALGSEL_3DES;
- alginfo_c->algmode = OP_ALG_AAI_CBC;
- break;
- case RTE_CRYPTO_CIPHER_AES_CTR:
- alginfo_c->algtype =
- (ses->proto_alg == RTE_SECURITY_PROTOCOL_IPSEC) ?
- OP_PCL_IPSEC_AES_CTR : OP_ALG_ALGSEL_AES;
- alginfo_c->algmode = OP_ALG_AAI_CTR;
- break;
- default:
- DPAA_SEC_ERR("unsupported cipher alg %d", ses->cipher_alg);
- }
-}
-
-static inline void
-caam_aead_alg(dpaa_sec_session *ses, struct alginfo *alginfo)
-{
- switch (ses->aead_alg) {
- case RTE_CRYPTO_AEAD_AES_GCM:
- alginfo->algtype = OP_ALG_ALGSEL_AES;
- alginfo->algmode = OP_ALG_AAI_GCM;
- break;
- default:
- DPAA_SEC_ERR("unsupported AEAD alg %d", ses->aead_alg);
- }
-}
-
static int
dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
{
@@ -400,58 +266,24 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
int swap = true;
#endif
- switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- cipherdata.algtype = PDCP_CIPHER_TYPE_SNOW;
- break;
- case RTE_CRYPTO_CIPHER_ZUC_EEA3:
- cipherdata.algtype = PDCP_CIPHER_TYPE_ZUC;
- break;
- case RTE_CRYPTO_CIPHER_AES_CTR:
- cipherdata.algtype = PDCP_CIPHER_TYPE_AES;
- break;
- case RTE_CRYPTO_CIPHER_NULL:
- cipherdata.algtype = PDCP_CIPHER_TYPE_NULL;
- break;
- default:
- DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
- ses->cipher_alg);
- return -1;
- }
-
cipherdata.key = (size_t)ses->cipher_key.data;
cipherdata.keylen = ses->cipher_key.length;
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
+ cipherdata.algtype = ses->cipher_key.alg;
+ cipherdata.algmode = ses->cipher_key.algmode;
cdb->sh_desc[0] = cipherdata.keylen;
cdb->sh_desc[1] = 0;
cdb->sh_desc[2] = 0;
if (ses->auth_alg) {
- switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
- authdata.algtype = PDCP_AUTH_TYPE_SNOW;
- break;
- case RTE_CRYPTO_AUTH_ZUC_EIA3:
- authdata.algtype = PDCP_AUTH_TYPE_ZUC;
- break;
- case RTE_CRYPTO_AUTH_AES_CMAC:
- authdata.algtype = PDCP_AUTH_TYPE_AES;
- break;
- case RTE_CRYPTO_AUTH_NULL:
- authdata.algtype = PDCP_AUTH_TYPE_NULL;
- break;
- default:
- DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
- ses->auth_alg);
- return -1;
- }
-
authdata.key = (size_t)ses->auth_key.data;
authdata.keylen = ses->auth_key.length;
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ authdata.algtype = ses->auth_key.alg;
+ authdata.algmode = ses->auth_key.algmode;
p_authdata = &authdata;
@@ -541,27 +373,19 @@ dpaa_sec_prep_ipsec_cdb(dpaa_sec_session *ses)
int swap = true;
#endif
- caam_cipher_alg(ses, &cipherdata);
- if (cipherdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
cipherdata.key = (size_t)ses->cipher_key.data;
cipherdata.keylen = ses->cipher_key.length;
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
-
- caam_auth_alg(ses, &authdata);
- if (authdata.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
- return -ENOTSUP;
- }
+ cipherdata.algtype = ses->cipher_key.alg;
+ cipherdata.algmode = ses->cipher_key.algmode;
authdata.key = (size_t)ses->auth_key.data;
authdata.keylen = ses->auth_key.length;
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ authdata.algtype = ses->auth_key.alg;
+ authdata.algmode = ses->auth_key.algmode;
cdb->sh_desc[0] = cipherdata.keylen;
cdb->sh_desc[1] = authdata.keylen;
@@ -625,58 +449,26 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
memset(cdb, 0, sizeof(struct sec_cdb));
- if (is_proto_ipsec(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_IPSEC:
shared_desc_len = dpaa_sec_prep_ipsec_cdb(ses);
- } else if (is_proto_pdcp(ses)) {
+ break;
+ case DPAA_SEC_PDCP:
shared_desc_len = dpaa_sec_prep_pdcp_cdb(ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
+ alginfo_c.algtype = ses->cipher_key.alg;
+ alginfo_c.algmode = ses->cipher_key.algmode;
+
switch (ses->cipher_alg) {
- case RTE_CRYPTO_CIPHER_NULL:
- alginfo_c.algtype = 0;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_AES_CBC:
- alginfo_c.algtype = OP_ALG_ALGSEL_AES;
- alginfo_c.algmode = OP_ALG_AAI_CBC;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
- alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
- alginfo_c.algmode = OP_ALG_AAI_CBC;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_AES_CTR:
- alginfo_c.algtype = OP_ALG_ALGSEL_AES;
- alginfo_c.algmode = OP_ALG_AAI_CTR;
- shared_desc_len = cnstr_shdsc_blkcipher(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_c,
- NULL,
- ses->iv.length,
- ses->dir);
- break;
case RTE_CRYPTO_CIPHER_3DES_CTR:
- alginfo_c.algtype = OP_ALG_ALGSEL_3DES;
- alginfo_c.algmode = OP_ALG_AAI_CTR;
shared_desc_len = cnstr_shdsc_blkcipher(
cdb->sh_desc, true,
swap, SHR_NEVER, &alginfo_c,
@@ -685,14 +477,12 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->dir);
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- alginfo_c.algtype = OP_ALG_ALGSEL_SNOW_F8;
shared_desc_len = cnstr_shdsc_snow_f8(
cdb->sh_desc, true, swap,
&alginfo_c,
ses->dir);
break;
case RTE_CRYPTO_CIPHER_ZUC_EEA3:
- alginfo_c.algtype = OP_ALG_ALGSEL_ZUCE;
shared_desc_len = cnstr_shdsc_zuce(
cdb->sh_desc, true, swap,
&alginfo_c,
@@ -703,69 +493,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->cipher_alg);
return -ENOTSUP;
}
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
+ alginfo_a.algtype = ses->auth_key.alg;
+ alginfo_a.algmode = ses->auth_key.algmode;
switch (ses->auth_alg) {
- case RTE_CRYPTO_AUTH_NULL:
- alginfo_a.algtype = 0;
- ses->digest_length = 0;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_MD5;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA1;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA224;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA256;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA384;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
- shared_desc_len = cnstr_shdsc_hmac(
- cdb->sh_desc, true,
- swap, SHR_NEVER, &alginfo_a,
- !ses->dir,
- ses->digest_length);
- break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
- alginfo_a.algtype = OP_ALG_ALGSEL_SHA512;
- alginfo_a.algmode = OP_ALG_AAI_HMAC;
shared_desc_len = cnstr_shdsc_hmac(
cdb->sh_desc, true,
swap, SHR_NEVER, &alginfo_a,
@@ -773,9 +515,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->digest_length);
break;
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
- alginfo_a.algtype = OP_ALG_ALGSEL_SNOW_F9;
- alginfo_a.algmode = OP_ALG_AAI_F9;
- ses->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
shared_desc_len = cnstr_shdsc_snow_f9(
cdb->sh_desc, true, swap,
&alginfo_a,
@@ -783,9 +522,6 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
ses->digest_length);
break;
case RTE_CRYPTO_AUTH_ZUC_EIA3:
- alginfo_a.algtype = OP_ALG_ALGSEL_ZUCA;
- alginfo_a.algmode = OP_ALG_AAI_F9;
- ses->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
shared_desc_len = cnstr_shdsc_zuca(
cdb->sh_desc, true, swap,
&alginfo_a,
@@ -795,8 +531,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
default:
DPAA_SEC_ERR("unsupported auth alg %u", ses->auth_alg);
}
- } else if (is_aead(ses)) {
- caam_aead_alg(ses, &alginfo);
+ break;
+ case DPAA_SEC_AEAD:
if (alginfo.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
DPAA_SEC_ERR("not supported aead alg");
return -ENOTSUP;
@@ -805,6 +541,8 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
alginfo.keylen = ses->aead_key.length;
alginfo.key_enc_flags = 0;
alginfo.key_type = RTA_DATA_IMM;
+ alginfo.algtype = ses->aead_key.alg;
+ alginfo.algmode = ses->aead_key.algmode;
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_gcm_encap(
@@ -818,28 +556,21 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
&alginfo,
ses->iv.length,
ses->digest_length);
- } else {
- caam_cipher_alg(ses, &alginfo_c);
- if (alginfo_c.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported cipher alg");
- return -ENOTSUP;
- }
-
+ break;
+ case DPAA_SEC_CIPHER_HASH:
alginfo_c.key = (size_t)ses->cipher_key.data;
alginfo_c.keylen = ses->cipher_key.length;
alginfo_c.key_enc_flags = 0;
alginfo_c.key_type = RTA_DATA_IMM;
-
- caam_auth_alg(ses, &alginfo_a);
- if (alginfo_a.algtype == (unsigned int)DPAA_SEC_ALG_UNSUPPORT) {
- DPAA_SEC_ERR("not supported auth alg");
- return -ENOTSUP;
- }
+ alginfo_c.algtype = ses->cipher_key.alg;
+ alginfo_c.algmode = ses->cipher_key.algmode;
alginfo_a.key = (size_t)ses->auth_key.data;
alginfo_a.keylen = ses->auth_key.length;
alginfo_a.key_enc_flags = 0;
alginfo_a.key_type = RTA_DATA_IMM;
+ alginfo_a.algtype = ses->auth_key.alg;
+ alginfo_a.algmode = ses->auth_key.algmode;
cdb->sh_desc[0] = alginfo_c.keylen;
cdb->sh_desc[1] = alginfo_a.keylen;
@@ -876,6 +607,11 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
ses->iv.length,
ses->digest_length, ses->dir);
+ break;
+ case DPAA_SEC_HASH_CIPHER:
+ default:
+ DPAA_SEC_ERR("error: Unsupported session");
+ return -ENOTSUP;
}
if (shared_desc_len < 0) {
@@ -2053,18 +1789,22 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
((op->sym->m_dst == NULL) ||
rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
- if (is_proto_ipsec(ses)) {
- cf = build_proto(op, ses);
- } else if (is_proto_pdcp(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_PDCP:
+ case DPAA_SEC_IPSEC:
cf = build_proto(op, ses);
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
cf = build_auth_only(op, ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
cf = build_cipher_only(op, ses);
- } else if (is_aead(ses)) {
+ break;
+ case DPAA_SEC_AEAD:
cf = build_cipher_auth_gcm(op, ses);
auth_hdr_len = ses->auth_only_len;
- } else if (is_auth_cipher(ses)) {
+ break;
+ case DPAA_SEC_CIPHER_HASH:
auth_hdr_len =
op->sym->cipher.data.offset
- op->sym->auth.data.offset;
@@ -2073,23 +1813,30 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
- op->sym->cipher.data.length
- auth_hdr_len;
cf = build_cipher_auth(op, ses);
- } else {
+ break;
+ default:
DPAA_SEC_DP_ERR("not supported ops");
frames_to_send = loop;
nb_ops = loop;
goto send_pkts;
}
} else {
- if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
+ switch (ses->ctxt) {
+ case DPAA_SEC_PDCP:
+ case DPAA_SEC_IPSEC:
cf = build_proto_sg(op, ses);
- } else if (is_auth_only(ses)) {
+ break;
+ case DPAA_SEC_AUTH:
cf = build_auth_only_sg(op, ses);
- } else if (is_cipher_only(ses)) {
+ break;
+ case DPAA_SEC_CIPHER:
cf = build_cipher_only_sg(op, ses);
- } else if (is_aead(ses)) {
+ break;
+ case DPAA_SEC_AEAD:
cf = build_cipher_auth_gcm_sg(op, ses);
auth_hdr_len = ses->auth_only_len;
- } else if (is_auth_cipher(ses)) {
+ break;
+ case DPAA_SEC_CIPHER_HASH:
auth_hdr_len =
op->sym->cipher.data.offset
- op->sym->auth.data.offset;
@@ -2098,7 +1845,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
- op->sym->cipher.data.length
- auth_hdr_len;
cf = build_cipher_auth_sg(op, ses);
- } else {
+ break;
+ default:
DPAA_SEC_DP_ERR("not supported ops");
frames_to_send = loop;
nb_ops = loop;
@@ -2132,15 +1880,14 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
/* In case of PDCP, per packet HFN is stored in
* mbuf priv after sym_op.
*/
- if (is_proto_pdcp(ses) && ses->pdcp.hfn_ovd) {
+ if ((ses->ctxt == DPAA_SEC_PDCP) && ses->pdcp.hfn_ovd) {
fd->cmd = 0x80000000 |
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset));
- DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u,%u\n",
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset)),
- ses->pdcp.hfn_ovd,
- is_proto_pdcp(ses));
+ ses->pdcp.hfn_ovd);
}
}
@@ -2282,6 +2029,31 @@ dpaa_sec_cipher_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->cipher_key.data, xform->cipher.key.data,
xform->cipher.key.length);
+ switch (xform->cipher.algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ session->cipher_key.alg = OP_ALG_ALGSEL_SNOW_F8;
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ session->cipher_key.alg = OP_ALG_ALGSEL_ZUCE;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ xform->cipher.algo);
+ rte_free(session->cipher_key.data);
+ return -1;
+ }
session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
DIR_ENC : DIR_DEC;
@@ -2309,18 +2081,165 @@ dpaa_sec_auth_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->auth_key.data, xform->auth.key.data,
xform->auth.key.length);
+
+ switch (xform->auth.algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_MD5;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ session->auth_key.alg = OP_ALG_ALGSEL_SNOW_F9;
+ session->auth_key.algmode = OP_ALG_AAI_F9;
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ session->auth_key.alg = OP_ALG_ALGSEL_ZUCA;
+ session->auth_key.algmode = OP_ALG_AAI_F9;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
+ xform->auth.algo);
+ rte_free(session->auth_key.data);
+ return -1;
+ }
+
session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
DIR_ENC : DIR_DEC;
return 0;
}
+static int
+dpaa_sec_chain_init(struct rte_cryptodev *dev __rte_unused,
+ struct rte_crypto_sym_xform *xform,
+ dpaa_sec_session *session)
+{
+
+ struct rte_crypto_cipher_xform *cipher_xform;
+ struct rte_crypto_auth_xform *auth_xform;
+
+ if (session->auth_cipher_text) {
+ cipher_xform = &xform->cipher;
+ auth_xform = &xform->next->auth;
+ } else {
+ cipher_xform = &xform->next->cipher;
+ auth_xform = &xform->auth;
+ }
+
+ /* Set IV parameters */
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
+ session->cipher_key.data = rte_zmalloc(NULL, cipher_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->cipher_key.data == NULL && cipher_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for cipher key");
+ return -1;
+ }
+ session->cipher_key.length = cipher_xform->key.length;
+ session->auth_key.data = rte_zmalloc(NULL, auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (session->auth_key.data == NULL && auth_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ return -ENOMEM;
+ }
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->cipher_key.data, cipher_xform->key.data,
+ cipher_xform->key.length);
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+
+ session->digest_length = auth_xform->digest_length;
+ session->auth_alg = auth_xform->algo;
+
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA1;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_MD5;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA224;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA384;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ session->auth_key.alg = OP_ALG_ALGSEL_SHA512;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported Auth specified %u",
+ auth_xform->algo);
+ goto error_out;
+ }
+
+ session->cipher_alg = cipher_xform->algo;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_ALG_ALGSEL_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_ALG_ALGSEL_AES;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ cipher_xform->algo);
+ goto error_out;
+ }
+ session->dir = (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ DIR_ENC : DIR_DEC;
+ return 0;
+
+error_out:
+ rte_free(session->cipher_key.data);
+ rte_free(session->auth_key.data);
+ return -1;
+}
+
static int
dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_sym_xform *xform,
dpaa_sec_session *session)
{
session->aead_alg = xform->aead.algo;
+ session->ctxt = DPAA_SEC_AEAD;
session->iv.length = xform->aead.iv.length;
session->iv.offset = xform->aead.iv.offset;
session->auth_only_len = xform->aead.aad_length;
@@ -2335,6 +2254,18 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
memcpy(session->aead_key.data, xform->aead.key.data,
xform->aead.key.length);
+
+ switch (session->aead_alg) {
+ case RTE_CRYPTO_AEAD_AES_GCM:
+ session->aead_key.alg = OP_ALG_ALGSEL_AES;
+ session->aead_key.algmode = OP_ALG_AAI_GCM;
+ break;
+ default:
+ DPAA_SEC_ERR("unsupported AEAD alg %d", session->aead_alg);
+ rte_free(session->aead_key.data);
+ return -ENOMEM;
+ }
+
session->dir = (xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
DIR_ENC : DIR_DEC;
@@ -2422,31 +2353,34 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
/* Cipher Only */
if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ session->ctxt = DPAA_SEC_CIPHER;
dpaa_sec_cipher_init(dev, xform, session);
/* Authentication Only */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
xform->next == NULL) {
session->cipher_alg = RTE_CRYPTO_CIPHER_NULL;
+ session->ctxt = DPAA_SEC_AUTH;
dpaa_sec_auth_init(dev, xform, session);
/* Cipher then Authenticate */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
if (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
- dpaa_sec_cipher_init(dev, xform, session);
- dpaa_sec_auth_init(dev, xform->next, session);
+ session->ctxt = DPAA_SEC_CIPHER_HASH;
+ session->auth_cipher_text = 1;
+ dpaa_sec_chain_init(dev, xform, session);
} else {
DPAA_SEC_ERR("Not supported: Auth then Cipher");
return -EINVAL;
}
-
/* Authenticate then Cipher */
} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
if (xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
- dpaa_sec_auth_init(dev, xform, session);
- dpaa_sec_cipher_init(dev, xform->next, session);
+ session->ctxt = DPAA_SEC_CIPHER_HASH;
+ session->auth_cipher_text = 0;
+ dpaa_sec_chain_init(dev, xform, session);
} else {
DPAA_SEC_ERR("Not supported: Auth then Cipher");
return -EINVAL;
@@ -2574,6 +2508,7 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
cipher_xform = &conf->crypto_xform->next->cipher;
}
session->proto_alg = conf->protocol;
+ session->ctxt = DPAA_SEC_IPSEC;
if (cipher_xform && cipher_xform->algo != RTE_CRYPTO_CIPHER_NULL) {
session->cipher_key.data = rte_zmalloc(NULL,
@@ -2589,9 +2524,20 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
session->cipher_key.length = cipher_xform->key.length;
switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ session->cipher_key.alg = OP_PCL_IPSEC_NULL;
+ break;
case RTE_CRYPTO_CIPHER_AES_CBC:
+ session->cipher_key.alg = OP_PCL_IPSEC_AES_CBC;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
+ session->cipher_key.alg = OP_PCL_IPSEC_3DES;
+ session->cipher_key.algmode = OP_ALG_AAI_CBC;
+ break;
case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = OP_PCL_IPSEC_AES_CTR;
+ session->cipher_key.algmode = OP_ALG_AAI_CTR;
break;
default:
DPAA_SEC_ERR("Crypto: Unsupported Cipher alg %u",
@@ -2620,12 +2566,33 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
session->auth_key.length = auth_xform->key.length;
switch (auth_xform->algo) {
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ case RTE_CRYPTO_AUTH_NULL:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_NULL;
+ session->digest_length = 0;
+ break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_MD5_96;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_96;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA1_160;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_256_128;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_384_192;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
+ break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
- case RTE_CRYPTO_AUTH_AES_CMAC:
+ session->auth_key.alg = OP_PCL_IPSEC_HMAC_SHA2_512_256;
+ session->auth_key.algmode = OP_ALG_AAI_HMAC;
break;
default:
DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
@@ -2766,7 +2733,28 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
session->proto_alg = conf->protocol;
+ session->ctxt = DPAA_SEC_PDCP;
+
if (cipher_xform) {
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_SNOW;
+ break;
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_ZUC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_AES;
+ break;
+ case RTE_CRYPTO_CIPHER_NULL:
+ session->cipher_key.alg = PDCP_CIPHER_TYPE_NULL;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Undefined Cipher specified %u",
+ session->cipher_alg);
+ return -1;
+ }
+
session->cipher_key.data = rte_zmalloc(NULL,
cipher_xform->key.length,
RTE_CACHE_LINE_SIZE);
@@ -2798,6 +2786,25 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
if (auth_xform) {
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ session->auth_key.alg = PDCP_AUTH_TYPE_SNOW;
+ break;
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ session->auth_key.alg = PDCP_AUTH_TYPE_ZUC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ session->auth_key.alg = PDCP_AUTH_TYPE_AES;
+ break;
+ case RTE_CRYPTO_AUTH_NULL:
+ session->auth_key.alg = PDCP_AUTH_TYPE_NULL;
+ break;
+ default:
+ DPAA_SEC_ERR("Crypto: Unsupported auth alg %u",
+ session->auth_alg);
+ rte_free(session->cipher_key.data);
+ return -1;
+ }
session->auth_key.data = rte_zmalloc(NULL,
auth_xform->key.length,
RTE_CACHE_LINE_SIZE);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 149923aa1..a661d5a56 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -38,14 +38,19 @@ enum dpaa_sec_op_type {
DPAA_SEC_NONE, /*!< No Cipher operations*/
DPAA_SEC_CIPHER,/*!< CIPHER operations */
DPAA_SEC_AUTH, /*!< Authentication Operations */
- DPAA_SEC_AEAD, /*!< Authenticated Encryption with associated data */
+ DPAA_SEC_AEAD, /*!< AEAD (AES-GCM/CCM) type operations */
+ DPAA_SEC_CIPHER_HASH, /*!< Authenticated Encryption with
+ * associated data
+ */
+ DPAA_SEC_HASH_CIPHER, /*!< Encryption with Authenticated
+ * associated data
+ */
DPAA_SEC_IPSEC, /*!< IPSEC protocol operations*/
DPAA_SEC_PDCP, /*!< PDCP protocol operations*/
DPAA_SEC_PKC, /*!< Public Key Cryptographic Operations */
DPAA_SEC_MAX
};
-
#define DPAA_SEC_MAX_DESC_SIZE 64
/* code or cmd block to caam */
struct sec_cdb {
@@ -113,6 +118,7 @@ struct sec_pdcp_ctxt {
typedef struct dpaa_sec_session_entry {
uint8_t dir; /*!< Operation Direction */
+ uint8_t ctxt; /*!< Session Context Type */
enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
enum rte_crypto_aead_algorithm aead_alg; /*!< AEAD Algorithm*/
@@ -121,15 +127,21 @@ typedef struct dpaa_sec_session_entry {
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} aead_key;
struct {
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} cipher_key;
struct {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
+ uint32_t alg;
+ uint32_t algmode;
} auth_key;
};
};
@@ -148,6 +160,8 @@ typedef struct dpaa_sec_session_entry {
struct ip ip4_hdr;
struct rte_ipv6_hdr ip6_hdr;
};
+ uint8_t auth_cipher_text;
+ /**< Authenticate/cipher ordering */
};
struct sec_pdcp_ctxt pdcp;
};
--
2.17.1
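
The central idea of the session-management reorg in the diff above can be summarized in a minimal, self-contained sketch: classify each session once into a context type, resolve the hardware algorithm selector/mode at session-setup time, and let descriptor preparation and enqueue reduce to a single switch on the cached context. All sketch_* names and numeric selector values below are hypothetical simplifications for illustration only, not the driver's real symbols.

/*
 * Sketch of the context-dispatch pattern (hypothetical names/values).
 */
#include <stdint.h>
#include <stdio.h>

enum sketch_ctxt {
	SKETCH_CIPHER,		/* cipher-only session */
	SKETCH_AUTH,		/* auth-only session */
	SKETCH_CIPHER_HASH,	/* cipher + auth chained session */
	SKETCH_AEAD,		/* AES-GCM style AEAD session */
	SKETCH_PROTO,		/* protocol offload (IPsec/PDCP) */
};

struct sketch_key {
	uint32_t alg;		/* hardware algorithm selector */
	uint32_t algmode;	/* hardware algorithm mode */
};

struct sketch_session {
	enum sketch_ctxt ctxt;	/* resolved once at session setup */
	struct sketch_key cipher_key;
	struct sketch_key auth_key;
};

/* Session setup: translate the API-level algorithms exactly once. */
static void
sketch_session_init(struct sketch_session *s, int is_chain)
{
	s->ctxt = is_chain ? SKETCH_CIPHER_HASH : SKETCH_CIPHER;
	s->cipher_key.alg = 0x10;	/* hypothetical AES selector */
	s->cipher_key.algmode = 0x1;	/* hypothetical CBC mode */
	if (is_chain) {
		s->auth_key.alg = 0x41;		/* hypothetical SHA-1 selector */
		s->auth_key.algmode = 0x2;	/* hypothetical HMAC mode */
	}
}

/* Descriptor prep / enqueue: one switch on the cached context, with no
 * re-parsing of API-level algorithm enums per packet.
 */
static const char *
sketch_prepare(const struct sketch_session *s)
{
	switch (s->ctxt) {
	case SKETCH_CIPHER:
		return "build cipher-only descriptor";
	case SKETCH_AUTH:
		return "build auth-only descriptor";
	case SKETCH_CIPHER_HASH:
		return "build cipher+auth descriptor";
	case SKETCH_AEAD:
		return "build GCM descriptor";
	case SKETCH_PROTO:
		return "build protocol (IPsec/PDCP) descriptor";
	default:
		return "unsupported session";
	}
}

int main(void)
{
	struct sketch_session s = { 0 };

	sketch_session_init(&s, 1);
	printf("%s\n", sketch_prepare(&s));
	return 0;
}

In the driver itself, this shape appears as the switches on ses->ctxt in dpaa_sec_prep_cdb() and dpaa_sec_enqueue_burst(), with the selectors cached in the cipher_key, auth_key and aead_key fields of the session structure.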
* Re: [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
` (9 preceding siblings ...)
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
@ 2019-10-15 12:50 ` Akhil Goyal
2019-10-15 12:51 ` Akhil Goyal
10 siblings, 1 reply; 27+ messages in thread
From: Akhil Goyal @ 2019-10-15 12:50 UTC (permalink / raw)
To: Hemant Agrawal, dev
>
> This patch series largely contains:
> 1. fixes in crypto drivers
> 2. support for ESN-like cases
> 3. enabling SNOW 3G/ZUC for dpaa_sec
>
> v2: fix the clang compilation errors in 10/10
>
> Hemant Agrawal (7):
> test/crypto: fix PDCP test support
> crypto/dpaa2_sec: fix ipv6 support
> test/crypto: increase test cases support for dpaax
> test/crypto: add test to test ESN like case
> crypto/dpaa_sec: add support for snow3G and ZUC
> test/crypto: enable snow3G and zuc cases for dpaa
> crypto/dpaa_sec: code reorg for better session mgmt
>
> Vakul Garg (3):
> crypto/dpaa_sec: fix to check for aead as well
> crypto/dpaa2_sec: enhance gcm descs to not skip aadt
> crypto/dpaa2_sec: add support of auth trailer in cipher-auth
>
> app/test/test_cryptodev.c | 483 ++++++++++-
> app/test/test_cryptodev_aes_test_vectors.h | 67 ++
> doc/guides/cryptodevs/dpaa_sec.rst | 4 +
> doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
> drivers/crypto/caam_jr/caam_jr.c | 24 +-
> drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 47 +-
> drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 -
> drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++--
> drivers/crypto/dpaa_sec/dpaa_sec.c | 885 +++++++++++++-------
> drivers/crypto/dpaa_sec/dpaa_sec.h | 109 ++-
> 10 files changed, 1320 insertions(+), 480 deletions(-)
>
> --
> 2.17.1
Series
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
* Re: [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes
2019-10-15 12:50 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Akhil Goyal
@ 2019-10-15 12:51 ` Akhil Goyal
2019-10-15 13:30 ` Akhil Goyal
0 siblings, 1 reply; 27+ messages in thread
From: Akhil Goyal @ 2019-10-15 12:51 UTC (permalink / raw)
To: Hemant Agrawal, dev
>
> >
> > This patch series largely contains:
> > 1. fixes in crypto drivers
> > 2. support for ESN-like cases
> > 3. enabling SNOW 3G/ZUC for dpaa_sec
> >
> > v2: fix the clang compilation errors in 10/10
> >
> > Hemant Agrawal (7):
> > test/crypto: fix PDCP test support
> > crypto/dpaa2_sec: fix ipv6 support
> > test/crypto: increase test cases support for dpaax
> > test/crypto: add test to test ESN like case
> > crypto/dpaa_sec: add support for snow3G and ZUC
> > test/crypto: enable snow3G and zuc cases for dpaa
> > crypto/dpaa_sec: code reorg for better session mgmt
> >
> > Vakul Garg (3):
> > crypto/dpaa_sec: fix to check for aead as well
> > crypto/dpaa2_sec: enhance gcm descs to not skip aadt
> > crypto/dpaa2_sec: add support of auth trailer in cipher-auth
> >
> > app/test/test_cryptodev.c | 483 ++++++++++-
> > app/test/test_cryptodev_aes_test_vectors.h | 67 ++
> > doc/guides/cryptodevs/dpaa_sec.rst | 4 +
> > doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
> > drivers/crypto/caam_jr/caam_jr.c | 24 +-
> > drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 47 +-
> > drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 -
> > drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++--
> > drivers/crypto/dpaa_sec/dpaa_sec.c | 885 +++++++++++++-------
> > drivers/crypto/dpaa_sec/dpaa_sec.h | 109 ++-
> > 10 files changed, 1320 insertions(+), 480 deletions(-)
> >
> > --
> > 2.17.1
>
> Series
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Patch titles and descriptions need to be updated.
Will do so while applying the series.
* Re: [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes
2019-10-15 12:51 ` Akhil Goyal
@ 2019-10-15 13:30 ` Akhil Goyal
0 siblings, 0 replies; 27+ messages in thread
From: Akhil Goyal @ 2019-10-15 13:30 UTC (permalink / raw)
To: Hemant Agrawal, dev
>
>
> >
> > >
> > > This patch series largely contains:
> > > 1. fixes in crypto drivers
> > > 2. support for ESN-like cases
> > > 3. enabling SNOW 3G/ZUC for dpaa_sec
> > >
> > > v2: fix the clang compilation errors in 10/10
> > >
> > > Hemant Agrawal (7):
> > > test/crypto: fix PDCP test support
> > > crypto/dpaa2_sec: fix ipv6 support
> > > test/crypto: increase test cases support for dpaax
> > > test/crypto: add test to test ESN like case
> > > crypto/dpaa_sec: add support for snow3G and ZUC
> > > test/crypto: enable snow3G and zuc cases for dpaa
> > > crypto/dpaa_sec: code reorg for better session mgmt
> > >
> > > Vakul Garg (3):
> > > crypto/dpaa_sec: fix to check for aead as well
> > > crypto/dpaa2_sec: enhance gcm descs to not skip aadt
> > > crypto/dpaa2_sec: add support of auth trailer in cipher-auth
> > >
> > > app/test/test_cryptodev.c | 483 ++++++++++-
> > > app/test/test_cryptodev_aes_test_vectors.h | 67 ++
> > > doc/guides/cryptodevs/dpaa_sec.rst | 4 +
> > > doc/guides/cryptodevs/features/dpaa_sec.ini | 4 +
> > > drivers/crypto/caam_jr/caam_jr.c | 24 +-
> > > drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 47 +-
> > > drivers/crypto/dpaa2_sec/hw/desc/algo.h | 10 -
> > > drivers/crypto/dpaa2_sec/hw/desc/ipsec.h | 167 ++--
> > > drivers/crypto/dpaa_sec/dpaa_sec.c | 885 +++++++++++++-------
> > > drivers/crypto/dpaa_sec/dpaa_sec.h | 109 ++-
> > > 10 files changed, 1320 insertions(+), 480 deletions(-)
> > >
> > > --
> > > 2.17.1
> >
> > Series
> > Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
>
> Patch titles and descriptions need to be updated.
> Will do so while applying the series.
Applied to dpdk-next-crypto
Thanks.
end of thread [last update: 2019-10-15 13:30 UTC]
Thread overview: 27+ messages
2019-10-11 16:32 [dpdk-dev] [PATCH 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 01/10] test/crypto: fix PDCP test support Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
2019-10-11 16:32 ` [dpdk-dev] [PATCH 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
2019-10-11 19:03 ` Aaron Conole
2019-10-14 4:57 ` Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 01/10] test/crypto: fix PDCP test support Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 02/10] crypto/dpaa2_sec: fix ipv6 support Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 03/10] crypto/dpaa_sec: fix to check for aead as well Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 04/10] crypto/dpaa2_sec: enhance gcm descs to not skip aadt Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 05/10] crypto/dpaa2_sec: add support of auth trailer in cipher-auth Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 06/10] test/crypto: increase test cases support for dpaax Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 07/10] test/crypto: add test to test ESN like case Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 08/10] crypto/dpaa_sec: add support for snow3G and ZUC Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 09/10] test/crypto: enable snow3G and zuc cases for dpaa Hemant Agrawal
2019-10-14 6:53 ` [dpdk-dev] [PATCH v2 10/10] crypto/dpaa_sec: code reorg for better session mgmt Hemant Agrawal
2019-10-15 12:50 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx crypto fixes Akhil Goyal
2019-10-15 12:51 ` Akhil Goyal
2019-10-15 13:30 ` Akhil Goyal