* [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos
@ 2019-09-02 12:17 Akhil Goyal
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
PDCP protocol offload using rte_security is supported in the
dpaa2_sec and dpaa_sec drivers.
Wireless algorithms (SNOW/ZUC) without protocol offload are also
supported via the crypto APIs.
Akhil Goyal (5):
security: add hfn override option in PDCP
crypto/dpaaX_sec: update dpovrd for hfn override in PDCP
crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8
crypto/dpaa_sec: support scatter gather for pdcp
Hemant Agrawal (4):
crypto/dpaa2_sec: support CAAM HW era 10
crypto/dpaa2_sec: support scatter gather for proto offloads
crypto/dpaa2_sec: Support snow3g cipher/integrity
crypto/dpaa2_sec: Support zuc ciphering
Vakul Garg (11):
drivers/crypto: Support PDCP 12-bit c-plane processing
drivers/crypto: Support PDCP u-plane with integrity
crypto/dpaa2_sec: disable 'write-safe' for PDCP
crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases
crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP
crypto/dpaa2_sec/hw: Support zuc-zuc 18-bit PDCP
crypto/dpaa2_sec/hw: Support snow-snow 18-bit PDCP
crypto/dpaa2_sec/hw: Support snow-f8
crypto/dpaa2_sec/hw: Support snow-f9
crypto/dpaa2_sec/hw: Support kasumi
crypto/dpaa2_sec/hw: Support zuc cipher/integrity
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 477 ++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 70 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 259 ++-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 262 +++-
drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +-
lib/librte_security/rte_security.h | 4 +-
22 files changed, 2082 insertions(+), 528 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH 01/20] drivers/crypto: Support PDCP 12-bit c-plane processing
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
Added support for 12-bit c-plane processing. We implement it using the
'u-plane for RN' protocol descriptors, since the 'c-plane' protocol
descriptors assume 5-bit sequence numbers. Because the crypto processing
remains the same irrespective of c-plane or u-plane, we choose the
'u-plane for RN' protocol descriptors, which support both confidentiality
and integrity (required for c-plane) for 7/12/15-bit sequence numbers.
On little-endian platforms, an incorrect IV is generated if the MOVE
command is used in PDCP non-proto descriptors, because the MOVE command
treats data as a word. We changed MOVE to MOVEB since we require the data
to be treated as a byte array. The change works on both LS1046 and LS2088.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 7 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 3 +-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 176 ++++++++++++++----
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 7 +-
5 files changed, 158 insertions(+), 41 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 420e86589..05c29e62f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2663,9 +2663,10 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
@@ -2716,6 +2717,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ pdcp_xform->sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
@@ -2725,6 +2727,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_decap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ pdcp_xform->sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index 5d99dd8af..e12c3db2f 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -621,6 +621,7 @@
#define OP_PCLID_LTE_PDCP_USER (0x42 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL (0x43 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL_MIXED (0x44 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER_RN (0x45 << OP_PCLID_SHIFT)
/*
* ProtocolInfo selectors
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index fee844100..607c587e2 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -253,6 +253,7 @@ pdcp_insert_cplane_null_op(struct program *p,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size __maybe_unused,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(local_offset);
@@ -413,8 +414,18 @@ pdcp_insert_cplane_int_only_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata, unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -720,6 +731,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
/* Insert Cipher Key */
@@ -727,8 +739,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
cipherdata->keylen, INLINE_KEY(cipherdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- (uint16_t)cipherdata->algtype << 8);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ (uint16_t)cipherdata->algtype << 8);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)cipherdata->algtype << 8);
return 0;
}
@@ -742,12 +758,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 7, 1, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -771,7 +787,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -802,8 +818,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4,
@@ -848,6 +864,7 @@ pdcp_insert_cplane_acc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_hfn_ovrd __maybe_unused)
{
/* Insert Auth Key */
@@ -857,7 +874,14 @@ pdcp_insert_cplane_acc_op(struct program *p,
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL, (uint16_t)cipherdata->algtype);
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL,
+ (uint16_t)cipherdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -868,6 +892,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
LABEL(back_to_sd_offset);
@@ -887,9 +912,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- ((uint16_t)cipherdata->algtype << 8) |
- (uint16_t)authdata->algtype);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -1174,6 +1204,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1182,7 +1213,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1281,6 +1319,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1300,7 +1339,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1376,6 +1422,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1393,7 +1440,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1474,6 +1528,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1491,7 +1546,14 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1594,6 +1656,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
if (rta_sec_era < RTA_SEC_ERA_5) {
@@ -1602,12 +1665,19 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
if (rta_sec_era >= RTA_SEC_ERA_8) {
+ int pclid;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1754,7 +1824,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 6, 2, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
@@ -1765,7 +1835,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
op = dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC;
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1774,7 +1844,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
OP_ALG_AAI_CTR,
OP_ALG_AS_INITFINAL,
@@ -1787,8 +1857,8 @@ pdcp_insert_uplane_15bit_op(struct program *p,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
OP_ALG_AAI_F8,
@@ -1885,6 +1955,7 @@ insert_hfn_ov_op(struct program *p,
static inline enum pdb_type_e
cnstr_pdcp_c_plane_pdb(struct program *p,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -1923,18 +1994,36 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
if (rta_sec_era >= RTA_SEC_ERA_8) {
memset(&pdb, 0x00, sizeof(struct pdcp_pdb));
- /* This is a HW issue. Bit 2 should be set to zero,
- * but it does not work this way. Override here.
+ /* To support 12-bit seq numbers, we use u-plane opt in pdb.
+ * SEC supports 5-bit only with c-plane opt in pdb.
*/
- pdb.opt_res.rsvd = 0x00000002;
+ if (sn_size == PDCP_SN_SIZE_12) {
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
+ ((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
+ (direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* Copy relevant information from user to PDB */
- pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
- pdb.bearer_dir_res = (uint32_t)
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT;
+
+ } else {
+ /* This means 5-bit c-plane.
+ * Here we use c-plane opt in pdb
+ */
+
+ /* This is a HW issue. Bit 2 should be set to zero,
+ * but it does not work this way. Override here.
+ */
+ pdb.opt_res.rsvd = 0x00000002;
+
+ /* Copy relevant information from user to PDB */
+ pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
((bearer << PDCP_C_PLANE_PDB_BEARER_SHIFT) |
- (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
- pdb.hfn_thr_res =
- hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ }
/* copy PDB in descriptor*/
__rta_out32(p, pdb.opt_res.opt);
@@ -2053,6 +2142,7 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2077,6 +2167,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2087,7 +2178,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
unsigned char __maybe_unused) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
@@ -2152,6 +2243,11 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2162,6 +2258,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2170,7 +2267,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2180,6 +2277,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_ENCAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2197,6 +2295,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2222,6 +2321,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2232,7 +2332,8 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int, unsigned char) = {
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
pdcp_insert_cplane_int_only_op, /* SNOW f9 */
@@ -2296,6 +2397,11 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2306,6 +2412,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2314,7 +2421,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2324,6 +2431,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_DECAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index cf8dfb910..82581edf5 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -596,13 +596,15 @@ static const struct proto_map proto_table[] = {
/*38*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
__rta_lte_pdcp_mixed_proto},
{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW, __rta_ipsec_proto},
+/*40*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER_RN,
+ __rta_lte_pdcp_mixed_proto},
};
/*
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index fd5b24840..49e0000b1 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -470,6 +470,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -479,6 +480,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_decap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -2353,9 +2355,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
--
2.17.1
* [dpdk-dev] [PATCH 02/20] drivers/crypto: Support PDCP u-plane with integrity
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP u-plane may optionally support integrity as well.
This patch adds support for integrity along with
confidentiality.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 67 +++++------
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 75 ++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.c | 116 +++++++++-----------
3 files changed, 144 insertions(+), 114 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 05c29e62f..e2f248475 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2559,6 +2559,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
struct ctxt_priv *priv;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
+ struct alginfo *p_authdata = NULL;
int bufsize = -1;
struct sec_flow_context *flc;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -2661,39 +2662,32 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
- /* Auth is only applicable for control mode operation. */
- if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
- pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
- DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5/12 bits for cmode");
- goto out;
- }
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA2_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- rte_free(priv);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA2_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
}
- authdata.key = (size_t)session->auth_key.data;
- authdata.keylen = session->auth_key.length;
- authdata.key_enc_flags = 0;
- authdata.key_type = RTA_DATA_IMM;
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
+ }
+ authdata.key = (size_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ if (session->auth_alg) {
switch (session->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -2713,6 +2707,13 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
+ p_authdata = &authdata;
+ } else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
+ goto out;
+ }
+
+ if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (session->dir == DIR_ENC)
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
@@ -2742,7 +2743,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (session->dir == DIR_DEC)
bufsize = cnstr_shdsc_pdcp_u_plane_decap(
priv->flc_desc[0].desc, 1, swap,
@@ -2751,7 +2752,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
if (bufsize < 0) {
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 607c587e2..a636640c4 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1801,9 +1801,16 @@ static inline int
pdcp_insert_uplane_15bit_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned int dir)
{
int op;
+
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
@@ -2478,6 +2485,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2490,6 +2498,11 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2509,6 +2522,13 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2518,20 +2538,24 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2551,7 +2575,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+ authdata, OP_TYPE_ENCAP_PROTOCOL);
if (err)
return err;
break;
@@ -2605,6 +2629,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2617,6 +2642,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2636,6 +2666,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2645,20 +2681,23 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2678,7 +2717,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ authdata, OP_TYPE_DECAP_PROTOCOL);
if (err)
return err;
break;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 49e0000b1..c3a72c454 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -383,6 +383,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
{
struct alginfo authdata = {0}, cipherdata = {0};
struct sec_cdb *cdb = &ses->cdb;
+ struct alginfo *p_authdata = NULL;
int32_t shared_desc_len = 0;
int err;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -415,7 +416,11 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
- if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ cdb->sh_desc[0] = cipherdata.keylen;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->auth_alg) {
switch (ses->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -440,32 +445,36 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
- cdb->sh_desc[0] = cipherdata.keylen;
+ p_authdata = &authdata;
+
cdb->sh_desc[1] = authdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 2);
+ }
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- if (!(cdb->sh_desc[2] & (1<<1)) && authdata.keylen) {
- authdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)authdata.key);
- authdata.key_type = RTA_DATA_PTR;
- }
+ err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+ MIN_JOB_DESC_SIZE,
+ (unsigned int *)cdb->sh_desc,
+ &cdb->sh_desc[2], 2);
+ if (err < 0) {
+ DPAA_SEC_ERR("Crypto: Incorrect key lengths");
+ return err;
+ }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
+ if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
+ cipherdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)cipherdata.key);
+ cipherdata.key_type = RTA_DATA_PTR;
+ }
+ if (!(cdb->sh_desc[2] & (1 << 1)) && authdata.keylen) {
+ authdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)authdata.key);
+ authdata.key_type = RTA_DATA_PTR;
+ }
+ cdb->sh_desc[0] = 0;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
@@ -487,25 +496,6 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
&cipherdata, &authdata,
0);
} else {
- cdb->sh_desc[0] = cipherdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 1);
-
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
-
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_encap(
cdb->sh_desc, 1, swap,
@@ -514,7 +504,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (ses->dir == DIR_DEC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_decap(
cdb->sh_desc, 1, swap,
@@ -523,7 +513,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
return shared_desc_len;
@@ -2353,7 +2343,6 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->dir = DIR_ENC;
}
- /* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
@@ -2361,25 +2350,26 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
"PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ }
+
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ return -ENOMEM;
}
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
}
session->pdcp.domain = pdcp_xform->domain;
session->pdcp.bearer = pdcp_xform->bearer;
--
2.17.1
* [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
HFN can also be supplied as a per-packet value.
PDCP has no IV as such; the HFN is used to generate
the IV internally, so the IV field can instead carry
the per-packet HFN during enqueue/dequeue.
If the hfn_ovrd field in pdcp_xform is set, the
application is expected to place the per-packet HFN
where the IV would normally go. The driver will extract
the HFN and perform operations accordingly.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
lib/librte_security/rte_security.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 96806e3a2..4452545fe 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP.
+ * Copyright 2017,2019 NXP
* Copyright(c) 2017 Intel Corporation.
*/
@@ -270,6 +270,8 @@ struct rte_security_pdcp_xform {
uint32_t hfn;
/** HFN Threshold for key renegotiation */
uint32_t hfn_threshold;
+ /** Enable per packet HFN override */
+ uint32_t hfn_ovrd;
};
/**
--
2.17.1
* [dpdk-dev] [PATCH 04/20] crypto/dpaaX_sec: update dpovrd for hfn override in PDCP
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
HFN can be updated on a per-packet basis. The DPOVRD
register is updated with the per-packet value when the
override is enabled in the session configuration.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++++++--------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 4 +++-
drivers/crypto/dpaa_sec/dpaa_sec.c | 19 ++++++++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +++-
4 files changed, 32 insertions(+), 13 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e2f248475..b66064385 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -124,14 +124,16 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
DPAA2_SET_FD_LEN(fd, ip_fle->length);
DPAA2_SET_FLE_FIN(ip_fle);
-#ifdef ENABLE_HFN_OVERRIDE
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
/* enable HFN override */
- DPAA2_SET_FLE_INTERNAL_JD(ip_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FLE_INTERNAL_JD(op_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FD_INTERNAL_JD(fd, sess->pdcp.hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
}
-#endif
return 0;
@@ -2632,11 +2634,11 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ /* hfn ovd offset location is stored in iv.offset value */
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
cipherdata.key = (size_t)session->cipher_key.data;
cipherdata.keylen = session->cipher_key.length;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 51751103d..d0933d660 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -145,9 +145,11 @@ struct dpaa2_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ per packet hfn is stored */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa2_sec_session_entry {
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index c3a72c454..a74b7a822 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1751,6 +1751,20 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (auth_only_len)
fd->cmd = 0x80000000 | auth_only_len;
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (is_proto_pdcp(ses) && ses->pdcp.hfn_ovd) {
+ fd->cmd = 0x80000000 |
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset));
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u,%u\n",
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset)),
+ ses->pdcp.hfn_ovd,
+ is_proto_pdcp(ses));
+ }
+
}
send_pkts:
loop = 0;
@@ -2375,11 +2389,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
session->ctx_pool = dev_priv->ctx_pool;
rte_spinlock_lock(&dev_priv->lock);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 75c0960a9..9a2f2c078 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -101,9 +101,11 @@ struct sec_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ per packet hfn is stored */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa_sec_session_entry {
--
2.17.1
* [dpdk-dev] [PATCH 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
Support the following cases:
int-only (NULL-NULL, NULL-SNOW, NULL-AES, NULL-ZUC)
enc-only (SNOW-NULL, AES-NULL, ZUC-NULL)
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 532 +++++++++++++++++++-----
1 file changed, 420 insertions(+), 112 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a636640c4..9a73105ac 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
* Copyright 2008-2013 Freescale Semiconductor, Inc.
+ * Copyright 2019 NXP
*/
#ifndef __DESC_PDCP_H__
@@ -52,6 +53,14 @@
#define PDCP_U_PLANE_15BIT_SN_MASK 0xFF7F0000
#define PDCP_U_PLANE_15BIT_SN_MASK_BE 0x00007FFF
+/**
+ * PDCP_U_PLANE_18BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_U_PLANE_18BIT_SN_MASK 0xFFFF0300
+#define PDCP_U_PLANE_18BIT_SN_MASK_BE 0x0003FFFF
+
/**
* PDCP_BEARER_MASK - This mask is used masking out the bearer for PDCP
* processing with SNOW f9 in LTE.
@@ -192,7 +201,8 @@ enum pdcp_sn_size {
PDCP_SN_SIZE_5 = 5,
PDCP_SN_SIZE_7 = 7,
PDCP_SN_SIZE_12 = 12,
- PDCP_SN_SIZE_15 = 15
+ PDCP_SN_SIZE_15 = 15,
+ PDCP_SN_SIZE_18 = 18
};
/*
@@ -205,14 +215,17 @@ enum pdcp_sn_size {
#define PDCP_U_PLANE_PDB_OPT_SHORT_SN 0x2
#define PDCP_U_PLANE_PDB_OPT_15B_SN 0x4
+#define PDCP_U_PLANE_PDB_OPT_18B_SN 0x6
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT 18
#define PDCP_U_PLANE_PDB_BEARER_SHIFT 27
#define PDCP_U_PLANE_PDB_DIR_SHIFT 26
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_THR_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT 18
struct pdcp_pdb {
union {
@@ -417,6 +430,9 @@ pdcp_insert_cplane_int_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ /* 12 bit SN is only supported for protocol offload case */
if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -426,6 +442,27 @@ pdcp_insert_cplane_int_only_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -435,20 +472,20 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -461,7 +498,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, CONTEXT2, 0, 0x0C, WAITCOMP | IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -553,19 +590,19 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -573,7 +610,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, IFIFOAB1, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -665,11 +702,11 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -678,7 +715,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -734,11 +771,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -747,16 +785,32 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
(uint16_t)cipherdata->algtype << 8);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
- SEQLOAD(p, MATH0, 7, 1, 0);
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -895,6 +949,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(back_to_sd_offset);
LABEL(end_desc);
LABEL(local_offset);
@@ -906,7 +962,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -923,19 +979,35 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
@@ -1207,12 +1279,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1226,21 +1300,37 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
@@ -1322,6 +1412,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1338,7 +1430,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1351,16 +1443,32 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1374,7 +1482,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1425,6 +1533,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1439,7 +1548,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1453,17 +1562,33 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1477,7 +1602,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1531,6 +1656,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1545,7 +1671,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1559,17 +1685,32 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1599,7 +1740,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
MATHB(p, VSEQOUTSZ, SUB, ZERO, VSEQINSZ, 4, 0);
}
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1659,12 +1800,13 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
if (rta_sec_era < RTA_SEC_ERA_5) {
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1682,20 +1824,35 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -1798,38 +1955,43 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
static inline int
-pdcp_insert_uplane_15bit_op(struct program *p,
+pdcp_insert_uplane_no_int_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
- struct alginfo *authdata,
- unsigned int dir)
+ unsigned int dir,
+ enum pdcp_sn_size sn_size)
{
int op;
-
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
- authdata->keylen, INLINE_KEY(authdata));
+ uint32_t sn_mask;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_15) ||
+ (rta_sec_era >= RTA_SEC_ERA_10)) {
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER,
(uint16_t)cipherdata->algtype);
return 0;
}
- SEQLOAD(p, MATH0, 6, 2, 0);
+ if (sn_size == PDCP_SN_SIZE_15) {
+ SEQLOAD(p, MATH0, 6, 2, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_15BIT_SN_MASK :
+ PDCP_U_PLANE_15BIT_SN_MASK_BE;
+ } else { /* SN Size == PDCP_SN_SIZE_18 */
+ SEQLOAD(p, MATH0, 5, 3, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ }
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 6, 2, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ if (sn_size == PDCP_SN_SIZE_15)
+ SEQSTORE(p, MATH0, 6, 2, 0);
+ else /* SN Size == PDCP_SN_SIZE_18 */
+ SEQSTORE(p, MATH0, 5, 3, 0);
+
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -2124,6 +2286,13 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
hfn_threshold<<PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT;
break;
+ case PDCP_SN_SIZE_18:
+ pdb.opt_res.opt = (uint32_t)(PDCP_U_PLANE_PDB_OPT_18B_SN);
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
+ pdb.hfn_thr_res =
+ hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+ break;
+
default:
pr_err("Invalid Sequence Number Size setting in PDB\n");
return -EINVAL;
@@ -2448,6 +2617,61 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return PROGRAM_FINALIZE(p);
}
+static int
+pdcp_insert_uplane_with_int_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd,
+ unsigned int dir)
+{
+ static int
+ (*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
+ (struct program*, bool swap, struct alginfo *,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char __maybe_unused) = {
+ { /* NULL */
+ pdcp_insert_cplane_null_op, /* NULL */
+ pdcp_insert_cplane_int_only_op, /* SNOW f9 */
+ pdcp_insert_cplane_int_only_op, /* AES CMAC */
+ pdcp_insert_cplane_int_only_op /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
+ },
+ { /* AES CTR */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
+ },
+ { /* ZUC-E */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_acc_op /* ZUC-I */
+ },
+ };
+ int err;
+
+ err = pdcp_cp_fp[cipherdata->algtype][authdata->algtype](p,
+ swap,
+ cipherdata,
+ authdata,
+ dir,
+ sn_size,
+ era_2_sw_hfn_ovrd);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+
/**
* cnstr_shdsc_pdcp_u_plane_encap - Function for creating a PDCP User Plane
* encapsulation descriptor.
@@ -2491,6 +2715,33 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2509,7 +2760,10 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2522,13 +2776,6 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype) {
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- }
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2542,6 +2789,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
@@ -2566,6 +2819,18 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2574,8 +2839,8 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_ENCAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_ENCAP_PROTOCOL, sn_size);
if (err)
return err;
break;
@@ -2635,6 +2900,34 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
+
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2652,8 +2945,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
PROGRAM_SET_BSWAP(p);
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2666,12 +2962,6 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2685,6 +2975,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
@@ -2708,6 +3004,18 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_DECAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2716,8 +3024,8 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_DECAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_DECAP_PROTOCOL, sn_size);
if (err)
return err;
break;
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
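For context, the (offset, length, sn_mask) selection that the descriptor builders in the hunks above repeat for each supported SN size can be sketched as below. This is a hedged illustration, not part of the patch: the helper name `pdcp_sn_params` is hypothetical and the `PDCP_*` mask values are illustrative stand-ins for the real hw/desc/pdcp.h definitions.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

enum pdcp_sn_size {
	PDCP_SN_SIZE_5 = 5,
	PDCP_SN_SIZE_7 = 7,
	PDCP_SN_SIZE_12 = 12,
	PDCP_SN_SIZE_15 = 15,
	PDCP_SN_SIZE_18 = 18
};

/* Illustrative mask values (assumption, not the real defines). */
#define PDCP_C_PLANE_SN_MASK          0x0000001Fu
#define PDCP_C_PLANE_SN_MASK_BE       0x1F000000u
#define PDCP_U_PLANE_18BIT_SN_MASK    0x0003FFFFu
#define PDCP_U_PLANE_18BIT_SN_MASK_BE 0xFFFF0300u

/* Non-proto descriptors support only 5-bit c-plane and 18-bit u-plane
 * SNs; other sizes are rejected, as in the switch blocks above. */
static int
pdcp_sn_params(enum pdcp_sn_size sn_size, bool swap,
	       uint32_t *offset, uint32_t *length, uint32_t *sn_mask)
{
	switch (sn_size) {
	case PDCP_SN_SIZE_5:	/* SN in byte 7 of MATH0, 1 byte long */
		*offset = 7;
		*length = 1;
		*sn_mask = swap ? PDCP_C_PLANE_SN_MASK_BE :
				  PDCP_C_PLANE_SN_MASK;
		return 0;
	case PDCP_SN_SIZE_18:	/* SN in bytes 5..7 of MATH0 */
		*offset = 5;
		*length = 3;
		*sn_mask = swap ? PDCP_U_PLANE_18BIT_SN_MASK_BE :
				  PDCP_U_PLANE_18BIT_SN_MASK;
		return 0;
	default:		/* 7/12/15-bit handled by other paths */
		return -ENOTSUP;
	}
}
```

The selected offset/length pair then feeds the SEQLOAD/SEQSTORE commands and sn_mask feeds the MATHB AND, which is why the repeated swap-dependent MATHB branches collapse to a single call in the hunks above.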
* [dpdk-dev] [PATCH 06/20] crypto/dpaa2_sec: support CAAM HW era 10
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (4 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8 Akhil Goyal
` (15 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Adding minimal support for CAAM HW era 10 (used in LX2)
Primary changes are:
1. increased shared desc length from 6 bits to 7 bits
2. support for several PDCP operations as PROTOCOL offload.
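The 6-bit to 7-bit widening can be sketched as follows. The mask values mirror HDR_DESCLEN_SHR_MASK and HDR_DESCLEN_SHR_MASK_ERA10 from the hw/desc.h hunk in this patch; the helper function itself is hypothetical.

```c
#include <stdint.h>

#define HDR_DESCLEN_SHR_MASK       0x3f /* eras <= 9: 6-bit length */
#define HDR_DESCLEN_SHR_MASK_ERA10 0x7f /* era 10+:   7-bit length */

/* Decode a shared-descriptor length from its header word, picking the
 * era-appropriate mask (sketch of what rta_desc_len does for SHR_HDR). */
static unsigned int
shr_desc_len(uint32_t hdr_word, int sec_era)
{
	return hdr_word & (sec_era >= 10 ? HDR_DESCLEN_SHR_MASK_ERA10
					 : HDR_DESCLEN_SHR_MASK);
}
```

On era 10 (LX2160A) a shared descriptor may therefore be up to 127 words long instead of 63, which is what the HDR_START_IDX_MASK_ERA10 changes below account for.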
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc.h | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 21 ++++++++++-----
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 ++++---
drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 ++++++++++++---
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +--
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 ++--
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 ++++---
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +++++----
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +++---
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +++----
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 ++---
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 9 +++++--
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +++++++++++++------
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +++--
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 ++---
16 files changed, 110 insertions(+), 54 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b66064385..b2e23aefc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3418,6 +3418,11 @@ cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv __rte_unused,
/* init user callbacks */
TAILQ_INIT(&(cryptodev->link_intr_cbs));
+ if (dpaa2_svr_family == SVR_LX2160A)
+ rta_set_sec_era(RTA_SEC_ERA_10);
+
+ DPAA2_SEC_INFO("2-SEC ERA is %d", rta_get_sec_era());
+
/* Invoke PMD device initialization function */
retval = dpaa2_sec_dev_init(cryptodev);
if (retval == 0)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index e12c3db2f..667da971b 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -18,6 +18,8 @@
#include "hw/compat.h"
#endif
+extern enum rta_sec_era rta_sec_era;
+
/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64
@@ -113,9 +115,12 @@
/* Start Index or SharedDesc Length */
#define HDR_START_IDX_SHIFT 16
#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)
+#define HDR_START_IDX_MASK_ERA10 (0x7f << HDR_START_IDX_SHIFT)
/* If shared descriptor header, 6-bit length */
#define HDR_DESCLEN_SHR_MASK 0x3f
+/* If shared descriptor header, 7-bit length era10 onwards*/
+#define HDR_DESCLEN_SHR_MASK_ERA10 0x7f
/* If non-shared header, 7-bit length */
#define HDR_DESCLEN_MASK 0x7f
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9a73105ac..4bf1d69f9 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -776,7 +776,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -962,7 +963,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -1286,7 +1288,8 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1430,7 +1433,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1548,7 +1552,8 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1671,7 +1676,8 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1806,7 +1812,8 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
index 8c807aaa2..287e09cd7 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
@@ -42,7 +41,8 @@ static const uint32_t fifo_load_table[][2] = {
* supported.
*/
static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
- 23, 23, 23, 23};
+ 23, 23, 23, 23,
+ 23, 23};
static inline int
rta_fifo_load(struct program *program, uint32_t src,
@@ -201,7 +201,8 @@ static const uint32_t fifo_store_table[][2] = {
* supported.
*/
static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
- 22, 22, 22, 23};
+ 22, 22, 22, 23,
+ 23, 23};
static inline int
rta_fifo_store(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
index 0c7ea9387..45aefa04c 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_HEADER_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t job_header_flags[] = {
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
DNR | TD | MTD | SHR | REO | EXT
};
@@ -31,6 +32,8 @@ static const uint32_t shr_header_flags[] = {
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF
};
@@ -72,7 +75,12 @@ rta_shr_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & DNR)
opcode |= HDR_DNR;
@@ -160,7 +168,12 @@ rta_job_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & EXT) {
opcode |= HDR_EXT;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
index 546d22e98..18f781e37 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_JUMP_CMD_H__
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
index 1ec21234a..ec3fbcaf6 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_KEY_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t key_enc_flags[] = {
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
ENC | NWB | EKT | TK | PTS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
index f3b0dcfcb..38e253c22 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_LOAD_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t load_len_mask_allowed[] = {
0x000000fe,
0x000000fe,
0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
0x000000fe
};
@@ -30,6 +31,8 @@ static const uint32_t load_off_mask_allowed[] = {
0x000000ff,
0x000000ff,
0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
0x000000ff
};
@@ -137,7 +140,8 @@ static const struct load_map load_dst[] = {
* Allowed LOAD destinations for each SEC Era.
* Values represent the number of entries from load_dst[] that are supported.
*/
-static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40,
+ 40, 40, 40, 40, 40};
static inline int
load_check_len_offset(int pos, uint32_t length, uint32_t offset)
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
index 5b28cbabb..cca70f7e0 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MATH_CMD_H__
@@ -29,7 +28,8 @@ static const uint32_t math_op1[][2] = {
* Allowed MATH op1 sources for each SEC Era.
* Values represent the number of entries from math_op1[] that are supported.
*/
-static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12,
+ 12, 12, 12, 12};
static const uint32_t math_op2[][2] = {
/*1*/ { MATH0, MATH_SRC1_REG0 },
@@ -51,7 +51,8 @@ static const uint32_t math_op2[][2] = {
* Allowed MATH op2 sources for each SEC Era.
* Values represent the number of entries from math_op2[] that are supported.
*/
-static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13,
+ 13, 13};
static const uint32_t math_result[][2] = {
/*1*/ { MATH0, MATH_DEST_REG0 },
@@ -71,7 +72,8 @@ static const uint32_t math_result[][2] = {
* Values represent the number of entries from math_result[] that are
* supported.
*/
-static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10,
+ 10, 10};
static inline int
rta_math(struct program *program, uint64_t operand1,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
index a7ff7c675..d2151c6dd 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MOVE_CMD_H__
@@ -47,7 +46,8 @@ static const uint32_t move_src_table[][2] = {
* Values represent the number of entries from move_src_table[] that are
* supported.
*/
-static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14,
+ 14, 14};
static const uint32_t move_dst_table[][2] = {
/*1*/ { CONTEXT1, MOVE_DEST_CLASS1CTX },
@@ -72,7 +72,7 @@ static const uint32_t move_dst_table[][2] = {
* supported.
*/
static const
-unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15, 15, 15};
static inline int
set_move_offset(struct program *program __maybe_unused,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
index 94f775e2e..85092d961 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_NFIFO_CMD_H__
@@ -24,7 +23,7 @@ static const uint32_t nfifo_src[][2] = {
* Allowed NFIFO LOAD sources for each SEC Era.
* Values represent the number of entries from nfifo_src[] that are supported.
*/
-static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7, 7, 7};
static const uint32_t nfifo_data[][2] = {
{ MSG, NFIFOENTRY_DTYPE_MSG },
@@ -77,7 +76,8 @@ static const uint32_t nfifo_flags[][2] = {
* Allowed NFIFO LOAD flags for each SEC Era.
* Values represent the number of entries from nfifo_flags[] that are supported.
*/
-static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14,
+ 14, 14, 14, 14};
static const uint32_t nfifo_pad_flags[][2] = {
{ BM, NFIFOENTRY_BM },
@@ -90,7 +90,7 @@ static const uint32_t nfifo_pad_flags[][2] = {
* Values represent the number of entries from nfifo_pad_flags[] that are
* supported.
*/
-static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3, 3, 3};
static inline int
rta_nfifo_load(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
index b85760e5b..9a1788c0f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_OPERATION_CMD_H__
@@ -229,7 +228,8 @@ static const struct alg_aai_map alg_table[] = {
* Allowed OPERATION algorithms for each SEC Era.
* Values represent the number of entries from alg_table[] that are supported.
*/
-static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17,
+ 11, 17, 17, 17};
static inline int
rta_operation(struct program *program, uint32_t cipher_algo,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index 82581edf5..e9f20703f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016, 2019 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -326,6 +326,10 @@ static const uint32_t proto_blob_flags[] = {
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
};
@@ -604,7 +608,8 @@ static const struct proto_map proto_table[] = {
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37,
+ 40, 40, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
index 5357187f8..d8cdebd20 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEC_RUN_TIME_ASM_H__
@@ -36,7 +35,9 @@ enum rta_sec_era {
RTA_SEC_ERA_6,
RTA_SEC_ERA_7,
RTA_SEC_ERA_8,
- MAX_SEC_ERA = RTA_SEC_ERA_8
+ RTA_SEC_ERA_9,
+ RTA_SEC_ERA_10,
+ MAX_SEC_ERA = RTA_SEC_ERA_10
};
/**
@@ -605,10 +606,14 @@ __rta_inline_data(struct program *program, uint64_t data,
static inline unsigned int
rta_desc_len(uint32_t *buffer)
{
- if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+ if ((*buffer & CMD_MASK) == CMD_DESC_HDR) {
return *buffer & HDR_DESCLEN_MASK;
- else
- return *buffer & HDR_DESCLEN_SHR_MASK;
+ } else {
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ return *buffer & HDR_DESCLEN_SHR_MASK_ERA10;
+ else
+ return *buffer & HDR_DESCLEN_SHR_MASK;
+ }
}
static inline unsigned int
@@ -701,9 +706,15 @@ rta_patch_header(struct program *program, int line, unsigned int new_ref)
return -EINVAL;
opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+ if (rta_sec_era >= RTA_SEC_ERA_10) {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK_ERA10;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ } else {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ }
- opcode &= (uint32_t)~HDR_START_IDX_MASK;
- opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
program->buffer[line] = bswap ? swab32(opcode) : opcode;
return 0;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
index ceb6a8719..5e6af0c83 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t seq_in_ptr_flags[] = {
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
};
@@ -31,6 +32,8 @@ static const uint32_t seq_out_ptr_flags[] = {
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
index 8b58e544d..5de47d053 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_STORE_CMD_H__
@@ -56,7 +55,8 @@ static const uint32_t store_src_table[][2] = {
* supported.
*/
static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
- 33, 33, 35, 35};
+ 33, 33, 35, 35,
+ 35, 35};
static inline int
rta_store(struct program *program, uint64_t src,
--
2.17.1
* [dpdk-dev] [PATCH 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (5 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 06/20] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 08/20] crypto/dpaa_sec: support scatter gather for pdcp Akhil Goyal
` (14 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
For SEC ERA 8, NULL auth using the protocol command does not add
the 4 bytes of null MAC-I and treats NULL integrity as no integrity,
which is not correct.
Hence this particular case of null integrity with 12-bit SN
on SEC ERA 8 is converted from protocol offload to the non-protocol offload path.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 32 +++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 4bf1d69f9..0b074ec80 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -43,6 +43,14 @@
#define PDCP_C_PLANE_SN_MASK 0x1F000000
#define PDCP_C_PLANE_SN_MASK_BE 0x0000001F
+/**
+ * PDCP_12BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_12BIT_SN_MASK 0xFF0F0000
+#define PDCP_12BIT_SN_MASK_BE 0x00000FFF
+
/**
* PDCP_U_PLANE_15BIT_SN_MASK - This mask is used in the PDCP descriptors for
* extracting the sequence number (SN) from the
@@ -776,8 +784,10 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
- (rta_sec_era == RTA_SEC_ERA_10)) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18 &&
+ !(rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata->algtype == 0))
+ || (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -800,12 +810,16 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
PDCP_U_PLANE_18BIT_SN_MASK_BE;
break;
- case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
+ offset = 6;
+ length = 2;
+ sn_mask = (swap == false) ? PDCP_12BIT_SN_MASK :
+ PDCP_12BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_15:
pr_err("Invalid sn_size for %s\n", __func__);
return -ENOTSUP;
-
}
SEQLOAD(p, MATH0, offset, length, 0);
@@ -2796,6 +2810,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata && authdata->algtype == 0){
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+ break;
+ }
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
* [dpdk-dev] [PATCH 08/20] crypto/dpaa_sec: support scatter gather for pdcp
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (6 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8 Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
` (13 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
This patch adds support for chained input or output
mbufs for PDCP operations.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 120 +++++++++++++++++++++++++++--
1 file changed, 115 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index a74b7a822..17f460f26 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -188,12 +188,18 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
if (ctx->op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (ctx->op->sym->m_dst == NULL) ?
+ ctx->op->sym->m_src : ctx->op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- ctx->op->sym->m_src->pkt_len = len;
- ctx->op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op;
dpaa_sec_op_ending(ctx);
@@ -259,6 +265,7 @@ static inline int is_auth_cipher(dpaa_sec_session *ses)
{
return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
(ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
+ (ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
(ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC));
}
@@ -801,12 +808,18 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (op->sym->m_dst == NULL) ?
+ op->sym->m_src : op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- op->sym->m_src->pkt_len = len;
- op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
if (!ctx->fd_status) {
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -1635,6 +1648,99 @@ build_proto(struct rte_crypto_op *op, dpaa_sec_session *ses)
return cf;
}
+static inline struct dpaa_sec_job *
+build_proto_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
+{
+ struct rte_crypto_sym_op *sym = op->sym;
+ struct dpaa_sec_job *cf;
+ struct dpaa_sec_op_ctx *ctx;
+ struct qm_sg_entry *sg, *out_sg, *in_sg;
+ struct rte_mbuf *mbuf;
+ uint8_t req_segs;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym->m_dst) {
+ mbuf = sym->m_dst;
+ } else {
+ mbuf = sym->m_src;
+ }
+ req_segs = mbuf->nb_segs + sym->m_src->nb_segs + 2;
+ if (req_segs > MAX_SG_ENTRIES) {
+ DPAA_SEC_DP_ERR("Proto: Max sec segs supported is %d",
+ MAX_SG_ENTRIES);
+ return NULL;
+ }
+
+ ctx = dpaa_sec_alloc_ctx(ses);
+ if (!ctx)
+ return NULL;
+ cf = &ctx->job;
+ ctx->op = op;
+ /* output */
+ out_sg = &cf->sg[0];
+ out_sg->extension = 1;
+ qm_sg_entry_set64(out_sg, dpaa_mem_vtop(&cf->sg[2]));
+
+ /* 1st seg */
+ sg = &cf->sg[2];
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+
+ /* Successive segs */
+ while (mbuf->next) {
+ sg->length = mbuf->data_len;
+ out_len += sg->length;
+ mbuf = mbuf->next;
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+ }
+ sg->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sg->length;
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ out_sg->length = out_len;
+ cpu_to_hw_sg(out_sg);
+
+ /* input */
+ mbuf = sym->m_src;
+ in_sg = &cf->sg[1];
+ in_sg->extension = 1;
+ in_sg->final = 1;
+ in_len = mbuf->data_len;
+
+ sg++;
+ qm_sg_entry_set64(in_sg, dpaa_mem_vtop(sg));
+
+ /* 1st seg */
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+
+ /* Successive segs */
+ mbuf = mbuf->next;
+ while (mbuf) {
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+ in_len += sg->length;
+ mbuf = mbuf->next;
+ }
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ in_sg->length = in_len;
+ cpu_to_hw_sg(in_sg);
+
+ sym->m_src->packet_type &= ~RTE_PTYPE_L4_MASK;
+
+ return cf;
+}
+
static uint16_t
dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
uint16_t nb_ops)
@@ -1694,7 +1800,9 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
auth_only_len = op->sym->auth.data.length -
op->sym->cipher.data.length;
- if (rte_pktmbuf_is_contiguous(op->sym->m_src)) {
+ if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
+ ((op->sym->m_dst == NULL) ||
+ rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
if (is_proto_ipsec(ses)) {
cf = build_proto(op, ses);
} else if (is_proto_pdcp(ses)) {
@@ -1724,6 +1832,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
auth_only_len = ses->auth_only_len;
} else if (is_auth_cipher(ses)) {
cf = build_cipher_auth_sg(op, ses);
+ } else if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
+ cf = build_proto_sg(op, ses);
} else {
DPAA_SEC_DP_ERR("not supported ops");
frames_to_send = loop;
--
2.17.1
* [dpdk-dev] [PATCH 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (7 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 08/20] crypto/dpaa_sec: support scatter gather for pdcp Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
` (12 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds support for chained input or output
mbufs for the PDCP and IPsec protocol offload cases.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 134 +++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
2 files changed, 133 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b2e23aefc..cbc7dff04 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -64,6 +64,121 @@ static uint8_t cryptodev_driver_id;
int dpaa2_logtype_sec;
+static inline int
+build_proto_compound_sg_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym_op->m_dst)
+ mbuf = sym_op->m_dst;
+ else
+ mbuf = sym_op->m_src;
+
+ /* first FLE entry used to store mbuf and session ctxt */
+ fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (unlikely(!fle)) {
+ DPAA2_SEC_DP_ERR("Proto:SG: Memory alloc failed for SGE");
+ return -1;
+ }
+ memset(fle, 0, FLE_SG_MEM_SIZE);
+ DPAA2_SET_FLE_ADDR(fle, (size_t)op);
+ DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ op_fle = fle + 1;
+ ip_fle = fle + 2;
+ sge = fle + 3;
+
+ if (likely(bpid < MAX_BPID)) {
+ DPAA2_SET_FD_BPID(fd, bpid);
+ DPAA2_SET_FLE_BPID(op_fle, bpid);
+ DPAA2_SET_FLE_BPID(ip_fle, bpid);
+ } else {
+ DPAA2_SET_FD_IVP(fd);
+ DPAA2_SET_FLE_IVP(op_fle);
+ DPAA2_SET_FLE_IVP(ip_fle);
+ }
+
+ /* Configure FD as a FRAME LIST */
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
+ DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+ /* Configure Output FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_SG_EXT(op_fle);
+ DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+ /* Configure Output SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ /* o/p segs */
+ while (mbuf->next) {
+ sge->length = mbuf->data_len;
+ out_len += sge->length;
+ sge++;
+ mbuf = mbuf->next;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ }
+ /* using buf_len for last buf - so that extra data can be added */
+ sge->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sge->length;
+
+ DPAA2_SET_FLE_FIN(sge);
+ op_fle->length = out_len;
+
+ sge++;
+ mbuf = sym_op->m_src;
+
+ /* Configure Input FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(ip_fle);
+ DPAA2_SET_FLE_FIN(ip_fle);
+
+ /* Configure input SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+
+ mbuf = mbuf->next;
+ /* i/p segs */
+ while (mbuf) {
+ sge++;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+ mbuf = mbuf->next;
+ }
+ ip_fle->length = in_len;
+ DPAA2_SET_FLE_FIN(sge);
+
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
+ /*enable HFN override override */
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
+ }
+ DPAA2_SET_FD_LEN(fd, ip_fle->length);
+
+ return 0;
+}
+
static inline int
build_proto_compound_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -86,7 +201,7 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
/* we are using the first FLE entry to store Mbuf */
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
- DPAA2_SEC_ERR("Memory alloc failed");
+ DPAA2_SEC_DP_ERR("Memory alloc failed");
return -1;
}
memset(fle, 0, FLE_POOL_BUF_SIZE);
@@ -1169,8 +1284,10 @@ build_sec_fd(struct rte_crypto_op *op,
else
return -1;
- /* Segmented buffer */
- if (unlikely(!rte_pktmbuf_is_contiguous(op->sym->m_src))) {
+ /* Any of the buffer is segmented*/
+ if (!rte_pktmbuf_is_contiguous(op->sym->m_src) ||
+ ((op->sym->m_dst != NULL) &&
+ !rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_sg_fd(sess, op, fd, bpid);
@@ -1184,6 +1301,10 @@ build_sec_fd(struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_sg_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ case DPAA2_SEC_PDCP:
+ ret = build_proto_compound_sg_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
DPAA2_SEC_ERR("error: Unsupported session");
@@ -1371,9 +1492,14 @@ sec_fd_to_mbuf(const struct qbman_fd *fd)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
dpaa2_sec_session *sess = (dpaa2_sec_session *)
get_sec_session_private_data(op->sym->sec_session);
- if (sess->ctxt_type == DPAA2_SEC_IPSEC) {
+ if (sess->ctxt_type == DPAA2_SEC_IPSEC ||
+ sess->ctxt_type == DPAA2_SEC_PDCP) {
uint16_t len = DPAA2_GET_FD_LEN(fd);
dst->pkt_len = len;
+ while (dst->next != NULL) {
+ len -= dst->data_len;
+ dst = dst->next;
+ }
dst->data_len = len;
}
}
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
index 8a9904426..c2e11f951 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -37,6 +37,8 @@ extern int dpaa2_logtype_sec;
DPAA2_SEC_DP_LOG(INFO, fmt, ## args)
#define DPAA2_SEC_DP_WARN(fmt, args...) \
DPAA2_SEC_DP_LOG(WARNING, fmt, ## args)
+#define DPAA2_SEC_DP_ERR(fmt, args...) \
+ DPAA2_SEC_DP_LOG(ERR, fmt, ## args)
#endif /* _DPAA2_SEC_LOGS_H_ */
--
2.17.1
* [dpdk-dev] [PATCH 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (8 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 11/20] crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases Akhil Goyal
` (11 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP descriptors in some cases internally use commands which overwrite
memory with extra '0's if write-safe is kept enabled. This breaks the
correct functional behavior of the PDCP APIs, which then in many cases
produce incorrect crypto output. Therefore we disable the 'write-safe'
bit in the FLC for PDCP cases. If there is a performance drop, write-safe
will be enabled selectively through a separate patch.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index cbc7dff04..c9029e989 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2899,8 +2899,12 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
- /* Set EWS bit i.e. enable write-safe */
- DPAA2_SET_FLC_EWS(flc);
+ /* TODO - check the perf impact or
+ * align as per descriptor type
+ * Set EWS bit i.e. enable write-safe
+ * DPAA2_SET_FLC_EWS(flc);
+ */
+
/* Set BS = 1 i.e reuse input buffers as output buffers */
DPAA2_SET_FLC_REUSE_BS(flc);
/* Set FF = 10; reuse input buffers if they provide sufficient space */
--
2.17.1
* [dpdk-dev] [PATCH 11/20] crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (9 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 12/20] crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP Akhil Goyal
` (10 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg, Akhil Goyal
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the following algo combinations (ENC-AUTH):
- AES-SNOW
- SNOW-AES
- AES-ZUC
- ZUC-AES
- SNOW-ZUC
- ZUC-SNOW
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 211 ++++++++++++++++--------
1 file changed, 140 insertions(+), 71 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 0b074ec80..764daf21c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1021,21 +1021,21 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -1088,7 +1088,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
if (rta_sec_era <= RTA_SEC_ERA_3)
LOAD(p, CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1111,7 +1111,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
SET_LABEL(p, local_offset);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, 0, RTO);
if (rta_sec_era == RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -1119,7 +1119,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
MATHB(p, SEQINSZ, ADD, ONE, SEQINSZ, 4, 0);
}
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1130,14 +1130,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0))
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
PATCH_MOVE(p, seqin_ptr_read, local_offset);
PATCH_MOVE(p, seqin_ptr_write, local_offset);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
if (rta_sec_era >= RTA_SEC_ERA_5)
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1147,7 +1147,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
else
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
- MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
/*
* TODO: To be changed when proper support is added in RTA (can't load a
* command that is also written by RTA (or patch it for that matter).
@@ -1237,7 +1237,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
DIR_DEC);
/* Read the # of bytes written in the output buffer + 1 (HDR) */
- MATHB(p, VSEQOUTSZ, ADD, ONE, VSEQINSZ, 4, 0);
+ MATHI(p, VSEQOUTSZ, ADD, length, VSEQINSZ, 4, IMMED2);
if (rta_sec_era <= RTA_SEC_ERA_3)
MOVE(p, MATH3, 0, IFIFOAB1, 0, 8, IMMED);
@@ -1340,24 +1340,24 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
}
if (dir == OP_TYPE_ENCAP_PROTOCOL)
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2, 4,
- IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3, 4,
- IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2, 4,
+ IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3, 4,
+ IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1365,7 +1365,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1485,14 +1485,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1606,14 +1606,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1729,19 +1729,19 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2,
- 4, IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3,
- 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2,
+ 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3,
+ 4, IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1749,7 +1749,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
@@ -1798,13 +1798,13 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
/* Put ICV to M0 before sending it to C2 for comparison. */
- MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
NFIFOENTRY_DEST_CLASS2 |
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
}
PATCH_JUMP(p, pkeyjump, keyjump);
@@ -1871,14 +1871,14 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
- MOVE(p, MATH0, 7, IFIFOAB1, 0, 1, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1889,7 +1889,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1901,7 +1901,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
@@ -1911,12 +1911,12 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
DIR_ENC);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1937,7 +1937,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
@@ -1969,7 +1969,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC1 |
NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
}
return 0;
@@ -2080,6 +2080,8 @@ insert_hfn_ov_op(struct program *p,
{
uint32_t imm = PDCP_DPOVRD_HFN_OV_EN;
uint16_t hfn_pdb_offset;
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
if (rta_sec_era == RTA_SEC_ERA_2 && !era_2_sw_hfn_ovrd)
return 0;
@@ -2115,13 +2117,10 @@ insert_hfn_ov_op(struct program *p,
SEQSTORE(p, MATH0, 4, 4, 0);
}
- if (rta_sec_era >= RTA_SEC_ERA_8)
- JUMP(p, 6, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- else
- JUMP(p, 5, LOCAL_JUMP, ALL_TRUE, MATH_Z);
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, MATH_Z);
if (rta_sec_era > RTA_SEC_ERA_2)
- MATHB(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
+ MATHI(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
else
MATHB(p, MATH0, LSHIFT, shift, MATH0, 4, IMMED2);
@@ -2136,6 +2135,8 @@ insert_hfn_ov_op(struct program *p,
*/
MATHB(p, DPOVRD, AND, ZERO, DPOVRD, 4, STL);
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
return 0;
}
@@ -2271,14 +2272,45 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
/*
* PDCP UPlane PDB creation function
*/
-static inline int
+static inline enum pdb_type_e
cnstr_pdcp_u_plane_pdb(struct program *p,
enum pdcp_sn_size sn_size,
uint32_t hfn, unsigned short bearer,
unsigned short direction,
- uint32_t hfn_threshold)
+ uint32_t hfn_threshold,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata)
{
struct pdcp_pdb pdb;
+ enum pdb_type_e pdb_type = PDCP_PDB_TYPE_FULL_PDB;
+ enum pdb_type_e
+ pdb_mask[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ PDCP_PDB_TYPE_NO_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* AES CTR */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* ZUC-E */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ };
+
/* Read options from user */
/* Depending on sequence number length, the HFN and HFN threshold
* have different lengths.
@@ -2312,6 +2344,12 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
pdb.hfn_thr_res =
hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+
+ if (rta_sec_era <= RTA_SEC_ERA_8) {
+ if (cipherdata && authdata)
+ pdb_type = pdb_mask[cipherdata->algtype]
+ [authdata->algtype];
+ }
break;
default:
@@ -2323,13 +2361,29 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
(direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* copy PDB in descriptor*/
- __rta_out32(p, pdb.opt_res.opt);
- __rta_out32(p, pdb.hfn_res);
- __rta_out32(p, pdb.bearer_dir_res);
- __rta_out32(p, pdb.hfn_thr_res);
+ switch (pdb_type) {
+ case PDCP_PDB_TYPE_NO_PDB:
+ break;
+
+ case PDCP_PDB_TYPE_REDUCED_PDB:
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ break;
- return 0;
+ case PDCP_PDB_TYPE_FULL_PDB:
+ /* copy PDB in descriptor*/
+ __rta_out32(p, pdb.opt_res.opt);
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ __rta_out32(p, pdb.hfn_thr_res);
+
+ break;
+
+ default:
+ return PDCP_PDB_TYPE_INVALID;
+ }
+
+ return pdb_type;
}
/**
* cnstr_shdsc_pdcp_c_plane_encap - Function for creating a PDCP Control Plane
@@ -2736,6 +2790,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2785,15 +2840,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn,
+ bearer, direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2820,6 +2876,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return err;
break;
}
+
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
@@ -2931,6 +2993,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2981,15 +3044,16 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer,
+ direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -3006,6 +3070,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype)
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 12/20] crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (10 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 11/20] crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 13/20] crypto/dpaa2_sec/hw: Support zuc-zuc " Akhil Goyal
` (9 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the AES-AES PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
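The non-protocol path extracts the 18-bit SN by SEQLOADing 3 header bytes into MATH0 and masking with sn_mask. A minimal host-side sketch of the same extraction, assuming the standard 3GPP 18-bit data-PDU layout (SN[17:16] in the low 2 bits of octet 0, SN[15:8] in octet 1, SN[7:0] in octet 2); this is an illustration, not code from the driver:

```c
#include <stdint.h>

/* Extract an 18-bit PDCP sequence number from the first three header
 * octets, mirroring what the descriptor does with SEQLOAD + AND sn_mask.
 * Layout assumed per the 18-bit SN data PDU format. */
static uint32_t pdcp_sn_18(const uint8_t *hdr)
{
	return ((uint32_t)(hdr[0] & 0x03) << 16) |
	       ((uint32_t)hdr[1] << 8) |
	       (uint32_t)hdr[2];
}
```

In the descriptor the equivalent masking is done in-place on MATH registers, so no bytes are moved back to the host.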
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 151 +++++++++++++++++++++++-
1 file changed, 150 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 764daf21c..a476b8bde 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,155 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_aes_aes_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18)) {
+ /* Insert Auth Key */
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ /* Insert Cipher Key */
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+
+ default:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_ENC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ SEQFIFOLOAD(p, SKIP, length, 0);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQINPTR(p, 0, 0, SOP);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_ENABLE,
+ DIR_DEC);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 8, IMMED);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS1 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC1 |
+ NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_cplane_acc_op(struct program *p,
bool swap __maybe_unused,
@@ -2721,7 +2870,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
{ /* AES CTR */
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
- pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_uplane_aes_aes_op, /* AES CMAC */
pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
},
{ /* ZUC-E */
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 13/20] crypto/dpaa2_sec/hw: Support zuc-zuc 18-bit PDCP
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (11 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 12/20] crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 14/20] crypto/dpaa2_sec/hw: Support snow-snow " Akhil Goyal
` (8 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the ZUC-ZUC PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
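The pkeyjump/PATCH_JUMP pair in this descriptor is a two-pass trick: a jump is emitted before its target offset is known, then fixed up once SET_LABEL records the label's position. A hypothetical stand-alone sketch of that pattern (the names and encoding are illustrative placeholders, not the RTA API):

```c
#include <stdint.h>
#include <stddef.h>

/* Two-pass jump patching: emit a jump with a placeholder target, remember
 * its word index, and fix it up once the label's index is known. This
 * mirrors the RTA JUMP()/SET_LABEL()/PATCH_JUMP() sequence. */
struct prog { uint32_t buf[64]; size_t len; };

static size_t emit_jump(struct prog *p)
{
	size_t at = p->len;
	p->buf[p->len++] = 0;			/* placeholder offset */
	return at;
}

static void patch_jump(struct prog *p, size_t at, size_t label)
{
	p->buf[at] = (uint32_t)(label - at);	/* relative word offset */
}

/* Emit a jump over two key-load words, then patch it to land after them. */
static uint32_t demo(void)
{
	struct prog p = { .len = 0 };
	size_t j = emit_jump(&p);
	p.buf[p.len++] = 0x1111;		/* stand-in: KEY1 load */
	p.buf[p.len++] = 0x2222;		/* stand-in: KEY2 load */
	patch_jump(&p, j, p.len);
	return p.buf[j];
}
```

Here the jump fires when the shared descriptor's keys are already loaded (SHRD | SELF | BOTH), skipping the redundant KEY commands.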
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 126 +++++++++++++++++++++++-
1 file changed, 125 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a476b8bde..9fb3d4993 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,130 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_zuc_zuc_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
+
+ if (rta_sec_era < RTA_SEC_ERA_5) {
+ pr_err("Invalid era for selected algorithm\n");
+ return -ENOTSUP;
+ }
+
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF | BOTH);
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ } else {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | FLUSH1);
+ }
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_ENC);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ /* Save ICV */
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, IMMED);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS2 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, WAITCOMP | IMMED);
+ }
+
+ /* Reset ZUCA mode and done interrupt */
+ LOAD(p, CLRW_CLR_C2MODE, CLRW, 0, 4, IMMED);
+ LOAD(p, CIRQ_ZADI, ICTRL, 0, 4, IMMED);
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_aes_aes_op(struct program *p,
bool swap __maybe_unused,
@@ -2877,7 +3001,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
- pdcp_insert_cplane_acc_op /* ZUC-I */
+ pdcp_insert_uplane_zuc_zuc_op /* ZUC-I */
},
};
int err;
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 14/20] crypto/dpaa2_sec/hw: Support snow-snow 18-bit PDCP
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (12 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 13/20] crypto/dpaa2_sec/hw: Support zuc-zuc " Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 15/20] crypto/dpaa2_sec/hw: Support snow-f8 Akhil Goyal
` (7 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the SNOW-SNOW (enc-auth) 18-bit PDCP case
for devices which do not support the PROTOCOL command.
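In this descriptor, the MATHB/MATHI ops size the output sequence relative to the input: encap appends a 4-byte MAC-I (SEQINSZ + PDCP_MAC_I_LEN into VSEQOUTSZ), decap strips and verifies it. A trivial sketch of that bookkeeping, using the same constant:

```c
#include <stdint.h>

#define PDCP_MAC_I_LEN 4	/* 32-bit MAC-I, as in the descriptor */

/* Output length computed by the VSEQOUTSZ math above: encapsulation
 * appends the 4-byte MAC-I, decapsulation removes it after checking. */
static uint32_t pdcp_out_len(uint32_t in_len, int encap)
{
	return encap ? in_len + PDCP_MAC_I_LEN : in_len - PDCP_MAC_I_LEN;
}
```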
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 133 +++++++++++++++++++++++-
1 file changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9fb3d4993..b514914ec 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,137 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_snow_snow_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ SEQSTORE(p, MATH0, offset, length, 0);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, WAITCOMP | IMMED);
+ if (swap == false) {
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK),
+ MATH3, 4, IMMED2);
+ } else {
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK_BE),
+ MATH3, 4, IMMED2);
+ }
+ MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
+
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ } else {
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQINSZ, 4, IMMED2);
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ else
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_DEC);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST2);
+ SEQFIFOLOAD(p, MSG1, 4, LAST1 | FLUSH1);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CLASS1 | NOP | NIFP);
+
+ if (rta_sec_era >= RTA_SEC_ERA_6)
+ LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
+
+ MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+
+ NFIFOADD(p, IFIFO, ICV2, 4, LAST2);
+
+ if (rta_sec_era <= RTA_SEC_ERA_2) {
+ /* Shut off automatic Info FIFO entries */
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, MATH0, 0, IFIFOAB2, 0, 4, WAITCOMP | IMMED);
+ } else {
+ MOVE(p, MATH0, 0, IFIFO, 0, 4, WAITCOMP | IMMED);
+ }
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_zuc_zuc_op(struct program *p,
bool swap __maybe_unused,
@@ -2987,7 +3118,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
},
{ /* SNOW f8 */
pdcp_insert_cplane_enc_only_op, /* NULL */
- pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_uplane_snow_snow_op, /* SNOW f9 */
pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
},
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 15/20] crypto/dpaa2_sec/hw: Support snow-f8
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (13 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 14/20] crypto/dpaa2_sec/hw: Support snow-snow " Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 16/20] crypto/dpaa2_sec/hw: Support snow-f9 Akhil Goyal
` (6 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for the non-protocol offload mode
of the snow-f8 algo.
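Before this change, the shared descriptor baked COUNT/BEARER/DIRECTION into an 8-byte immediate context, so one descriptor could serve only one flow; now a 16-byte context is SEQLOADed per packet. The removed packing, kept here as a reference sketch of the UEA2 context layout the old code used:

```c
#include <stdint.h>

/* Pack the SNOW f8 (UEA2) context words exactly as the removed
 * immediate-LOAD path did: word 0 is COUNT, word 1 carries the 5-bit
 * BEARER in bits 31:27 and the 1-bit DIRECTION in bit 26. */
static void snow_f8_ctx(uint32_t count, uint8_t bearer, uint8_t dir,
			uint32_t ctx[2])
{
	ctx[0] = count;
	ctx[1] = ((uint32_t)(bearer & 0x1f) << 27) |
		 ((uint32_t)(dir & 1) << 26);
}
```

With the new SEQLOAD(CONTEXT1, 0, 16) form, the caller supplies this material in the per-operation IV instead of at descriptor-build time.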
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index b6cfa8704..2a307a3e1 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -24,43 +24,33 @@
* @swap: must be true when core endianness doesn't match SEC endianness
* @cipherdata: pointer to block cipher transform definitions
* @dir: Cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @bearer: UEA2 bearer ID (5 bits)
- * @direction: UEA2 direction (1 bit)
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint32_t ct = count;
- uint8_t br = bearer;
- uint8_t dr = direction;
- uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
-
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 16/20] crypto/dpaa2_sec/hw: Support snow-f9
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (14 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 15/20] crypto/dpaa2_sec/hw: Support snow-f8 Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 17/20] crypto/dpaa2_sec: Support snow3g cipher/integrity Akhil Goyal
` (5 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
Add support for snow-f9 in non-PDCP protocol offload mode.
This essentially adds support to pass a pre-computed IV to SEC.
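The rework replaces the count/fresh/direction immediates with a caller-supplied IV (SEQLOADed into CONTEXT2) plus a chk_icv flag that selects between MAC generation and verification. A hedged sketch of that selection logic, using placeholder constants rather than the real SEC encodings:

```c
/* Illustrative placeholders; the driver uses ICV_CHECK_* and DIR_* from
 * the SEC headers, whose numeric values are not assumed here. */
enum { MY_ICV_CHECK_DISABLE = 0, MY_ICV_CHECK_ENABLE = 1 };
enum { MY_DIR_ENC = 0, MY_DIR_DEC = 1 };

/* Mirror of `int dir = chk_icv ? DIR_DEC : DIR_ENC;` in the reworked
 * cnstr_shdsc_snow_f9(): generation runs the class-2 CHA forward and
 * SEQSTOREs the MAC; verification runs with ICV checking enabled and
 * loads the expected ICV from the input sequence. */
static int snow_f9_dir(int chk_icv)
{
	return chk_icv == MY_ICV_CHECK_ENABLE ? MY_DIR_DEC : MY_DIR_ENC;
}
```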
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 51 +++++++++++++------------
1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 2a307a3e1..5e8e5e79c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -64,48 +64,49 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
* @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @fresh: UEA2 fresh value ID (32 bits)
- * @direction: UEA2 direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir, uint32_t count,
- uint32_t fresh, uint8_t direction, uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t fr = fresh;
- uint64_t dr = direction;
- uint64_t context[2];
-
- context[0] = (ct << 32) | (dr << 26);
- context[1] = fr << 32;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab64(context[0]);
- context[1] = swab64(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
- /* Save lower half of MAC out into a 32-bit sequence */
- SEQSTORE(p, CONTEXT2, 0, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 17/20] crypto/dpaa2_sec: Support snow3g cipher/integrity
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (15 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 16/20] crypto/dpaa2_sec/hw: Support snow-f9 Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 18/20] crypto/dpaa2_sec/hw: Support kasumi Akhil Goyal
` (4 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add a basic framework to use snow3g f8- and f9-based
ciphering or integrity with the direct crypto APIs.
This patch does not yet support any combined (cipher+auth) usage.
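For SNOW3G UIA2 and ZUC EIA3, the crypto API expresses auth offset and length in bits; the FD builder below rejects non-byte-aligned values and shifts down to bytes. A minimal sketch of that conversion:

```c
#include <stdint.h>

/* Convert a bit count from rte_crypto auth.data (SNOW3G/ZUC report
 * lengths in bits) to bytes, refusing values that are not a whole
 * number of bytes, as build_auth_fd() does. Returns -1 on error. */
static int bits_to_bytes(uint32_t bits)
{
	if (bits & 7)
		return -1;		/* partial byte: unsupported */
	return (int)(bits >> 3);
}
```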
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 236 ++++++++++++++------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 46 +++-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 30 +++
3 files changed, 244 insertions(+), 68 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index c9029e989..3dd9112a7 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -957,11 +957,26 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
struct qbman_fle *fle, *sge;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ int data_len, data_offset;
uint8_t *old_digest;
int retval;
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->auth.data.length;
+ data_offset = sym_op->auth.data.offset;
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
DPAA2_SEC_ERR("AUTH Memory alloc failed for SGE");
@@ -977,64 +992,72 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
fle = fle + 1;
+ sge = fle + 2;
if (likely(bpid < MAX_BPID)) {
DPAA2_SET_FD_BPID(fd, bpid);
DPAA2_SET_FLE_BPID(fle, bpid);
DPAA2_SET_FLE_BPID(fle + 1, bpid);
+ DPAA2_SET_FLE_BPID(sge, bpid);
+ DPAA2_SET_FLE_BPID(sge + 1, bpid);
} else {
DPAA2_SET_FD_IVP(fd);
DPAA2_SET_FLE_IVP(fle);
DPAA2_SET_FLE_IVP((fle + 1));
+ DPAA2_SET_FLE_IVP(sge);
+ DPAA2_SET_FLE_IVP((sge + 1));
}
+
flc = &priv->flc_desc[DESC_INITFINAL].flc;
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
fle->length = sess->digest_length;
-
- DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_COMPOUND_FMT(fd);
fle++;
- if (sess->dir == DIR_ENC) {
- DPAA2_SET_FLE_ADDR(fle,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
- fle->length = sym_op->auth.data.length;
- } else {
- sge = fle + 2;
- DPAA2_SET_FLE_SG_EXT(fle);
- DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ /* Setting input FLE */
+ DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(fle);
+ fle->length = data_len;
- if (likely(bpid < MAX_BPID)) {
- DPAA2_SET_FLE_BPID(sge, bpid);
- DPAA2_SET_FLE_BPID(sge + 1, bpid);
+ if (sess->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sess->iv.offset);
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sge->length = 12;
} else {
- DPAA2_SET_FLE_IVP(sge);
- DPAA2_SET_FLE_IVP((sge + 1));
+ sge->length = sess->iv.length;
}
- DPAA2_SET_FLE_ADDR(sge,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
- sess->digest_length);
- sge->length = sym_op->auth.data.length;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+ fle->length = fle->length + sge->length;
+ sge++;
+ }
+
+ /* Setting data to authenticate */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+ sge->length = data_len;
+
+ if (sess->dir == DIR_DEC) {
sge++;
old_digest = (uint8_t *)(sge + 1);
rte_memcpy(old_digest, sym_op->auth.digest.data,
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
sge->length = sess->digest_length;
- fle->length = sym_op->auth.data.length +
- sess->digest_length;
- DPAA2_SET_FLE_FIN(sge);
+ fle->length = fle->length + sess->digest_length;
}
+
+ DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
+ DPAA2_SET_FD_LEN(fd, fle->length);
return 0;
}
@@ -1045,6 +1068,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *ip_fle, *op_fle, *sge, *fle;
+ int data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
struct rte_mbuf *mbuf;
@@ -1053,6 +1077,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
mbuf = sym_op->m_dst;
else
@@ -1078,20 +1116,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
/* o/p fle */
DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
- op_fle->length = sym_op->cipher.data.length;
+ op_fle->length = data_len;
DPAA2_SET_FLE_SG_EXT(op_fle);
/* o/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset + mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* o/p segs */
@@ -1113,7 +1151,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
mbuf = sym_op->m_src;
sge++;
DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
- ip_fle->length = sess->iv.length + sym_op->cipher.data.length;
+ ip_fle->length = sess->iv.length + data_len;
DPAA2_SET_FLE_SG_EXT(ip_fle);
/* i/p IV */
@@ -1125,9 +1163,8 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
/* i/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* i/p segs */
@@ -1164,7 +1201,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *fle, *sge;
- int retval;
+ int retval, data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1173,6 +1210,20 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
dst = sym_op->m_dst;
else
@@ -1211,24 +1262,22 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
flc = &priv->flc_desc[0].flc;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
- sess->iv.length);
+ DPAA2_SET_FD_LEN(fd, data_len + sess->iv.length);
DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SEC_DP_DEBUG(
"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(dst));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
- dst->data_off);
+ DPAA2_SET_FLE_OFFSET(fle, data_offset + dst->data_off);
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SEC_DP_DEBUG(
"CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
@@ -1238,7 +1287,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle++;
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SET_FLE_SG_EXT(fle);
@@ -1247,10 +1296,9 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- sym_op->m_src->data_off);
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
- sge->length = sym_op->cipher.data.length;
+ sge->length = data_len;
DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
@@ -1761,32 +1809,60 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
/* Set IV parameters */
session->iv.offset = xform->cipher.iv.offset;
session->iv.length = xform->cipher.iv.length;
+ session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ DIR_ENC : DIR_DEC;
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_3DES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CTR;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CTR:
+ cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ cipherdata.algtype = OP_ALG_ALGSEL_SNOW_F8;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
+ bufsize = cnstr_shdsc_snow_f8(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_XTS:
- case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_ARC4:
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
- case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
DPAA2_SEC_ERR("Crypto: Unsupported Cipher alg %u",
xform->cipher.algo);
@@ -1796,12 +1872,7 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
xform->cipher.algo);
goto error_out;
}
- session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0, SHR_NEVER,
- &cipherdata, NULL, session->iv.length,
- session->dir);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Descriptor build failed");
goto error_out;
@@ -1864,40 +1935,77 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
authdata.key_type = RTA_DATA_IMM;
session->digest_length = xform->auth.digest_length;
+ session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+ DIR_ENC : DIR_DEC;
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
authdata.algtype = OP_ALG_ALGSEL_MD5;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA256;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA384;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA512;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA224;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ authdata.algtype = OP_ALG_ALGSEL_SNOW_F9;
+ authdata.algmode = OP_ALG_AAI_F9;
+ session->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ bufsize = cnstr_shdsc_snow_f9(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, &authdata,
+ !session->dir,
+ session->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -1906,10 +2014,9 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_SHA384:
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_GMAC:
- case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_AES_CMAC:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
- case RTE_CRYPTO_AUTH_ZUC_EIA3:
DPAA2_SEC_ERR("Crypto: Unsupported auth alg %un",
xform->auth.algo);
goto error_out;
@@ -1918,12 +2025,7 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
xform->auth.algo);
goto error_out;
}
- session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
- 1, 0, SHR_NEVER, &authdata, !session->dir,
- session->digest_length);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Invalid buffer length");
goto error_out;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index d0933d660..41734b471 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -408,7 +408,51 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
-
+ { /* SNOW 3G (UIA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SNOW 3G (UEA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 5e8e5e79c..e01b9b4c9 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -57,6 +57,36 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
return PROGRAM_FINALIZE(p);
}
+/**
+ * conv_to_snow_f9_iv - convert a 16-byte SNOW/f9 (UIA2) IV into the
+ * 12-byte form used for 3G.
+ * @iv: 16-byte original IV data
+ *
+ * Return: pointer to 12-byte IV data as understood by SEC HW
+ */
+
+static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
+{
+ uint8_t temp = (iv[8] == iv[0]) ? 0 : 4;
+
+ iv[12] = iv[4];
+ iv[13] = iv[5];
+ iv[14] = iv[6];
+ iv[15] = iv[7];
+
+ iv[8] = temp;
+ iv[9] = 0x00;
+ iv[10] = 0x00;
+ iv[11] = 0x00;
+
+ iv[4] = iv[0];
+ iv[5] = iv[1];
+ iv[6] = iv[2];
+ iv[7] = iv[3];
+
+ return (iv + 4);
+}
+
/**
* cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 18/20] crypto/dpaa2_sec/hw: Support kasumi
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (16 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 17/20] crypto/dpaa2_sec: Support snow3g cipher/integrity Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 19/20] crypto/dpaa2_sec/hw: Support zuc cipher/integrity Akhil Goyal
` (3 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
Add Kasumi processing for non-PDCP protocol offload cases.
Also add support for a pre-computed IV in Kasumi f9.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 64 +++++++++++--------------
1 file changed, 29 insertions(+), 35 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index e01b9b4c9..88ab40da5 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -349,34 +349,25 @@ cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
*/
static inline int
cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t br = bearer;
- uint64_t dr = direction;
- uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
-
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+ SEQLOAD(p, CONTEXT1, 0, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -390,46 +381,49 @@ cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
- * @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: count value (32 bits)
- * @fresh: fresh value ID (32 bits)
- * @direction: direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir,
- uint32_t count, uint32_t fresh, uint8_t direction,
- uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint16_t ctx_offset = 16;
- uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- context[2] = swab32(context[2]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
- /* Save output MAC of DWORD 2 into a 32-bit sequence */
- SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 19/20] crypto/dpaa2_sec/hw: Support zuc cipher/integrity
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (17 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 18/20] crypto/dpaa2_sec/hw: Support kasumi Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 20/20] crypto/dpaa2_sec: Support zuc ciphering Akhil Goyal
` (2 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 94 +++++++++++++++++++++++++
1 file changed, 94 insertions(+)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 88ab40da5..689be670b 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -17,6 +17,100 @@
* Shared descriptors for algorithms (i.e. not for protocols).
*/
+/**
+ * cnstr_shdsc_zuce - ZUC Enc (EEA3) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: cipher direction (DIR_ENC/DIR_DEC)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuce(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *cipherdata, uint8_t dir)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE, OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL, 0, dir);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_zuca - ZUC Auth (EIA3) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuca(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
+{
+ struct program prg;
+ struct program *p = &prg;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA, OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+
/**
* cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 20/20] crypto/dpaa2_sec: Support zuc ciphering
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (18 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 19/20] crypto/dpaa2_sec/hw: Support zuc cipher/integrity Akhil Goyal
@ 2019-09-02 12:17 ` Akhil Goyal
2019-09-03 14:39 ` [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Aaron Conole
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-02 12:17 UTC (permalink / raw)
To: dev; +Cc: hemant.agrawal, vakul.garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 12 ++++++++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 20 ++++++++++++++++++++
2 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3dd9112a7..e74cfb2e3 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1856,8 +1856,14 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
&cipherdata,
session->dir);
break;
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ cipherdata.algtype = OP_ALG_ALGSEL_ZUCE;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_ZUC_EEA3;
+ bufsize = cnstr_shdsc_zuce(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
@@ -2004,8 +2010,8 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
!session->dir,
session->digest_length);
break;
- case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -2316,6 +2322,7 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
@@ -2610,6 +2617,7 @@ dpaa2_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
cipherdata->algtype = OP_PCL_IPSEC_NULL;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_KASUMI_F8:
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 41734b471..1e9b67b4a 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -453,6 +453,26 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
+ { /* ZUC (EEA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (19 preceding siblings ...)
2019-09-02 12:17 ` [dpdk-dev] [PATCH 20/20] crypto/dpaa2_sec: Support zuc ciphering Akhil Goyal
@ 2019-09-03 14:39 ` Aaron Conole
2019-09-03 14:42 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
21 siblings, 1 reply; 75+ messages in thread
From: Aaron Conole @ 2019-09-03 14:39 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, hemant.agrawal, vakul.garg
Akhil Goyal <akhil.goyal@nxp.com> writes:
> PDCP protocol offload using rte_security are supported in
> dpaa2_sec and dpaa_sec drivers.
>
> Wireless algos(SNOW/ZUC) without protocol offload are also
> supported as per crypto APIs.
Hi Akhil,
I didn't bisect this series, but it seems there's some kind of build
error:
https://travis-ci.com/ovsrobot/dpdk/builds/125436276
Hope you can take a look.
-Aaron
>
> Akhil Goyal (5):
> security: add hfn override option in PDCP
> crypto/dpaaX_sec: update dpovrd for hfn override in PDCP
> crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
> crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8
> crypto/dpaa_sec: support scatter gather for pdcp
>
> Hemant Agrawal (4):
> crypto/dpaa2_sec: support CAAM HW era 10
> crypto/dpaa2_sec: support scatter gather for proto offloads
> crypto/dpaa2_sec: Support snow3g cipher/integrity
> crypto/dpaa2_sec: Support zuc ciphering
>
> Vakul Garg (11):
> drivers/crypto: Support PDCP 12-bit c-plane processing
> drivers/crypto: Support PDCP u-plane with integrity
> crypto/dpaa2_sec: disable 'write-safe' for PDCP
> crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases
> crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP
> crypto/dpaa2_sec/hw: Support zuc-zuc 18-bit PDCP
> crypto/dpaa2_sec/hw: Support snow-snow 18-bit PDCP
> crypto/dpaa2_sec/hw: Support snow-f8
> crypto/dpaa2_sec/hw: Support snow-f9
> crypto/dpaa2_sec/hw: Support kasumi
> crypto/dpaa2_sec/hw: Support zuc cipher/integrity
>
> drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 477 ++++--
> drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
> drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 70 +-
> drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/desc/algo.h | 259 ++-
> drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
> .../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
> drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
> drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
> drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
> drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
> drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
> drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
> .../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
> .../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
> .../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
> .../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
> drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
> drivers/crypto/dpaa_sec/dpaa_sec.c | 262 +++-
> drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +-
> lib/librte_security/rte_security.h | 4 +-
> 22 files changed, 2082 insertions(+), 528 deletions(-)
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos
2019-09-03 14:39 ` [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Aaron Conole
@ 2019-09-03 14:42 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-03 14:42 UTC (permalink / raw)
To: Aaron Conole; +Cc: dev, Hemant Agrawal, Vakul Garg
>
> > PDCP protocol offload using rte_security are supported in
> > dpaa2_sec and dpaa_sec drivers.
> >
> > Wireless algos(SNOW/ZUC) without protocol offload are also
> > supported as per crypto APIs.
>
> Hi Akhil,
>
> I didn't bisect this series, but it seems there's some kind of build
> error:
>
> https://travis-ci.com/ovsrobot/dpdk/builds/125436276
>
> Hope you can take a look.
>
> -Aaron
>
Hi Aaron,
Yes I also saw that issue on clang. Will be sending the next version soon.
Thanks,
Akhil
> >
> > Akhil Goyal (5):
> > security: add hfn override option in PDCP
> > crypto/dpaaX_sec: update dpovrd for hfn override in PDCP
> > crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
> > crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8
> > crypto/dpaa_sec: support scatter gather for pdcp
> >
> > Hemant Agrawal (4):
> > crypto/dpaa2_sec: support CAAM HW era 10
> > crypto/dpaa2_sec: support scatter gather for proto offloads
> > crypto/dpaa2_sec: Support snow3g cipher/integrity
> > crypto/dpaa2_sec: Support zuc ciphering
> >
> > Vakul Garg (11):
> > drivers/crypto: Support PDCP 12-bit c-plane processing
> > drivers/crypto: Support PDCP u-plane with integrity
> > crypto/dpaa2_sec: disable 'write-safe' for PDCP
> > crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases
> > crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP
> > crypto/dpaa2_sec/hw: Support zuc-zuc 18-bit PDCP
> > crypto/dpaa2_sec/hw: Support snow-snow 18-bit PDCP
> > crypto/dpaa2_sec/hw: Support snow-f8
> > crypto/dpaa2_sec/hw: Support snow-f9
> > crypto/dpaa2_sec/hw: Support kasumi
> > crypto/dpaa2_sec/hw: Support zuc cipher/integrity
> >
> > drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 477 ++++--
> > drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
> > drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 70 +-
> > drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
> > drivers/crypto/dpaa2_sec/hw/desc/algo.h | 259 ++-
> > drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
> > .../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
> > drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
> > drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
> > drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
> > drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
> > drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
> > drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
> > drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
> > .../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
> > .../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
> > .../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
> > .../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
> > drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
> > drivers/crypto/dpaa_sec/dpaa_sec.c | 262 +++-
> > drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +-
> > lib/librte_security/rte_security.h | 4 +-
> > 22 files changed, 2082 insertions(+), 528 deletions(-)
* Re: [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP
2019-09-02 12:17 ` [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP Akhil Goyal
@ 2019-09-19 15:31 ` Akhil Goyal
2019-09-24 11:36 ` Ananyev, Konstantin
2019-09-25 7:18 ` Anoob Joseph
0 siblings, 2 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-19 15:31 UTC (permalink / raw)
To: konstantin.ananyev, anoobj, Radu Nicolau
Cc: Hemant Agrawal, Vakul Garg, dev, Akhil Goyal
Hi Konstantin/Anoob/Radu,
Any comments on this patch?
Regards,
Akhil
>
> HFN can also be given as a per-packet value.
> As PDCP has no IV, and the HFN is used to
> generate the IV, the IV field can carry the
> per-packet HFN during enqueue/dequeue.
> If the hfn_ovrd field in pdcp_xform is set,
> the application is expected to set the per-packet
> HFN in place of the IV. The driver will extract
> the HFN and perform operations accordingly.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
> lib/librte_security/rte_security.h | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_security/rte_security.h
> b/lib/librte_security/rte_security.h
> index 96806e3a2..4452545fe 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2017 NXP.
> + * Copyright 2017,2019 NXP
> * Copyright(c) 2017 Intel Corporation.
> */
>
> @@ -270,6 +270,8 @@ struct rte_security_pdcp_xform {
> uint32_t hfn;
> /** HFN Threshold for key renegotiation */
> uint32_t hfn_threshold;
> + /** Enable per packet HFN override */
> + uint32_t hfn_ovrd;
> };
>
> /**
> --
> 2.17.1
* Re: [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP
2019-09-19 15:31 ` Akhil Goyal
@ 2019-09-24 11:36 ` Ananyev, Konstantin
2019-09-25 7:18 ` Anoob Joseph
1 sibling, 0 replies; 75+ messages in thread
From: Ananyev, Konstantin @ 2019-09-24 11:36 UTC (permalink / raw)
To: Akhil Goyal, anoobj, Nicolau, Radu; +Cc: Hemant Agrawal, Vakul Garg, dev
> > HFN can also be given as a per-packet value.
> > As PDCP has no IV, and the HFN is used to
> > generate the IV, the IV field can carry the
> > per-packet HFN during enqueue/dequeue.
> > If the hfn_ovrd field in pdcp_xform is set,
> > the application is expected to set the per-packet
> > HFN in place of the IV. The driver will extract
> > the HFN and perform operations accordingly.
> >
> > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > ---
> > lib/librte_security/rte_security.h | 4 +++-
> > 1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_security/rte_security.h
> > b/lib/librte_security/rte_security.h
> > index 96806e3a2..4452545fe 100644
> > --- a/lib/librte_security/rte_security.h
> > +++ b/lib/librte_security/rte_security.h
> > @@ -1,5 +1,5 @@
> > /* SPDX-License-Identifier: BSD-3-Clause
> > - * Copyright 2017 NXP.
> > + * Copyright 2017,2019 NXP
> > * Copyright(c) 2017 Intel Corporation.
> > */
> >
> > @@ -270,6 +270,8 @@ struct rte_security_pdcp_xform {
> > uint32_t hfn;
> > /** HFN Threshold for key renegotiation */
> > uint32_t hfn_threshold;
> > + /** Enable per packet HFN override */
> > + uint32_t hfn_ovrd;
> > };
> >
> > /**
> > --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > 2.17.1
* Re: [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP
2019-09-19 15:31 ` Akhil Goyal
2019-09-24 11:36 ` Ananyev, Konstantin
@ 2019-09-25 7:18 ` Anoob Joseph
2019-09-27 15:06 ` Akhil Goyal
1 sibling, 1 reply; 75+ messages in thread
From: Anoob Joseph @ 2019-09-25 7:18 UTC (permalink / raw)
To: Akhil Goyal, konstantin.ananyev, Radu Nicolau
Cc: Hemant Agrawal, Vakul Garg, dev, Narayana Prasad Raju Athreya,
Jerin Jacob Kollanukkaran
Hi Akhil,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Thursday, September 19, 2019 9:01 PM
> To: konstantin.ananyev@intel.com; Anoob Joseph <anoobj@marvell.com>;
> Radu Nicolau <radu.nicolau@intel.com>
> Cc: Hemant Agrawal <hemant.agrawal@nxp.com>; Vakul Garg
> <vakul.garg@nxp.com>; dev@dpdk.org; Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [EXT] RE: [PATCH 03/20] security: add hfn override option in PDCP
>
> External Email
>
> ----------------------------------------------------------------------
> Hi Konstantin/Anoob/Radu,
>
> Any comments on this patch?
>
> Regards,
> Akhil
> >
> > HFN can also be given as a per-packet value.
> > As PDCP has no IV, and the HFN is used to generate the IV,
> > the IV field can carry the per-packet HFN during
> > enqueue/dequeue. If the hfn_ovrd field in pdcp_xform is set,
> > the application is expected to set the per-packet HFN in place
> > of the IV. The driver will extract the HFN and perform
> > operations accordingly.
> >
> > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > ---
> > lib/librte_security/rte_security.h | 4 +++-
> > 1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_security/rte_security.h
> > b/lib/librte_security/rte_security.h
> > index 96806e3a2..4452545fe 100644
> > --- a/lib/librte_security/rte_security.h
> > +++ b/lib/librte_security/rte_security.h
> > @@ -1,5 +1,5 @@
> > /* SPDX-License-Identifier: BSD-3-Clause
> > - * Copyright 2017 NXP.
> > + * Copyright 2017,2019 NXP
> > * Copyright(c) 2017 Intel Corporation.
> > */
> >
> > @@ -270,6 +270,8 @@ struct rte_security_pdcp_xform {
> > uint32_t hfn;
> > /** HFN Threshold for key renegotiation */
> > uint32_t hfn_threshold;
> > + /** Enable per packet HFN override */
> > + uint32_t hfn_ovrd;
[Anoob] I think you should document the fact that the IV field will be used for the HFN. Your patch description accurately describes the procedure, but the comment above fails to capture it. I would also suggest renaming "hfn_ovrd" to something that makes it obvious the IV field is being used, such as use_iv_for_hfn.
Otherwise, I don't see any issues with the approach.
> > };
> >
> > /**
> > --
> > 2.17.1
* Re: [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP
2019-09-25 7:18 ` Anoob Joseph
@ 2019-09-27 15:06 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-27 15:06 UTC (permalink / raw)
To: Anoob Joseph, konstantin.ananyev, Radu Nicolau
Cc: Hemant Agrawal, Vakul Garg, dev, Narayana Prasad Raju Athreya,
Jerin Jacob Kollanukkaran
Hi Anoob,
Thanks for review.
> > > @@ -270,6 +270,8 @@ struct rte_security_pdcp_xform {
> > > uint32_t hfn;
> > > /** HFN Threshold for key renegotiation */
> > > uint32_t hfn_threshold;
> > > + /** Enable per packet HFN override */
> > > + uint32_t hfn_ovrd;
>
> [Anoob] I think you should document the fact that the IV field will be
> used for the HFN. Your patch description accurately describes the
> procedure, but the comment above fails to capture it. I would also
> suggest renaming "hfn_ovrd" to something that makes it obvious the IV
> field is being used, such as use_iv_for_hfn.
Will add comments here.
/** HFN can also be given as a per-packet value.
 * As PDCP has no IV, and the HFN is used to
 * generate the IV, the IV field can carry the
 * per-packet HFN during enqueue/dequeue.
 * If the hfn_ovrd field is set, the user is
 * expected to set the per-packet HFN in place
 * of the IV. PMDs will extract the HFN and
 * perform operations accordingly.
 */
But using a different name may not be helpful. Here we want to convey
that the HFN can be overridden by a per-packet value, and the usage is
now explained in the comment. I believe hfn_ovrd is enough to express
the intent, though I do not hold a strong opinion on this.
Will send the v2 shortly.
>
> Otherwise, I don't see any issues with the approach.
>
> > > };
> > >
> > > /**
> > > --
> > > 2.17.1
* [dpdk-dev] [PATCH v2 00/20] crypto/dpaaX_sec: Support Wireless algos
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
` (20 preceding siblings ...)
2019-09-03 14:39 ` [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Aaron Conole
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 01/20] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
` (21 more replies)
21 siblings, 22 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
PDCP protocol offload using rte_security is supported in
the dpaa2_sec and dpaa_sec drivers.
Wireless algos (SNOW/ZUC) without protocol offload are also
supported through the standard crypto APIs.
Changes in v2:
- fix clang build
- enable zuc authentication
- minor fixes
Akhil Goyal (5):
security: add hfn override option in PDCP
drivers/crypto: support hfn override for NXP PMDs
crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
crypto/dpaa_sec: support scatter gather for PDCP
Hemant Agrawal (4):
crypto/dpaa2_sec: support CAAM HW era 10
crypto/dpaa2_sec: support scatter gather for proto offloads
crypto/dpaa2_sec: support snow3g cipher/integrity
crypto/dpaa2_sec: support zuc ciphering/integrity
Vakul Garg (11):
drivers/crypto: support PDCP 12-bit c-plane processing
drivers/crypto: support PDCP u-plane with integrity
crypto/dpaa2_sec: disable 'write-safe' for PDCP
crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
crypto/dpaa2_sec/hw: support snow-f8
crypto/dpaa2_sec/hw: support snow-f9
crypto/dpaa2_sec/hw: support kasumi
crypto/dpaa2_sec/hw: support ZUCE and ZUCA
drivers/crypto/dpaa2_sec/Makefile | 1 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 564 +++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 95 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 295 +++-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 264 +++-
drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +-
lib/librte_security/rte_security.h | 11 +-
23 files changed, 2221 insertions(+), 547 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH v2 01/20] drivers/crypto: support PDCP 12-bit c-plane processing
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 02/20] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
` (20 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
Added support for 12-bit c-plane. We implement it using the 'u-plane
for RN' protocol descriptors, because the 'c-plane' protocol
descriptors assume 5-bit sequence numbers. Since the crypto processing
remains the same irrespective of c-plane or u-plane, we choose the
'u-plane for RN' protocol descriptors to implement the 12-bit c-plane.
The 'u-plane for RN' protocol descriptors support both confidentiality
and integrity (required for c-plane) for 7/12/15-bit sequence numbers.
On little-endian platforms, an incorrect IV is generated if the MOVE
command is used in the PDCP non-proto descriptors. This is because the
MOVE command treats data as a word. We changed MOVE to MOVEB since we
require the data to be treated as a byte array. The change works on
both LS1046 and LS2088.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/Makefile | 1 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 7 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 3 +-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 176 ++++++++++++++----
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 7 +-
6 files changed, 159 insertions(+), 41 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/Makefile b/drivers/crypto/dpaa2_sec/Makefile
index 9c6657e52..6c882571d 100644
--- a/drivers/crypto/dpaa2_sec/Makefile
+++ b/drivers/crypto/dpaa2_sec/Makefile
@@ -13,6 +13,7 @@ LIB = librte_pmd_dpaa2_sec.a
CFLAGS += -DALLOW_EXPERIMENTAL_API
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -Wno-enum-conversion
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
ifeq ($(shell test $(GCC_VERSION) -gt 70 && echo 1), 1)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index f162eeed2..75cd2c0b5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2695,9 +2695,10 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
@@ -2748,6 +2749,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ pdcp_xform->sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
@@ -2757,6 +2759,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_decap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ pdcp_xform->sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index 5d99dd8af..e12c3db2f 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -621,6 +621,7 @@
#define OP_PCLID_LTE_PDCP_USER (0x42 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL (0x43 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL_MIXED (0x44 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER_RN (0x45 << OP_PCLID_SHIFT)
/*
* ProtocolInfo selectors
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index fee844100..607c587e2 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -253,6 +253,7 @@ pdcp_insert_cplane_null_op(struct program *p,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size __maybe_unused,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(local_offset);
@@ -413,8 +414,18 @@ pdcp_insert_cplane_int_only_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata, unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -720,6 +731,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
/* Insert Cipher Key */
@@ -727,8 +739,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
cipherdata->keylen, INLINE_KEY(cipherdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- (uint16_t)cipherdata->algtype << 8);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ (uint16_t)cipherdata->algtype << 8);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)cipherdata->algtype << 8);
return 0;
}
@@ -742,12 +758,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 7, 1, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -771,7 +787,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -802,8 +818,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4,
@@ -848,6 +864,7 @@ pdcp_insert_cplane_acc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_hfn_ovrd __maybe_unused)
{
/* Insert Auth Key */
@@ -857,7 +874,14 @@ pdcp_insert_cplane_acc_op(struct program *p,
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL, (uint16_t)cipherdata->algtype);
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL,
+ (uint16_t)cipherdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -868,6 +892,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
LABEL(back_to_sd_offset);
@@ -887,9 +912,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- ((uint16_t)cipherdata->algtype << 8) |
- (uint16_t)authdata->algtype);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -1174,6 +1204,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1182,7 +1213,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1281,6 +1319,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1300,7 +1339,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1376,6 +1422,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1393,7 +1440,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1474,6 +1528,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1491,7 +1546,14 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1594,6 +1656,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
if (rta_sec_era < RTA_SEC_ERA_5) {
@@ -1602,12 +1665,19 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
if (rta_sec_era >= RTA_SEC_ERA_8) {
+ int pclid;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1754,7 +1824,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 6, 2, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
@@ -1765,7 +1835,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
op = dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC;
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1774,7 +1844,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
OP_ALG_AAI_CTR,
OP_ALG_AS_INITFINAL,
@@ -1787,8 +1857,8 @@ pdcp_insert_uplane_15bit_op(struct program *p,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
OP_ALG_AAI_F8,
@@ -1885,6 +1955,7 @@ insert_hfn_ov_op(struct program *p,
static inline enum pdb_type_e
cnstr_pdcp_c_plane_pdb(struct program *p,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -1923,18 +1994,36 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
if (rta_sec_era >= RTA_SEC_ERA_8) {
memset(&pdb, 0x00, sizeof(struct pdcp_pdb));
- /* This is a HW issue. Bit 2 should be set to zero,
- * but it does not work this way. Override here.
+ /* To support 12-bit seq numbers, we use u-plane opt in pdb.
+ * SEC supports 5-bit only with c-plane opt in pdb.
*/
- pdb.opt_res.rsvd = 0x00000002;
+ if (sn_size == PDCP_SN_SIZE_12) {
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
+ ((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
+ (direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* Copy relevant information from user to PDB */
- pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
- pdb.bearer_dir_res = (uint32_t)
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT;
+
+ } else {
+ /* This means 5-bit c-plane.
+ * Here we use c-plane opt in pdb
+ */
+
+ /* This is a HW issue. Bit 2 should be set to zero,
+ * but it does not work this way. Override here.
+ */
+ pdb.opt_res.rsvd = 0x00000002;
+
+ /* Copy relevant information from user to PDB */
+ pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
((bearer << PDCP_C_PLANE_PDB_BEARER_SHIFT) |
- (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
- pdb.hfn_thr_res =
- hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ }
/* copy PDB in descriptor*/
__rta_out32(p, pdb.opt_res.opt);
@@ -2053,6 +2142,7 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2077,6 +2167,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2087,7 +2178,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
unsigned char __maybe_unused) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
@@ -2152,6 +2243,11 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2162,6 +2258,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2170,7 +2267,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2180,6 +2277,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_ENCAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2197,6 +2295,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2222,6 +2321,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2232,7 +2332,8 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int, unsigned char) = {
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
pdcp_insert_cplane_int_only_op, /* SNOW f9 */
@@ -2296,6 +2397,11 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2306,6 +2412,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2314,7 +2421,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2324,6 +2431,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_DECAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index cf8dfb910..82581edf5 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -596,13 +596,15 @@ static const struct proto_map proto_table[] = {
/*38*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
__rta_lte_pdcp_mixed_proto},
{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW, __rta_ipsec_proto},
+/*40*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER_RN,
+ __rta_lte_pdcp_mixed_proto},
};
/*
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index b3ac70633..1532eebc5 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -471,6 +471,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -480,6 +481,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_decap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -2399,9 +2401,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 02/20] drivers/crypto: support PDCP u-plane with integrity
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 01/20] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 03/20] security: add hfn override option in PDCP Akhil Goyal
` (19 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP u-plane may optionally include integrity protection as well.
This patch adds support for integrity along with
confidentiality.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 67 +++++------
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 75 ++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.c | 116 +++++++++-----------
3 files changed, 144 insertions(+), 114 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 75cd2c0b5..d1fbd7a78 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2591,6 +2591,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
struct ctxt_priv *priv;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
+ struct alginfo *p_authdata = NULL;
int bufsize = -1;
struct sec_flow_context *flc;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -2693,39 +2694,32 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
- /* Auth is only applicable for control mode operation. */
- if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
- pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
- DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5/12 bits for cmode");
- goto out;
- }
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA2_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- rte_free(priv);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA2_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
}
- authdata.key = (size_t)session->auth_key.data;
- authdata.keylen = session->auth_key.length;
- authdata.key_enc_flags = 0;
- authdata.key_type = RTA_DATA_IMM;
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
+ }
+ authdata.key = (size_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ if (session->auth_alg) {
switch (session->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -2745,6 +2739,13 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
+ p_authdata = &authdata;
+ } else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
+ goto out;
+ }
+
+ if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (session->dir == DIR_ENC)
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
@@ -2774,7 +2775,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (session->dir == DIR_DEC)
bufsize = cnstr_shdsc_pdcp_u_plane_decap(
priv->flc_desc[0].desc, 1, swap,
@@ -2783,7 +2784,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
if (bufsize < 0) {
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 607c587e2..a636640c4 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1801,9 +1801,16 @@ static inline int
pdcp_insert_uplane_15bit_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned int dir)
{
int op;
+
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
@@ -2478,6 +2485,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2490,6 +2498,11 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2509,6 +2522,13 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2518,20 +2538,24 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2551,7 +2575,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+ authdata, OP_TYPE_ENCAP_PROTOCOL);
if (err)
return err;
break;
@@ -2605,6 +2629,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2617,6 +2642,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2636,6 +2666,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2645,20 +2681,23 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2678,7 +2717,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ authdata, OP_TYPE_DECAP_PROTOCOL);
if (err)
return err;
break;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 1532eebc5..c3fbcc11d 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -384,6 +384,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
{
struct alginfo authdata = {0}, cipherdata = {0};
struct sec_cdb *cdb = &ses->cdb;
+ struct alginfo *p_authdata = NULL;
int32_t shared_desc_len = 0;
int err;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -416,7 +417,11 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
- if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ cdb->sh_desc[0] = cipherdata.keylen;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->auth_alg) {
switch (ses->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -441,32 +446,36 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
- cdb->sh_desc[0] = cipherdata.keylen;
+ p_authdata = &authdata;
+
cdb->sh_desc[1] = authdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 2);
+ }
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- if (!(cdb->sh_desc[2] & (1<<1)) && authdata.keylen) {
- authdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)authdata.key);
- authdata.key_type = RTA_DATA_PTR;
- }
+ err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+ MIN_JOB_DESC_SIZE,
+ (unsigned int *)cdb->sh_desc,
+ &cdb->sh_desc[2], 2);
+ if (err < 0) {
+ DPAA_SEC_ERR("Crypto: Incorrect key lengths");
+ return err;
+ }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
+ if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
+ cipherdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)cipherdata.key);
+ cipherdata.key_type = RTA_DATA_PTR;
+ }
+ if (!(cdb->sh_desc[2] & (1 << 1)) && authdata.keylen) {
+ authdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)authdata.key);
+ authdata.key_type = RTA_DATA_PTR;
+ }
+ cdb->sh_desc[0] = 0;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
@@ -488,25 +497,6 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
&cipherdata, &authdata,
0);
} else {
- cdb->sh_desc[0] = cipherdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 1);
-
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
-
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_encap(
cdb->sh_desc, 1, swap,
@@ -515,7 +505,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (ses->dir == DIR_DEC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_decap(
cdb->sh_desc, 1, swap,
@@ -524,7 +514,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
return shared_desc_len;
@@ -2399,7 +2389,6 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->dir = DIR_ENC;
}
- /* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
@@ -2407,25 +2396,26 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
"PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ }
+
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ return -ENOMEM;
}
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
}
session->pdcp.domain = pdcp_xform->domain;
session->pdcp.bearer = pdcp_xform->bearer;
--
2.17.1
* [dpdk-dev] [PATCH v2 03/20] security: add hfn override option in PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 01/20] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 02/20] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 04/20] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
` (18 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
HFN can also be given as a per-packet value.
Since PDCP has no IV, and the HFN is used to
generate the IV, the IV field can be used to carry
the per-packet HFN during enqueue/dequeue.
If the hfn_ovrd field in pdcp_xform is set, the
application is expected to set the per-packet HFN
in place of the IV. The driver will extract the HFN
and perform operations accordingly.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_security/rte_security.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 2d064f4d0..aaafdfcd7 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP.
+ * Copyright 2017,2019 NXP
* Copyright(c) 2017 Intel Corporation.
*/
@@ -278,6 +278,15 @@ struct rte_security_pdcp_xform {
uint32_t hfn;
/** HFN Threshold for key renegotiation */
uint32_t hfn_threshold;
+ /** HFN can be given as a per packet value also.
+ * As we do not have IV in case of PDCP, and HFN is
+ * used to generate IV. IV field can be used to get the
+ * per packet HFN while enq/deq.
+ * If hfn_ovrd field is set, user is expected to set the
+ * per packet HFN in place of IV. PMDs will extract the HFN
+ * and perform operations accordingly.
+ */
+ uint32_t hfn_ovrd;
};
/**
--
2.17.1
* [dpdk-dev] [PATCH v2 04/20] drivers/crypto: support hfn override for NXP PMDs
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (2 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 03/20] security: add hfn override option in PDCP Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
` (17 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
Per-packet HFN override is supported in the NXP PMDs
(dpaa2_sec and dpaa_sec). The DPOVRD register can be
updated with the per-packet value if override is enabled
in the session configuration. The value is read from
the IV offset.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++++++--------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 4 +++-
drivers/crypto/dpaa_sec/dpaa_sec.c | 19 ++++++++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +++-
4 files changed, 32 insertions(+), 13 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index d1fbd7a78..d080235f5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -125,14 +125,16 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
DPAA2_SET_FD_LEN(fd, ip_fle->length);
DPAA2_SET_FLE_FIN(ip_fle);
-#ifdef ENABLE_HFN_OVERRIDE
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
/*enable HFN override override */
- DPAA2_SET_FLE_INTERNAL_JD(ip_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FLE_INTERNAL_JD(op_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FD_INTERNAL_JD(fd, sess->pdcp.hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
}
-#endif
return 0;
@@ -2664,11 +2666,11 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ /* hfv ovd offset location is stored in iv.offset value*/
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
cipherdata.key = (size_t)session->cipher_key.data;
cipherdata.keylen = session->cipher_key.length;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index a05deaebd..679fd006b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -147,9 +147,11 @@ struct dpaa2_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ per packet hfn is stored */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa2_sec_session_entry {
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index c3fbcc11d..3fc4a606f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1764,6 +1764,20 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (auth_only_len)
fd->cmd = 0x80000000 | auth_only_len;
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (is_proto_pdcp(ses) && ses->pdcp.hfn_ovd) {
+ fd->cmd = 0x80000000 |
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset));
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u,%u\n",
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset)),
+ ses->pdcp.hfn_ovd,
+ is_proto_pdcp(ses));
+ }
+
}
send_pkts:
loop = 0;
@@ -2421,11 +2435,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
session->ctx_pool = dev_priv->ctx_pool;
rte_spinlock_lock(&dev_priv->lock);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 08e7d66e5..e148a04df 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -103,9 +103,11 @@ struct sec_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ per packet hfn is stored */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa_sec_session_entry {
--
2.17.1
* [dpdk-dev] [PATCH v2 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (3 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 04/20] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 06/20] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
` (16 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
Support the following cases:
int-only (NULL-NULL, NULL-SNOW, NULL-AES, NULL-ZUC)
enc-only (SNOW-NULL, AES-NULL, ZUC-NULL)
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 532 +++++++++++++++++++-----
1 file changed, 420 insertions(+), 112 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a636640c4..9a73105ac 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
* Copyright 2008-2013 Freescale Semiconductor, Inc.
+ * Copyright 2019 NXP
*/
#ifndef __DESC_PDCP_H__
@@ -52,6 +53,14 @@
#define PDCP_U_PLANE_15BIT_SN_MASK 0xFF7F0000
#define PDCP_U_PLANE_15BIT_SN_MASK_BE 0x00007FFF
+/**
+ * PDCP_U_PLANE_18BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_U_PLANE_18BIT_SN_MASK 0xFFFF0300
+#define PDCP_U_PLANE_18BIT_SN_MASK_BE 0x0003FFFF
+
/**
* PDCP_BEARER_MASK - This mask is used masking out the bearer for PDCP
* processing with SNOW f9 in LTE.
@@ -192,7 +201,8 @@ enum pdcp_sn_size {
PDCP_SN_SIZE_5 = 5,
PDCP_SN_SIZE_7 = 7,
PDCP_SN_SIZE_12 = 12,
- PDCP_SN_SIZE_15 = 15
+ PDCP_SN_SIZE_15 = 15,
+ PDCP_SN_SIZE_18 = 18
};
/*
@@ -205,14 +215,17 @@ enum pdcp_sn_size {
#define PDCP_U_PLANE_PDB_OPT_SHORT_SN 0x2
#define PDCP_U_PLANE_PDB_OPT_15B_SN 0x4
+#define PDCP_U_PLANE_PDB_OPT_18B_SN 0x6
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT 18
#define PDCP_U_PLANE_PDB_BEARER_SHIFT 27
#define PDCP_U_PLANE_PDB_DIR_SHIFT 26
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_THR_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT 18
struct pdcp_pdb {
union {
@@ -417,6 +430,9 @@ pdcp_insert_cplane_int_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ /* 12 bit SN is only supported for protocol offload case */
if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -426,6 +442,27 @@ pdcp_insert_cplane_int_only_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -435,20 +472,20 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -461,7 +498,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, CONTEXT2, 0, 0x0C, WAITCOMP | IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -553,19 +590,19 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -573,7 +610,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, IFIFOAB1, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -665,11 +702,11 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -678,7 +715,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -734,11 +771,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -747,16 +785,32 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
(uint16_t)cipherdata->algtype << 8);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
- SEQLOAD(p, MATH0, 7, 1, 0);
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -895,6 +949,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(back_to_sd_offset);
LABEL(end_desc);
LABEL(local_offset);
@@ -906,7 +962,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -923,19 +979,35 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
@@ -1207,12 +1279,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1226,21 +1300,37 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
@@ -1322,6 +1412,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1338,7 +1430,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1351,16 +1443,32 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1374,7 +1482,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1425,6 +1533,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1439,7 +1548,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1453,17 +1562,33 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1477,7 +1602,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1531,6 +1656,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1545,7 +1671,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1559,17 +1685,32 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1599,7 +1740,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
MATHB(p, VSEQOUTSZ, SUB, ZERO, VSEQINSZ, 4, 0);
}
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1659,12 +1800,13 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
if (rta_sec_era < RTA_SEC_ERA_5) {
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1682,20 +1824,35 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -1798,38 +1955,43 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
static inline int
-pdcp_insert_uplane_15bit_op(struct program *p,
+pdcp_insert_uplane_no_int_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
- struct alginfo *authdata,
- unsigned int dir)
+ unsigned int dir,
+ enum pdcp_sn_size sn_size)
{
int op;
-
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
- authdata->keylen, INLINE_KEY(authdata));
+ uint32_t sn_mask;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_15) ||
+ (rta_sec_era >= RTA_SEC_ERA_10)) {
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER,
(uint16_t)cipherdata->algtype);
return 0;
}
- SEQLOAD(p, MATH0, 6, 2, 0);
+ if (sn_size == PDCP_SN_SIZE_15) {
+ SEQLOAD(p, MATH0, 6, 2, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_15BIT_SN_MASK :
+ PDCP_U_PLANE_15BIT_SN_MASK_BE;
+ } else { /* SN Size == PDCP_SN_SIZE_18 */
+ SEQLOAD(p, MATH0, 5, 3, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ }
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 6, 2, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ if (sn_size == PDCP_SN_SIZE_15)
+ SEQSTORE(p, MATH0, 6, 2, 0);
+ else /* SN Size == PDCP_SN_SIZE_18 */
+ SEQSTORE(p, MATH0, 5, 3, 0);
+
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -2124,6 +2286,13 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
hfn_threshold<<PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT;
break;
+ case PDCP_SN_SIZE_18:
+ pdb.opt_res.opt = (uint32_t)(PDCP_U_PLANE_PDB_OPT_18B_SN);
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
+ pdb.hfn_thr_res =
+ hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+ break;
+
default:
pr_err("Invalid Sequence Number Size setting in PDB\n");
return -EINVAL;
@@ -2448,6 +2617,61 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return PROGRAM_FINALIZE(p);
}
+static int
+pdcp_insert_uplane_with_int_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd,
+ unsigned int dir)
+{
+ static int
+ (*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
+ (struct program*, bool swap, struct alginfo *,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char __maybe_unused) = {
+ { /* NULL */
+ pdcp_insert_cplane_null_op, /* NULL */
+ pdcp_insert_cplane_int_only_op, /* SNOW f9 */
+ pdcp_insert_cplane_int_only_op, /* AES CMAC */
+ pdcp_insert_cplane_int_only_op /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
+ },
+ { /* AES CTR */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
+ },
+ { /* ZUC-E */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_acc_op /* ZUC-I */
+ },
+ };
+ int err;
+
+ err = pdcp_cp_fp[cipherdata->algtype][authdata->algtype](p,
+ swap,
+ cipherdata,
+ authdata,
+ dir,
+ sn_size,
+ era_2_sw_hfn_ovrd);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+
/**
* cnstr_shdsc_pdcp_u_plane_encap - Function for creating a PDCP User Plane
* encapsulation descriptor.
@@ -2491,6 +2715,33 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2509,7 +2760,10 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2522,13 +2776,6 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype) {
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- }
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2542,6 +2789,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
@@ -2566,6 +2819,18 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2574,8 +2839,8 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_ENCAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_ENCAP_PROTOCOL, sn_size);
if (err)
return err;
break;
@@ -2635,6 +2900,34 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
+
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2652,8 +2945,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
PROGRAM_SET_BSWAP(p);
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2666,12 +2962,6 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2685,6 +2975,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
@@ -2708,6 +3004,18 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_DECAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2716,8 +3024,8 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_DECAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_DECAP_PROTOCOL, sn_size);
if (err)
return err;
break;
--
2.17.1
* [dpdk-dev] [PATCH v2 06/20] crypto/dpaa2_sec: support CAAM HW era 10
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (4 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
` (15 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add minimal support for CAAM HW era 10 (used in LX2).
Primary changes are:
1. increase the shared descriptor length field from 6 bits to 7 bits
2. support several PDCP operations as PROTOCOL offload.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc.h | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 21 ++++++++++-----
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 ++++---
| 21 ++++++++++++---
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +--
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 ++--
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 ++++---
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +++++----
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +++---
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +++----
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 ++---
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 9 +++++--
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +++++++++++++------
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +++--
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 ++---
16 files changed, 110 insertions(+), 54 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index d080235f5..fb0314adb 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3450,6 +3450,11 @@ cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv __rte_unused,
/* init user callbacks */
TAILQ_INIT(&(cryptodev->link_intr_cbs));
+ if (dpaa2_svr_family == SVR_LX2160A)
+ rta_set_sec_era(RTA_SEC_ERA_10);
+
+ DPAA2_SEC_INFO("2-SEC ERA is %d", rta_get_sec_era());
+
/* Invoke PMD device initialization function */
retval = dpaa2_sec_dev_init(cryptodev);
if (retval == 0)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index e12c3db2f..667da971b 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -18,6 +18,8 @@
#include "hw/compat.h"
#endif
+extern enum rta_sec_era rta_sec_era;
+
/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64
@@ -113,9 +115,12 @@
/* Start Index or SharedDesc Length */
#define HDR_START_IDX_SHIFT 16
#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)
+#define HDR_START_IDX_MASK_ERA10 (0x7f << HDR_START_IDX_SHIFT)
/* If shared descriptor header, 6-bit length */
#define HDR_DESCLEN_SHR_MASK 0x3f
+/* If shared descriptor header, 7-bit length era10 onwards*/
+#define HDR_DESCLEN_SHR_MASK_ERA10 0x7f
/* If non-shared header, 7-bit length */
#define HDR_DESCLEN_MASK 0x7f
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9a73105ac..4bf1d69f9 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -776,7 +776,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -962,7 +963,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -1286,7 +1288,8 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1430,7 +1433,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1548,7 +1552,8 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1671,7 +1676,8 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1806,7 +1812,8 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
index 8c807aaa2..287e09cd7 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
@@ -42,7 +41,8 @@ static const uint32_t fifo_load_table[][2] = {
* supported.
*/
static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
- 23, 23, 23, 23};
+ 23, 23, 23, 23,
+ 23, 23};
static inline int
rta_fifo_load(struct program *program, uint32_t src,
@@ -201,7 +201,8 @@ static const uint32_t fifo_store_table[][2] = {
* supported.
*/
static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
- 22, 22, 22, 23};
+ 22, 22, 22, 23,
+ 23, 23};
static inline int
rta_fifo_store(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
index 0c7ea9387..45aefa04c 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_HEADER_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t job_header_flags[] = {
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
DNR | TD | MTD | SHR | REO | EXT
};
@@ -31,6 +32,8 @@ static const uint32_t shr_header_flags[] = {
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF
};
@@ -72,7 +75,12 @@ rta_shr_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & DNR)
opcode |= HDR_DNR;
@@ -160,7 +168,12 @@ rta_job_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & EXT) {
opcode |= HDR_EXT;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
index 546d22e98..18f781e37 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_JUMP_CMD_H__
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
index 1ec21234a..ec3fbcaf6 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_KEY_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t key_enc_flags[] = {
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
ENC | NWB | EKT | TK | PTS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
index f3b0dcfcb..38e253c22 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_LOAD_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t load_len_mask_allowed[] = {
0x000000fe,
0x000000fe,
0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
0x000000fe
};
@@ -30,6 +31,8 @@ static const uint32_t load_off_mask_allowed[] = {
0x000000ff,
0x000000ff,
0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
0x000000ff
};
@@ -137,7 +140,8 @@ static const struct load_map load_dst[] = {
* Allowed LOAD destinations for each SEC Era.
* Values represent the number of entries from load_dst[] that are supported.
*/
-static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40,
+ 40, 40, 40, 40, 40};
static inline int
load_check_len_offset(int pos, uint32_t length, uint32_t offset)
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
index 5b28cbabb..cca70f7e0 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MATH_CMD_H__
@@ -29,7 +28,8 @@ static const uint32_t math_op1[][2] = {
* Allowed MATH op1 sources for each SEC Era.
* Values represent the number of entries from math_op1[] that are supported.
*/
-static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12,
+ 12, 12, 12, 12};
static const uint32_t math_op2[][2] = {
/*1*/ { MATH0, MATH_SRC1_REG0 },
@@ -51,7 +51,8 @@ static const uint32_t math_op2[][2] = {
* Allowed MATH op2 sources for each SEC Era.
* Values represent the number of entries from math_op2[] that are supported.
*/
-static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13,
+ 13, 13};
static const uint32_t math_result[][2] = {
/*1*/ { MATH0, MATH_DEST_REG0 },
@@ -71,7 +72,8 @@ static const uint32_t math_result[][2] = {
* Values represent the number of entries from math_result[] that are
* supported.
*/
-static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10,
+ 10, 10};
static inline int
rta_math(struct program *program, uint64_t operand1,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
index a7ff7c675..d2151c6dd 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MOVE_CMD_H__
@@ -47,7 +46,8 @@ static const uint32_t move_src_table[][2] = {
* Values represent the number of entries from move_src_table[] that are
* supported.
*/
-static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14,
+ 14, 14};
static const uint32_t move_dst_table[][2] = {
/*1*/ { CONTEXT1, MOVE_DEST_CLASS1CTX },
@@ -72,7 +72,7 @@ static const uint32_t move_dst_table[][2] = {
* supported.
*/
static const
-unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15, 15, 15};
static inline int
set_move_offset(struct program *program __maybe_unused,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
index 94f775e2e..85092d961 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_NFIFO_CMD_H__
@@ -24,7 +23,7 @@ static const uint32_t nfifo_src[][2] = {
* Allowed NFIFO LOAD sources for each SEC Era.
* Values represent the number of entries from nfifo_src[] that are supported.
*/
-static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7, 7, 7};
static const uint32_t nfifo_data[][2] = {
{ MSG, NFIFOENTRY_DTYPE_MSG },
@@ -77,7 +76,8 @@ static const uint32_t nfifo_flags[][2] = {
* Allowed NFIFO LOAD flags for each SEC Era.
* Values represent the number of entries from nfifo_flags[] that are supported.
*/
-static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14,
+ 14, 14, 14, 14};
static const uint32_t nfifo_pad_flags[][2] = {
{ BM, NFIFOENTRY_BM },
@@ -90,7 +90,7 @@ static const uint32_t nfifo_pad_flags[][2] = {
* Values represent the number of entries from nfifo_pad_flags[] that are
* supported.
*/
-static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3, 3, 3};
static inline int
rta_nfifo_load(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
index b85760e5b..9a1788c0f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_OPERATION_CMD_H__
@@ -229,7 +228,8 @@ static const struct alg_aai_map alg_table[] = {
* Allowed OPERATION algorithms for each SEC Era.
* Values represent the number of entries from alg_table[] that are supported.
*/
-static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17,
+ 11, 17, 17, 17};
static inline int
rta_operation(struct program *program, uint32_t cipher_algo,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index 82581edf5..e9f20703f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016, 2019 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -326,6 +326,10 @@ static const uint32_t proto_blob_flags[] = {
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
};
@@ -604,7 +608,8 @@ static const struct proto_map proto_table[] = {
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37,
+ 40, 40, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
index 5357187f8..d8cdebd20 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEC_RUN_TIME_ASM_H__
@@ -36,7 +35,9 @@ enum rta_sec_era {
RTA_SEC_ERA_6,
RTA_SEC_ERA_7,
RTA_SEC_ERA_8,
- MAX_SEC_ERA = RTA_SEC_ERA_8
+ RTA_SEC_ERA_9,
+ RTA_SEC_ERA_10,
+ MAX_SEC_ERA = RTA_SEC_ERA_10
};
/**
@@ -605,10 +606,14 @@ __rta_inline_data(struct program *program, uint64_t data,
static inline unsigned int
rta_desc_len(uint32_t *buffer)
{
- if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+ if ((*buffer & CMD_MASK) == CMD_DESC_HDR) {
return *buffer & HDR_DESCLEN_MASK;
- else
- return *buffer & HDR_DESCLEN_SHR_MASK;
+ } else {
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ return *buffer & HDR_DESCLEN_SHR_MASK_ERA10;
+ else
+ return *buffer & HDR_DESCLEN_SHR_MASK;
+ }
}
static inline unsigned int
@@ -701,9 +706,15 @@ rta_patch_header(struct program *program, int line, unsigned int new_ref)
return -EINVAL;
opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+ if (rta_sec_era >= RTA_SEC_ERA_10) {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK_ERA10;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ } else {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ }
- opcode &= (uint32_t)~HDR_START_IDX_MASK;
- opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
program->buffer[line] = bswap ? swab32(opcode) : opcode;
return 0;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
index ceb6a8719..5e6af0c83 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t seq_in_ptr_flags[] = {
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
};
@@ -31,6 +32,8 @@ static const uint32_t seq_out_ptr_flags[] = {
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
index 8b58e544d..5de47d053 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_STORE_CMD_H__
@@ -56,7 +55,8 @@ static const uint32_t store_src_table[][2] = {
* supported.
*/
static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
- 33, 33, 35, 35};
+ 33, 33, 35, 35,
+ 35, 35};
static inline int
rta_store(struct program *program, uint64_t src,
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (5 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 06/20] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 08/20] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
` (14 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal, Vakul Garg
For SEC ERA 8, NULL auth using the protocol command does not add the
4 bytes of NULL MAC-I and treats NULL integrity as no integrity, which
is not correct.
Hence this particular case of NULL integrity with a 12-bit SN on
SEC ERA 8 is converted from protocol offload to the non-protocol
offload case.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 32 +++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 4bf1d69f9..0b074ec80 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -43,6 +43,14 @@
#define PDCP_C_PLANE_SN_MASK 0x1F000000
#define PDCP_C_PLANE_SN_MASK_BE 0x0000001F
+/**
+ * PDCP_12BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_12BIT_SN_MASK 0xFF0F0000
+#define PDCP_12BIT_SN_MASK_BE 0x00000FFF
+
/**
* PDCP_U_PLANE_15BIT_SN_MASK - This mask is used in the PDCP descriptors for
* extracting the sequence number (SN) from the
@@ -776,8 +784,10 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
- (rta_sec_era == RTA_SEC_ERA_10)) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18 &&
+ !(rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata->algtype == 0))
+ || (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -800,12 +810,16 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
PDCP_U_PLANE_18BIT_SN_MASK_BE;
break;
- case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
+ offset = 6;
+ length = 2;
+ sn_mask = (swap == false) ? PDCP_12BIT_SN_MASK :
+ PDCP_12BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_15:
pr_err("Invalid sn_size for %s\n", __func__);
return -ENOTSUP;
-
}
SEQLOAD(p, MATH0, offset, length, 0);
@@ -2796,6 +2810,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata && authdata->algtype == 0){
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+ break;
+ }
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
* [dpdk-dev] [PATCH v2 08/20] crypto/dpaa_sec: support scatter gather for PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (6 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
` (13 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
This patch adds support for chained input or output
mbufs for PDCP operations.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 122 +++++++++++++++++++++++++++--
1 file changed, 116 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 3fc4a606f..b5bb87aa6 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -189,12 +189,18 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
if (ctx->op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (ctx->op->sym->m_dst == NULL) ?
+ ctx->op->sym->m_src : ctx->op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- ctx->op->sym->m_src->pkt_len = len;
- ctx->op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op;
dpaa_sec_op_ending(ctx);
@@ -260,6 +266,7 @@ static inline int is_auth_cipher(dpaa_sec_session *ses)
{
return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
(ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
+ (ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
(ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC));
}
@@ -802,12 +809,18 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (op->sym->m_dst == NULL) ?
+ op->sym->m_src : op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- op->sym->m_src->pkt_len = len;
- op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
if (!ctx->fd_status) {
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -1636,6 +1649,99 @@ build_proto(struct rte_crypto_op *op, dpaa_sec_session *ses)
return cf;
}
+static inline struct dpaa_sec_job *
+build_proto_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
+{
+ struct rte_crypto_sym_op *sym = op->sym;
+ struct dpaa_sec_job *cf;
+ struct dpaa_sec_op_ctx *ctx;
+ struct qm_sg_entry *sg, *out_sg, *in_sg;
+ struct rte_mbuf *mbuf;
+ uint8_t req_segs;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym->m_dst) {
+ mbuf = sym->m_dst;
+ } else {
+ mbuf = sym->m_src;
+ }
+ req_segs = mbuf->nb_segs + sym->m_src->nb_segs + 2;
+ if (req_segs > MAX_SG_ENTRIES) {
+ DPAA_SEC_DP_ERR("Proto: Max sec segs supported is %d",
+ MAX_SG_ENTRIES);
+ return NULL;
+ }
+
+ ctx = dpaa_sec_alloc_ctx(ses);
+ if (!ctx)
+ return NULL;
+ cf = &ctx->job;
+ ctx->op = op;
+ /* output */
+ out_sg = &cf->sg[0];
+ out_sg->extension = 1;
+ qm_sg_entry_set64(out_sg, dpaa_mem_vtop(&cf->sg[2]));
+
+ /* 1st seg */
+ sg = &cf->sg[2];
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+
+ /* Successive segs */
+ while (mbuf->next) {
+ sg->length = mbuf->data_len;
+ out_len += sg->length;
+ mbuf = mbuf->next;
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+ }
+ sg->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sg->length;
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ out_sg->length = out_len;
+ cpu_to_hw_sg(out_sg);
+
+ /* input */
+ mbuf = sym->m_src;
+ in_sg = &cf->sg[1];
+ in_sg->extension = 1;
+ in_sg->final = 1;
+ in_len = mbuf->data_len;
+
+ sg++;
+ qm_sg_entry_set64(in_sg, dpaa_mem_vtop(sg));
+
+ /* 1st seg */
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+
+ /* Successive segs */
+ mbuf = mbuf->next;
+ while (mbuf) {
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+ in_len += sg->length;
+ mbuf = mbuf->next;
+ }
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ in_sg->length = in_len;
+ cpu_to_hw_sg(in_sg);
+
+ sym->m_src->packet_type &= ~RTE_PTYPE_L4_MASK;
+
+ return cf;
+}
+
static uint16_t
dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
uint16_t nb_ops)
@@ -1707,7 +1813,9 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
auth_only_len = op->sym->auth.data.length -
op->sym->cipher.data.length;
- if (rte_pktmbuf_is_contiguous(op->sym->m_src)) {
+ if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
+ ((op->sym->m_dst == NULL) ||
+ rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
if (is_proto_ipsec(ses)) {
cf = build_proto(op, ses);
} else if (is_proto_pdcp(ses)) {
@@ -1728,7 +1836,9 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
goto send_pkts;
}
} else {
- if (is_auth_only(ses)) {
+ if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
+ cf = build_proto_sg(op, ses);
+ } else if (is_auth_only(ses)) {
cf = build_auth_only_sg(op, ses);
} else if (is_cipher_only(ses)) {
cf = build_cipher_only_sg(op, ses);
--
2.17.1
* [dpdk-dev] [PATCH v2 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (7 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 08/20] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
` (12 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Akhil Goyal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds support for chained input or output
mbufs for PDCP and IPsec protocol offload cases.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 134 +++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
2 files changed, 133 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index fb0314adb..36ae78a03 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -65,6 +65,121 @@ static uint8_t cryptodev_driver_id;
int dpaa2_logtype_sec;
+static inline int
+build_proto_compound_sg_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym_op->m_dst)
+ mbuf = sym_op->m_dst;
+ else
+ mbuf = sym_op->m_src;
+
+ /* first FLE entry used to store mbuf and session ctxt */
+ fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (unlikely(!fle)) {
+ DPAA2_SEC_DP_ERR("Proto:SG: Memory alloc failed for SGE");
+ return -1;
+ }
+ memset(fle, 0, FLE_SG_MEM_SIZE);
+ DPAA2_SET_FLE_ADDR(fle, (size_t)op);
+ DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ op_fle = fle + 1;
+ ip_fle = fle + 2;
+ sge = fle + 3;
+
+ if (likely(bpid < MAX_BPID)) {
+ DPAA2_SET_FD_BPID(fd, bpid);
+ DPAA2_SET_FLE_BPID(op_fle, bpid);
+ DPAA2_SET_FLE_BPID(ip_fle, bpid);
+ } else {
+ DPAA2_SET_FD_IVP(fd);
+ DPAA2_SET_FLE_IVP(op_fle);
+ DPAA2_SET_FLE_IVP(ip_fle);
+ }
+
+ /* Configure FD as a FRAME LIST */
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
+ DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+ /* Configure Output FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_SG_EXT(op_fle);
+ DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+ /* Configure Output SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ /* o/p segs */
+ while (mbuf->next) {
+ sge->length = mbuf->data_len;
+ out_len += sge->length;
+ sge++;
+ mbuf = mbuf->next;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ }
+ /* using buf_len for last buf - so that extra data can be added */
+ sge->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sge->length;
+
+ DPAA2_SET_FLE_FIN(sge);
+ op_fle->length = out_len;
+
+ sge++;
+ mbuf = sym_op->m_src;
+
+ /* Configure Input FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(ip_fle);
+ DPAA2_SET_FLE_FIN(ip_fle);
+
+ /* Configure input SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+
+ mbuf = mbuf->next;
+ /* i/p segs */
+ while (mbuf) {
+ sge++;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+ mbuf = mbuf->next;
+ }
+ ip_fle->length = in_len;
+ DPAA2_SET_FLE_FIN(sge);
+
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
+ /*enable HFN override override */
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
+ }
+ DPAA2_SET_FD_LEN(fd, ip_fle->length);
+
+ return 0;
+}
+
static inline int
build_proto_compound_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -87,7 +202,7 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
/* we are using the first FLE entry to store Mbuf */
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
- DPAA2_SEC_ERR("Memory alloc failed");
+ DPAA2_SEC_DP_ERR("Memory alloc failed");
return -1;
}
memset(fle, 0, FLE_POOL_BUF_SIZE);
@@ -1170,8 +1285,10 @@ build_sec_fd(struct rte_crypto_op *op,
else
return -1;
- /* Segmented buffer */
- if (unlikely(!rte_pktmbuf_is_contiguous(op->sym->m_src))) {
+ /* Any of the buffer is segmented*/
+ if (!rte_pktmbuf_is_contiguous(op->sym->m_src) ||
+ ((op->sym->m_dst != NULL) &&
+ !rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_sg_fd(sess, op, fd, bpid);
@@ -1185,6 +1302,10 @@ build_sec_fd(struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_sg_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ case DPAA2_SEC_PDCP:
+ ret = build_proto_compound_sg_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
DPAA2_SEC_ERR("error: Unsupported session");
@@ -1372,9 +1493,14 @@ sec_fd_to_mbuf(const struct qbman_fd *fd)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
dpaa2_sec_session *sess = (dpaa2_sec_session *)
get_sec_session_private_data(op->sym->sec_session);
- if (sess->ctxt_type == DPAA2_SEC_IPSEC) {
+ if (sess->ctxt_type == DPAA2_SEC_IPSEC ||
+ sess->ctxt_type == DPAA2_SEC_PDCP) {
uint16_t len = DPAA2_GET_FD_LEN(fd);
dst->pkt_len = len;
+ while (dst->next != NULL) {
+ len -= dst->data_len;
+ dst = dst->next;
+ }
dst->data_len = len;
}
}
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
index 8a9904426..c2e11f951 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -37,6 +37,8 @@ extern int dpaa2_logtype_sec;
DPAA2_SEC_DP_LOG(INFO, fmt, ## args)
#define DPAA2_SEC_DP_WARN(fmt, args...) \
DPAA2_SEC_DP_LOG(WARNING, fmt, ## args)
+#define DPAA2_SEC_DP_ERR(fmt, args...) \
+ DPAA2_SEC_DP_LOG(ERR, fmt, ## args)
#endif /* _DPAA2_SEC_LOGS_H_ */
--
2.17.1
* [dpdk-dev] [PATCH v2 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (8 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 11/20] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases Akhil Goyal
` (11 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP descriptors in some cases internally use commands which overwrite
memory with extra '0's if write-safe is kept enabled. This breaks the
correct functional behavior of the PDCP APIs, which in many cases then
produce incorrect crypto output. Therefore we disable the 'write-safe'
bit in the FLC for PDCP cases. If there is a performance drop, then
write-safe will be enabled selectively through a separate patch.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 36ae78a03..d53720ae3 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2931,8 +2931,12 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
- /* Set EWS bit i.e. enable write-safe */
- DPAA2_SET_FLC_EWS(flc);
+ /* TODO - check the perf impact or
+ * align as per descriptor type
+ * Set EWS bit i.e. enable write-safe
+ * DPAA2_SET_FLC_EWS(flc);
+ */
+
/* Set BS = 1 i.e reuse input buffers as output buffers */
DPAA2_SET_FLC_REUSE_BS(flc);
/* Set FF = 10; reuse input buffers if they provide sufficient space */
--
2.17.1
* [dpdk-dev] [PATCH v2 11/20] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (9 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 12/20] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP Akhil Goyal
` (10 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Akhil Goyal
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the following algorithm combinations (ENC-AUTH):
- AES-SNOW
- SNOW-AES
- AES-ZUC
- ZUC-AES
- SNOW-ZUC
- ZUC-SNOW
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 211 ++++++++++++++++--------
1 file changed, 140 insertions(+), 71 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 0b074ec80..764daf21c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1021,21 +1021,21 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -1088,7 +1088,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
if (rta_sec_era <= RTA_SEC_ERA_3)
LOAD(p, CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1111,7 +1111,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
SET_LABEL(p, local_offset);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, 0, RTO);
if (rta_sec_era == RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -1119,7 +1119,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
MATHB(p, SEQINSZ, ADD, ONE, SEQINSZ, 4, 0);
}
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1130,14 +1130,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0))
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
PATCH_MOVE(p, seqin_ptr_read, local_offset);
PATCH_MOVE(p, seqin_ptr_write, local_offset);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
if (rta_sec_era >= RTA_SEC_ERA_5)
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1147,7 +1147,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
else
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
- MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
/*
* TODO: To be changed when proper support is added in RTA (can't load a
* command that is also written by RTA (or patch it for that matter).
@@ -1237,7 +1237,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
DIR_DEC);
/* Read the # of bytes written in the output buffer + 1 (HDR) */
- MATHB(p, VSEQOUTSZ, ADD, ONE, VSEQINSZ, 4, 0);
+ MATHI(p, VSEQOUTSZ, ADD, length, VSEQINSZ, 4, IMMED2);
if (rta_sec_era <= RTA_SEC_ERA_3)
MOVE(p, MATH3, 0, IFIFOAB1, 0, 8, IMMED);
@@ -1340,24 +1340,24 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
}
if (dir == OP_TYPE_ENCAP_PROTOCOL)
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2, 4,
- IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3, 4,
- IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2, 4,
+ IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3, 4,
+ IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1365,7 +1365,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1485,14 +1485,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1606,14 +1606,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1729,19 +1729,19 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2,
- 4, IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3,
- 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2,
+ 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3,
+ 4, IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1749,7 +1749,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
@@ -1798,13 +1798,13 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
/* Put ICV to M0 before sending it to C2 for comparison. */
- MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
NFIFOENTRY_DEST_CLASS2 |
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
}
PATCH_JUMP(p, pkeyjump, keyjump);
@@ -1871,14 +1871,14 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
- MOVE(p, MATH0, 7, IFIFOAB1, 0, 1, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1889,7 +1889,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1901,7 +1901,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
@@ -1911,12 +1911,12 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
DIR_ENC);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1937,7 +1937,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
@@ -1969,7 +1969,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC1 |
NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
}
return 0;
@@ -2080,6 +2080,8 @@ insert_hfn_ov_op(struct program *p,
{
uint32_t imm = PDCP_DPOVRD_HFN_OV_EN;
uint16_t hfn_pdb_offset;
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
if (rta_sec_era == RTA_SEC_ERA_2 && !era_2_sw_hfn_ovrd)
return 0;
@@ -2115,13 +2117,10 @@ insert_hfn_ov_op(struct program *p,
SEQSTORE(p, MATH0, 4, 4, 0);
}
- if (rta_sec_era >= RTA_SEC_ERA_8)
- JUMP(p, 6, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- else
- JUMP(p, 5, LOCAL_JUMP, ALL_TRUE, MATH_Z);
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, MATH_Z);
if (rta_sec_era > RTA_SEC_ERA_2)
- MATHB(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
+ MATHI(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
else
MATHB(p, MATH0, LSHIFT, shift, MATH0, 4, IMMED2);
@@ -2136,6 +2135,8 @@ insert_hfn_ov_op(struct program *p,
*/
MATHB(p, DPOVRD, AND, ZERO, DPOVRD, 4, STL);
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
return 0;
}
@@ -2271,14 +2272,45 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
/*
* PDCP UPlane PDB creation function
*/
-static inline int
+static inline enum pdb_type_e
cnstr_pdcp_u_plane_pdb(struct program *p,
enum pdcp_sn_size sn_size,
uint32_t hfn, unsigned short bearer,
unsigned short direction,
- uint32_t hfn_threshold)
+ uint32_t hfn_threshold,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata)
{
struct pdcp_pdb pdb;
+ enum pdb_type_e pdb_type = PDCP_PDB_TYPE_FULL_PDB;
+ enum pdb_type_e
+ pdb_mask[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ PDCP_PDB_TYPE_NO_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* AES CTR */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* ZUC-E */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ };
+
/* Read options from user */
/* Depending on sequence number length, the HFN and HFN threshold
* have different lengths.
@@ -2312,6 +2344,12 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
pdb.hfn_thr_res =
hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+
+ if (rta_sec_era <= RTA_SEC_ERA_8) {
+ if (cipherdata && authdata)
+ pdb_type = pdb_mask[cipherdata->algtype]
+ [authdata->algtype];
+ }
break;
default:
@@ -2323,13 +2361,29 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
(direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* copy PDB in descriptor*/
- __rta_out32(p, pdb.opt_res.opt);
- __rta_out32(p, pdb.hfn_res);
- __rta_out32(p, pdb.bearer_dir_res);
- __rta_out32(p, pdb.hfn_thr_res);
+ switch (pdb_type) {
+ case PDCP_PDB_TYPE_NO_PDB:
+ break;
+
+ case PDCP_PDB_TYPE_REDUCED_PDB:
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ break;
- return 0;
+ case PDCP_PDB_TYPE_FULL_PDB:
+ /* copy PDB in descriptor*/
+ __rta_out32(p, pdb.opt_res.opt);
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ __rta_out32(p, pdb.hfn_thr_res);
+
+ break;
+
+ default:
+ return PDCP_PDB_TYPE_INVALID;
+ }
+
+ return pdb_type;
}
/**
* cnstr_shdsc_pdcp_c_plane_encap - Function for creating a PDCP Control Plane
@@ -2736,6 +2790,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2785,15 +2840,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn,
+ bearer, direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2820,6 +2876,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return err;
break;
}
+
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
@@ -2931,6 +2993,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2981,15 +3044,16 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer,
+ direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -3006,6 +3070,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype)
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 12/20] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (10 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 11/20] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 13/20] crypto/dpaa2_sec/hw: support zuc-zuc " Akhil Goyal
` (9 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the AES-AES PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
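In the non-PROTOCOL path below, the descriptor loads 3 header bytes (offset = 5, length = 3) and masks them down to the 18-bit sequence number. A small sketch of that masking step in plain C; the mask constant mirrors the intent of `PDCP_U_PLANE_18BIT_SN_MASK` (the descriptor applies it inside an 8-byte MATH register, so the byte arithmetic here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define SN_18BIT_MASK 0x3FFFFu /* keep the low 18 bits of the 24-bit field */

/* Extract the 18-bit u-plane SN from the three PDCP header bytes that the
 * descriptor loads into MATH0 (the upper bits carry D/C and reserved bits). */
static uint32_t pdcp_18bit_sn(const uint8_t hdr[3])
{
    uint32_t v = ((uint32_t)hdr[0] << 16) |
                 ((uint32_t)hdr[1] << 8)  |
                  (uint32_t)hdr[2];
    return v & SN_18BIT_MASK;
}
```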
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 151 +++++++++++++++++++++++-
1 file changed, 150 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 764daf21c..a476b8bde 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,155 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_aes_aes_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18)) {
+ /* Insert Auth Key */
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ /* Insert Cipher Key */
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+
+ default:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_ENC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ SEQFIFOLOAD(p, SKIP, length, 0);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQINPTR(p, 0, 0, SOP);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_ENABLE,
+ DIR_DEC);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 8, IMMED);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS1 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC1 |
+ NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_cplane_acc_op(struct program *p,
bool swap __maybe_unused,
@@ -2721,7 +2870,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
{ /* AES CTR */
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
- pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_uplane_aes_aes_op, /* AES CMAC */
pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
},
{ /* ZUC-E */
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 13/20] crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (11 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 12/20] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 14/20] crypto/dpaa2_sec/hw: support snow-snow " Akhil Goyal
` (8 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the ZUC-ZUC PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit SN.
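These same-algorithm 18-bit cases interact with the PDB-size selection introduced in patch 11/20: pairs that fall back to non-PROTOCOL descriptors get a reduced PDB (HFN and bearer/dir words only), while PROTOCOL-capable pairs keep the full PDB. A compilable sketch of that lookup, with the table values copied from the `pdb_mask` array in the patch (enum spellings here are shortened stand-ins for the driver's names):

```c
#include <assert.h>

enum pdb_type { NO_PDB, REDUCED_PDB, FULL_PDB };
enum { C_NULL, C_SNOW, C_AES, C_ZUC, C_INVALID }; /* cipher index */
enum { A_NULL, A_SNOW, A_AES, A_ZUC, A_INVALID }; /* auth index */

/* Values mirror the pdb_mask table added to cnstr_pdcp_u_plane_pdb(). */
static const enum pdb_type pdb_mask[C_INVALID][A_INVALID] = {
    /* NULL cipher */ { NO_PDB,   FULL_PDB,    FULL_PDB,    FULL_PDB },
    /* SNOW f8     */ { FULL_PDB, FULL_PDB,    REDUCED_PDB, REDUCED_PDB },
    /* AES CTR     */ { FULL_PDB, REDUCED_PDB, FULL_PDB,    REDUCED_PDB },
    /* ZUC-E       */ { FULL_PDB, REDUCED_PDB, REDUCED_PDB, FULL_PDB },
};
```

Note that the same-algorithm pairs (SNOW-SNOW, AES-AES, ZUC-ZUC) keep a full PDB, since the PROTOCOL command can still handle them on newer eras.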
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 126 +++++++++++++++++++++++-
1 file changed, 125 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a476b8bde..9fb3d4993 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,130 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_zuc_zuc_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
+
+ if (rta_sec_era < RTA_SEC_ERA_5) {
+ pr_err("Invalid era for selected algorithm\n");
+ return -ENOTSUP;
+ }
+
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF | BOTH);
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ } else {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | FLUSH1);
+ }
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_ENC);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ /* Save ICV */
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, IMMED);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS2 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, WAITCOMP | IMMED);
+ }
+
+ /* Reset ZUCA mode and done interrupt */
+ LOAD(p, CLRW_CLR_C2MODE, CLRW, 0, 4, IMMED);
+ LOAD(p, CIRQ_ZADI, ICTRL, 0, 4, IMMED);
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_aes_aes_op(struct program *p,
bool swap __maybe_unused,
@@ -2877,7 +3001,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
- pdcp_insert_cplane_acc_op /* ZUC-I */
+ pdcp_insert_uplane_zuc_zuc_op /* ZUC-I */
},
};
int err;
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 14/20] crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (12 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 13/20] crypto/dpaa2_sec/hw: support zuc-zuc " Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 15/20] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
` (7 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the SNOW-SNOW (enc-auth) 18-bit PDCP case
for devices which do not support the PROTOCOL command.
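Throughout these enc-auth descriptors, the variable output size (VSEQOUTSZ) is derived from the input size (SEQINSZ) by the 4-byte MAC-I length: encap appends the MAC-I, decap strips it, exactly as the `MATHB(p, SEQINSZ, ADD/SUB, PDCP_MAC_I_LEN, ...)` commands below compute. A trivial C restatement of that arithmetic:

```c
#include <assert.h>
#include <stdint.h>

#define PDCP_MAC_I_LEN 4 /* 32-bit MAC-I produced by SNOW f9 / CMAC / ZUC-I */

/* Output length as the descriptors compute it: encap appends the MAC-I,
 * decap removes it after the ICV check. */
static uint32_t pdcp_out_len(uint32_t in_len, int encap)
{
    return encap ? in_len + PDCP_MAC_I_LEN : in_len - PDCP_MAC_I_LEN;
}
```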
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 133 +++++++++++++++++++++++-
1 file changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9fb3d4993..b514914ec 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,137 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_snow_snow_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ SEQSTORE(p, MATH0, offset, length, 0);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, WAITCOMP | IMMED);
+ if (swap == false) {
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK),
+ MATH3, 4, IMMED2);
+ } else {
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK_BE),
+ MATH3, 4, IMMED2);
+ }
+ MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
+
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ } else {
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQINSZ, 4, IMMED2);
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ else
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_DEC);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST2);
+ SEQFIFOLOAD(p, MSG1, 4, LAST1 | FLUSH1);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CLASS1 | NOP | NIFP);
+
+ if (rta_sec_era >= RTA_SEC_ERA_6)
+ LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
+
+ MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+
+ NFIFOADD(p, IFIFO, ICV2, 4, LAST2);
+
+ if (rta_sec_era <= RTA_SEC_ERA_2) {
+ /* Shut off automatic Info FIFO entries */
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, MATH0, 0, IFIFOAB2, 0, 4, WAITCOMP | IMMED);
+ } else {
+ MOVE(p, MATH0, 0, IFIFO, 0, 4, WAITCOMP | IMMED);
+ }
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_zuc_zuc_op(struct program *p,
bool swap __maybe_unused,
@@ -2987,7 +3118,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
},
{ /* SNOW f8 */
pdcp_insert_cplane_enc_only_op, /* NULL */
- pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_uplane_snow_snow_op, /* SNOW f9 */
pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
},
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 15/20] crypto/dpaa2_sec/hw: support snow-f8
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (13 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 14/20] crypto/dpaa2_sec/hw: support snow-snow " Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 16/20] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
` (6 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for the non-protocol offload mode
of the snow-f8 algorithm.
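With the immediate context LOAD removed, the descriptor now pulls a 16-byte context in via SEQLOAD(CONTEXT1), so the application supplies COUNT/BEARER/DIRECTION as the per-operation cipher IV. A minimal host-side sketch of that packing (helper name hypothetical; the repeated-word, big-endian layout is an assumption based on the usual UEA2 IV convention, not taken from this patch):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: pack COUNT, BEARER and DIRECTION into the 16-byte
 * per-operation IV that the reworked descriptor SEQLOADs into CONTEXT1.
 * UEA2 repeats the two 32-bit words; bytes are stored big-endian. */
static void snow_f8_pack_iv(uint8_t iv[16], uint32_t count,
			    uint8_t bearer, uint8_t direction)
{
	uint32_t w1 = ((uint32_t)(bearer & 0x1f) << 27) |
		      ((uint32_t)(direction & 1) << 26);
	int i;

	for (i = 0; i < 4; i++) {
		iv[i] = (uint8_t)(count >> (24 - 8 * i));
		iv[4 + i] = (uint8_t)(w1 >> (24 - 8 * i));
	}
	memcpy(iv + 8, iv, 8);	/* second half duplicates the first */
}
```

For example, count 0x72A4F20F with bearer 0x0C and direction 1 packs the second word as 0x64000000 (bearer in bits 31..27, direction in bit 26).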
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index b6cfa8704..2a307a3e1 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -24,43 +24,33 @@
* @swap: must be true when core endianness doesn't match SEC endianness
* @cipherdata: pointer to block cipher transform definitions
* @dir: Cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @bearer: UEA2 bearer ID (5 bits)
- * @direction: UEA2 direction (1 bit)
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint32_t ct = count;
- uint8_t br = bearer;
- uint8_t dr = direction;
- uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
-
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
--
2.17.1
* [dpdk-dev] [PATCH v2 16/20] crypto/dpaa2_sec/hw: support snow-f9
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (14 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 15/20] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 17/20] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
` (5 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
Add support for snow-f9 in non-PDCP protocol offload mode.
This essentially adds support for passing a pre-computed IV to SEC.
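Since the descriptor now SEQLOADs a pre-computed IV instead of taking count/fresh/direction as immediates, the application has to build the 16-byte UIA2 auth IV itself. A sketch of that construction (helper name hypothetical; the four-word layout is an assumption per the usual SNOW 3G f9 convention: COUNT, FRESH, COUNT^(DIR<<31), FRESH^(DIR<<15), big-endian):

```c
#include <stdint.h>

/* Hypothetical helper: build the 16-byte SNOW f9 (UIA2) auth IV as four
 * big-endian 32-bit words; layout assumed per the usual UIA2 convention. */
static void snow_f9_build_iv(uint8_t iv[16], uint32_t count,
			     uint32_t fresh, uint8_t direction)
{
	uint32_t d = (uint32_t)(direction & 1);
	uint32_t w[4];
	int i, j;

	w[0] = count;
	w[1] = fresh;
	w[2] = count ^ (d << 31);
	w[3] = fresh ^ (d << 15);

	for (i = 0; i < 4; i++)
		for (j = 0; j < 4; j++)
			iv[4 * i + j] = (uint8_t)(w[i] >> (24 - 8 * j));
}
```

Note that only word 2's top bit differs from COUNT when direction is 1; the driver-side conversion added in the next patches relies on exactly that difference to recover the direction bit.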
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 51 +++++++++++++------------
1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 2a307a3e1..5e8e5e79c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -64,48 +64,49 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
* @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @fresh: UEA2 fresh value ID (32 bits)
- * @direction: UEA2 direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir, uint32_t count,
- uint32_t fresh, uint8_t direction, uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t fr = fresh;
- uint64_t dr = direction;
- uint64_t context[2];
-
- context[0] = (ct << 32) | (dr << 26);
- context[1] = fr << 32;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab64(context[0]);
- context[1] = swab64(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
- /* Save lower half of MAC out into a 32-bit sequence */
- SEQSTORE(p, CONTEXT2, 0, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
* [dpdk-dev] [PATCH v2 17/20] crypto/dpaa2_sec: support snow3g cipher/integrity
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (15 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 16/20] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 18/20] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
` (4 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Vakul Garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add the basic framework to use SNOW 3G f8- and f9-based
ciphering or integrity with the direct crypto APIs.
This patch does not support any combined (cipher + auth) usages yet.
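For SNOW f9, the driver rewrites the application's 16-byte UIA2 IV into the 12-byte COUNT || DIRECTION<<26 || FRESH layout that SEC consumes. A standalone model of the conv_to_snow_f9_iv() helper introduced in this patch (behaviour copied from the patch, only restructured with memmove):

```c
#include <stdint.h>
#include <string.h>

/* Standalone model of conv_to_snow_f9_iv(): rewrite the 16-byte UIA2 IV
 * in place into the 12-byte COUNT || DIRECTION<<26 || FRESH layout SEC
 * consumes, returning a pointer to those 12 bytes. */
static uint8_t *snow_f9_iv_to_sec(uint8_t *iv)
{
	/* DIRECTION is recovered by comparing COUNT with COUNT^(DIR<<31) */
	uint8_t dir_word = (iv[8] == iv[0]) ? 0x00 : 0x04;

	memmove(iv + 12, iv + 4, 4);	/* FRESH -> bytes 12..15    */
	memmove(iv + 4, iv, 4);		/* COUNT -> bytes 4..7      */
	iv[8] = dir_word;		/* DIRECTION<<26, big-endian */
	iv[9] = iv[10] = iv[11] = 0x00;

	return iv + 4;
}
```

The direction recovery only inspects the top byte of word 2, which is enough because DIRECTION<<31 can only flip that word's most significant bit.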
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 310 ++++++++++++++------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 46 ++-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 30 ++
3 files changed, 301 insertions(+), 85 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index d53720ae3..637362e30 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -880,11 +880,26 @@ static inline int build_auth_sg_fd(
struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ int data_len, data_offset;
uint8_t *old_digest;
struct rte_mbuf *mbuf;
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->auth.data.length;
+ data_offset = sym_op->auth.data.offset;
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
mbuf = sym_op->m_src;
fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
RTE_CACHE_LINE_SIZE);
@@ -914,25 +929,51 @@ static inline int build_auth_sg_fd(
/* i/p fle */
DPAA2_SET_FLE_SG_EXT(ip_fle);
DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
- /* i/p 1st seg */
- DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset + mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->auth.data.offset;
+ ip_fle->length = data_len;
- /* i/p segs */
- mbuf = mbuf->next;
- while (mbuf) {
+ if (sess->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sess->iv.offset);
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sge->length = 12;
+ } else if (sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sge->length = 8;
+ } else {
+ sge->length = sess->iv.length;
+ }
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+ ip_fle->length += sge->length;
sge++;
- DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
- sge->length = mbuf->data_len;
- mbuf = mbuf->next;
}
- if (sess->dir == DIR_ENC) {
- /* Digest calculation case */
- sge->length -= sess->digest_length;
- ip_fle->length = sym_op->auth.data.length;
+ /* i/p 1st seg */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+
+ if (data_len <= (mbuf->data_len - data_offset)) {
+ sge->length = data_len;
+ data_len = 0;
} else {
+ sge->length = mbuf->data_len - data_offset;
+
+ /* remaining i/p segs */
+ while ((data_len = data_len - sge->length) &&
+ (mbuf = mbuf->next)) {
+ sge++;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ if (data_len > mbuf->data_len)
+ sge->length = mbuf->data_len;
+ else
+ sge->length = data_len;
+ }
+ }
+
+ if (sess->dir == DIR_DEC) {
/* Digest verification case */
sge++;
old_digest = (uint8_t *)(sge + 1);
@@ -940,8 +981,7 @@ static inline int build_auth_sg_fd(
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
sge->length = sess->digest_length;
- ip_fle->length = sym_op->auth.data.length +
- sess->digest_length;
+ ip_fle->length += sess->digest_length;
}
DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(ip_fle);
@@ -958,11 +998,26 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
struct qbman_fle *fle, *sge;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ int data_len, data_offset;
uint8_t *old_digest;
int retval;
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->auth.data.length;
+ data_offset = sym_op->auth.data.offset;
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
DPAA2_SEC_ERR("AUTH Memory alloc failed for SGE");
@@ -978,64 +1033,72 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
fle = fle + 1;
+ sge = fle + 2;
if (likely(bpid < MAX_BPID)) {
DPAA2_SET_FD_BPID(fd, bpid);
DPAA2_SET_FLE_BPID(fle, bpid);
DPAA2_SET_FLE_BPID(fle + 1, bpid);
+ DPAA2_SET_FLE_BPID(sge, bpid);
+ DPAA2_SET_FLE_BPID(sge + 1, bpid);
} else {
DPAA2_SET_FD_IVP(fd);
DPAA2_SET_FLE_IVP(fle);
DPAA2_SET_FLE_IVP((fle + 1));
+ DPAA2_SET_FLE_IVP(sge);
+ DPAA2_SET_FLE_IVP((sge + 1));
}
+
flc = &priv->flc_desc[DESC_INITFINAL].flc;
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
fle->length = sess->digest_length;
-
- DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_COMPOUND_FMT(fd);
fle++;
- if (sess->dir == DIR_ENC) {
- DPAA2_SET_FLE_ADDR(fle,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
- fle->length = sym_op->auth.data.length;
- } else {
- sge = fle + 2;
- DPAA2_SET_FLE_SG_EXT(fle);
- DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ /* Setting input FLE */
+ DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(fle);
+ fle->length = data_len;
+
+ if (sess->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sess->iv.offset);
- if (likely(bpid < MAX_BPID)) {
- DPAA2_SET_FLE_BPID(sge, bpid);
- DPAA2_SET_FLE_BPID(sge + 1, bpid);
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sge->length = 12;
} else {
- DPAA2_SET_FLE_IVP(sge);
- DPAA2_SET_FLE_IVP((sge + 1));
+ sge->length = sess->iv.length;
}
- DPAA2_SET_FLE_ADDR(sge,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
- sess->digest_length);
- sge->length = sym_op->auth.data.length;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+ fle->length = fle->length + sge->length;
+ sge++;
+ }
+
+ /* Setting data to authenticate */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+ sge->length = data_len;
+
+ if (sess->dir == DIR_DEC) {
sge++;
old_digest = (uint8_t *)(sge + 1);
rte_memcpy(old_digest, sym_op->auth.digest.data,
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
sge->length = sess->digest_length;
- fle->length = sym_op->auth.data.length +
- sess->digest_length;
- DPAA2_SET_FLE_FIN(sge);
+ fle->length = fle->length + sess->digest_length;
}
+
+ DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
+ DPAA2_SET_FD_LEN(fd, fle->length);
return 0;
}
@@ -1046,6 +1109,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *ip_fle, *op_fle, *sge, *fle;
+ int data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
struct rte_mbuf *mbuf;
@@ -1054,6 +1118,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
mbuf = sym_op->m_dst;
else
@@ -1079,20 +1157,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
/* o/p fle */
DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
- op_fle->length = sym_op->cipher.data.length;
+ op_fle->length = data_len;
DPAA2_SET_FLE_SG_EXT(op_fle);
/* o/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset + mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* o/p segs */
@@ -1114,7 +1192,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
mbuf = sym_op->m_src;
sge++;
DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
- ip_fle->length = sess->iv.length + sym_op->cipher.data.length;
+ ip_fle->length = sess->iv.length + data_len;
DPAA2_SET_FLE_SG_EXT(ip_fle);
/* i/p IV */
@@ -1126,9 +1204,8 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
/* i/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* i/p segs */
@@ -1165,7 +1242,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *fle, *sge;
- int retval;
+ int retval, data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1174,6 +1251,20 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
dst = sym_op->m_dst;
else
@@ -1212,24 +1303,22 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
flc = &priv->flc_desc[0].flc;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
- sess->iv.length);
+ DPAA2_SET_FD_LEN(fd, data_len + sess->iv.length);
DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SEC_DP_DEBUG(
"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(dst));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
- dst->data_off);
+ DPAA2_SET_FLE_OFFSET(fle, data_offset + dst->data_off);
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SEC_DP_DEBUG(
"CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
@@ -1239,7 +1328,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle++;
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SET_FLE_SG_EXT(fle);
@@ -1248,10 +1337,9 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- sym_op->m_src->data_off);
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
- sge->length = sym_op->cipher.data.length;
+ sge->length = data_len;
DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
@@ -1762,32 +1850,60 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
/* Set IV parameters */
session->iv.offset = xform->cipher.iv.offset;
session->iv.length = xform->cipher.iv.length;
+ session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ DIR_ENC : DIR_DEC;
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_3DES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CTR;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CTR:
+ cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ cipherdata.algtype = OP_ALG_ALGSEL_SNOW_F8;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
+ bufsize = cnstr_shdsc_snow_f8(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_XTS:
- case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_ARC4:
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
- case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
DPAA2_SEC_ERR("Crypto: Unsupported Cipher alg %u",
xform->cipher.algo);
@@ -1797,12 +1913,7 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
xform->cipher.algo);
goto error_out;
}
- session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0, SHR_NEVER,
- &cipherdata, NULL, session->iv.length,
- session->dir);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Descriptor build failed");
goto error_out;
@@ -1865,40 +1976,77 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
authdata.key_type = RTA_DATA_IMM;
session->digest_length = xform->auth.digest_length;
+ session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+ DIR_ENC : DIR_DEC;
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
authdata.algtype = OP_ALG_ALGSEL_MD5;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA256;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA384;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA512;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA224;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ authdata.algtype = OP_ALG_ALGSEL_SNOW_F9;
+ authdata.algmode = OP_ALG_AAI_F9;
+ session->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ bufsize = cnstr_shdsc_snow_f9(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, &authdata,
+ !session->dir,
+ session->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -1907,10 +2055,9 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_SHA384:
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_GMAC:
- case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_AES_CMAC:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
- case RTE_CRYPTO_AUTH_ZUC_EIA3:
DPAA2_SEC_ERR("Crypto: Unsupported auth alg %un",
xform->auth.algo);
goto error_out;
@@ -1919,12 +2066,7 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
xform->auth.algo);
goto error_out;
}
- session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
- 1, 0, SHR_NEVER, &authdata, !session->dir,
- session->digest_length);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Invalid buffer length");
goto error_out;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 679fd006b..0a81ae52d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -410,7 +410,51 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
-
+ { /* SNOW 3G (UIA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SNOW 3G (UEA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 5e8e5e79c..e01b9b4c9 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -57,6 +57,36 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
return PROGRAM_FINALIZE(p);
}
+/**
+ * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV conversion function for 3G,
+ * from the 16-byte API format to the 12-byte SEC format.
+ * @iv: 16-byte original IV data
+ *
+ * Return: 12-byte IV data as understood by SEC HW
+ */
+
+static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
+{
+ uint8_t temp = (iv[8] == iv[0]) ? 0 : 4;
+
+ iv[12] = iv[4];
+ iv[13] = iv[5];
+ iv[14] = iv[6];
+ iv[15] = iv[7];
+
+ iv[8] = temp;
+ iv[9] = 0x00;
+ iv[10] = 0x00;
+ iv[11] = 0x00;
+
+ iv[4] = iv[0];
+ iv[5] = iv[1];
+ iv[6] = iv[2];
+ iv[7] = iv[3];
+
+ return (iv + 4);
+}
+
/**
* cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
--
2.17.1
* [dpdk-dev] [PATCH v2 18/20] crypto/dpaa2_sec/hw: support kasumi
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (16 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 17/20] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 19/20] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
` (3 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Hemant Agrawal
From: Vakul Garg <vakul.garg@nxp.com>
Add KASUMI processing for non-PDCP protocol offload cases.
Also add support for a pre-computed IV in KASUMI-f9.
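As with snow-f8, the immediate context LOAD in kasumi-f8 is replaced by an 8-byte SEQLOAD into CONTEXT1, so COUNT/BEARER/DIRECTION now travel with each operation as the IV. A sketch of that 8-byte packing (helper name hypothetical; the word values mirror the removed immediate context[] array, and big-endian byte order is an assumption):

```c
#include <stdint.h>

/* Hypothetical helper: pack the 8-byte KASUMI f8 context the reworked
 * descriptor SEQLOADs into CONTEXT1 - COUNT, then (BEARER<<27)|(DIR<<26),
 * stored big-endian; word values mirror the removed context[] array. */
static void kasumi_f8_pack_iv(uint8_t iv[8], uint32_t count,
			      uint8_t bearer, uint8_t direction)
{
	uint32_t w1 = ((uint32_t)(bearer & 0x1f) << 27) |
		      ((uint32_t)(direction & 1) << 26);
	int i;

	for (i = 0; i < 4; i++) {
		iv[i] = (uint8_t)(count >> (24 - 8 * i));
		iv[4 + i] = (uint8_t)(w1 >> (24 - 8 * i));
	}
}
```

Unlike the SNOW f8 case, only 8 bytes are loaded here, matching the SEQLOAD(p, CONTEXT1, 0, 8, 0) in the patch below.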
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 64 +++++++++++--------------
1 file changed, 29 insertions(+), 35 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index e01b9b4c9..88ab40da5 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -349,34 +349,25 @@ cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
*/
static inline int
cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t br = bearer;
- uint64_t dr = direction;
- uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
-
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+ SEQLOAD(p, CONTEXT1, 0, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -390,46 +381,49 @@ cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
- * @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: count value (32 bits)
- * @fresh: fresh value ID (32 bits)
- * @direction: direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir,
- uint32_t count, uint32_t fresh, uint8_t direction,
- uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint16_t ctx_offset = 16;
- uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- context[2] = swab32(context[2]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
- /* Save output MAC of DWORD 2 into a 32-bit sequence */
- SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 19/20] crypto/dpaa2_sec/hw: support ZUCE and ZUCA
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (17 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 18/20] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 20/20] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
` (2 subsequent siblings)
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Hemant Agrawal
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for ZUC encryption and ZUC authentication.
Before passing to CAAM, the 16-byte ZUCA IV is converted to an 8-byte
format consisting of 38 bits of count||bearer||direction.
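The conversion introduced by this patch can be exercised host-side; the sketch below is a standalone mirror of the in-driver `conv_to_zuc_eia_iv()` helper, with byte offsets taken directly from the diff further down.

```c
#include <assert.h>
#include <stdint.h>

/* Standalone mirror of the driver's conv_to_zuc_eia_iv(): converts the
 * 16-byte EIA3 IV into the 8-byte COUNT||BEARER||DIR block (left
 * justified) that SEC expects, reusing the tail of the IV buffer. */
static uint8_t *conv_to_zuc_eia_iv(uint8_t *iv)
{
	/* The direction bit lives in the MSB of iv[14]; it lands at
	 * bit 2 of the bearer byte in the converted IV. */
	uint8_t dir = (iv[14] & 0x80) ? 4 : 0;

	iv[12] = iv[4] | dir;	/* 5-bit bearer || dir, rest zeroed */
	iv[13] = 0;
	iv[14] = 0;
	iv[15] = 0;

	iv[8] = iv[0];		/* 32-bit COUNT copied verbatim */
	iv[9] = iv[1];
	iv[10] = iv[2];
	iv[11] = iv[3];

	return iv + 8;
}
```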
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 136 +++++++++++++++++++++++-
1 file changed, 132 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 88ab40da5..c76842732 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -17,6 +17,103 @@
* Shared descriptors for algorithms (i.e. not for protocols).
*/
+/**
+ * cnstr_shdsc_zuce - ZUC Enc (EEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuce(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *cipherdata, uint8_t dir)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE, OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL, 0, dir);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_zuca - ZUC Auth (EIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @chk_icv: Whether to compare and verify ICV (true/false)
+ * @authlen: size of digest
+ *
+ * The IV prepended before the hmac payload must be 8 bytes consisting
+ * of COUNT||BEARER||DIR. COUNT is 32 bits, bearer is 5 bits and
+ * direction is 1 bit - totalling 38 bits.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuca(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
+{
+ struct program prg;
+ struct program *p = &prg;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 8, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA, OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+
/**
* cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
@@ -58,11 +155,43 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
}
/**
- * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV 16bit to 12 bit convert
+ * conv_to_zuc_eia_iv - ZUCA IV 16-byte to 8-byte convert
+ * function for 3G.
+ * @iv: 16 bytes of original IV data.
+ *
+ * From the original IV, we extract 32-bits of COUNT,
+ * 5-bits of bearer and 1-bit of direction.
+ * Refer to CAAM refman for ZUCA IV format. Then these values are
+ * appended as COUNT||BEARER||DIR continuously to make a 38-bit block.
+ * This 38-bit block is copied left justified into 8-byte array used as
+ * converted IV.
+ *
+ * Return: 8-bytes of IV data as understood by SEC HW
+ */
+
+static inline uint8_t *conv_to_zuc_eia_iv(uint8_t *iv)
+{
+ uint8_t dir = (iv[14] & 0x80) ? 4 : 0;
+
+ iv[12] = iv[4] | dir;
+ iv[13] = 0;
+ iv[14] = 0;
+ iv[15] = 0;
+
+ iv[8] = iv[0];
+ iv[9] = iv[1];
+ iv[10] = iv[2];
+ iv[11] = iv[3];
+
+ return (iv + 8);
+}
+
+/**
+ * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV 16 byte to 12 byte convert
* function for 3G.
- * @iv: 16 bit original IV data
+ * @iv: 16 byte original IV data
*
- * Return: 12 bit IV data as understood by SEC HW
+ * Return: 12 byte IV data as understood by SEC HW
*/
static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
@@ -93,7 +222,6 @@ static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
- * @dir: cipher direction (DIR_ENC/DIR_DEC)
* @chk_icv: check or generate ICV value
* @authlen: size of digest
*
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 20/20] crypto/dpaa2_sec: support zuc ciphering/integrity
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (18 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 19/20] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
@ 2019-09-30 11:52 ` Akhil Goyal
2019-09-30 13:48 ` [dpdk-dev] [PATCH v2 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 11:52 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Vakul Garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 25 +++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 45 +++++++++++++++++++++
2 files changed, 68 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 637362e30..d985e630a 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1072,6 +1072,9 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
iv_ptr = conv_to_snow_f9_iv(iv_ptr);
sge->length = 12;
+ } else if (sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sge->length = 8;
} else {
sge->length = sess->iv.length;
}
@@ -1897,8 +1900,14 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
&cipherdata,
session->dir);
break;
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ cipherdata.algtype = OP_ALG_ALGSEL_ZUCE;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_ZUC_EEA3;
+ bufsize = cnstr_shdsc_zuce(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
@@ -2045,8 +2054,18 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
!session->dir,
session->digest_length);
break;
- case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ authdata.algtype = OP_ALG_ALGSEL_ZUCA;
+ authdata.algmode = OP_ALG_AAI_F9;
+ session->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ bufsize = cnstr_shdsc_zuca(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, &authdata,
+ !session->dir,
+ session->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -2357,6 +2376,7 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
@@ -2651,6 +2671,7 @@ dpaa2_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
cipherdata->algtype = OP_PCL_IPSEC_NULL;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_KASUMI_F8:
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 0a81ae52d..4d682c5d8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -455,6 +455,51 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
+ { /* ZUC (EEA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EIA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 00/20] crypto/dpaaX_sec: Support Wireless algos
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (19 preceding siblings ...)
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 20/20] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
@ 2019-09-30 13:48 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
21 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 13:48 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: aconole, anoobj
>
> PDCP protocol offload using rte_security are supported in
> dpaa2_sec and dpaa_sec drivers.
>
> Wireless algos(SNOW/ZUC) without protocol offload are also
> supported as per crypto APIs.
>
> changes in V2:
> - fix clang build
> - enable zuc authentication
> - minor fixes
>
> Akhil Goyal (5):
> security: add hfn override option in PDCP
> drivers/crypto: support hfn override for NXP PMDs
> crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
> crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
> crypto/dpaa_sec: support scatter gather for PDCP
>
> Hemant Agrawal (4):
> crypto/dpaa2_sec: support CAAM HW era 10
> crypto/dpaa2_sec: support scatter gather for proto offloads
> crypto/dpaa2_sec: support snow3g cipher/integrity
> crypto/dpaa2_sec: support zuc ciphering/integrity
>
> Vakul Garg (11):
> drivers/crypto: support PDCP 12-bit c-plane processing
> drivers/crypto: support PDCP u-plane with integrity
> crypto/dpaa2_sec: disable 'write-safe' for PDCP
> crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
> crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
> crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
> crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
> crypto/dpaa2_sec/hw: support snow-f8
> crypto/dpaa2_sec/hw: support snow-f9
> crypto/dpaa2_sec/hw: support kasumi
> crypto/dpaa2_sec/hw: support ZUCE and ZUCA
>
> drivers/crypto/dpaa2_sec/Makefile | 1 +
> drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 564 +++++--
> drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
> drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 95 +-
> drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/desc/algo.h | 295 +++-
> drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
> .../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
> drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
> drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
> drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
> drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
> drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
> drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
> .../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
> .../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
> .../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
> .../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
> drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
> drivers/crypto/dpaa_sec/dpaa_sec.c | 264 +++-
> drivers/crypto/dpaa_sec/dpaa_sec.h | 4 +-
> lib/librte_security/rte_security.h | 11 +-
> 23 files changed, 2221 insertions(+), 547 deletions(-)
Self NACK
Will send it again.
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 00/24] crypto/dpaaX_sec: Support Wireless algos
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
` (20 preceding siblings ...)
2019-09-30 13:48 ` [dpdk-dev] [PATCH v2 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 01/24] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
` (24 more replies)
21 siblings, 25 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
PDCP protocol offload using rte_security is supported in
the dpaa2_sec and dpaa_sec drivers.
Wireless algos (SNOW/ZUC) without protocol offload are also
supported as per crypto APIs.
changes in v3:
- fix meson build
- fix checkpatch warnings
- include dependent patches (last 4) which were sent separately.
CI was failing due to apply issues.
changes in V2:
- fix clang build
- enable zuc authentication
- minor fixes
Akhil Goyal (6):
security: add hfn override option in PDCP
drivers/crypto: support hfn override for NXP PMDs
crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
crypto/dpaa_sec: support scatter gather for PDCP
crypto/dpaa_sec: change per cryptodev pool to per qp
Hemant Agrawal (7):
crypto/dpaa2_sec: support CAAM HW era 10
crypto/dpaa2_sec: support scatter gather for proto offloads
crypto/dpaa2_sec: support snow3g cipher/integrity
crypto/dpaa2_sec: support zuc ciphering/integrity
crypto/dpaa2_sec: allocate context as per num segs
crypto/dpaa_sec: dynamic contxt buffer for SG cases
crypto/dpaa2_sec: improve debug logging
Vakul Garg (11):
drivers/crypto: support PDCP 12-bit c-plane processing
drivers/crypto: support PDCP u-plane with integrity
crypto/dpaa2_sec: disable 'write-safe' for PDCP
crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
crypto/dpaa2_sec/hw: support snow-f8
crypto/dpaa2_sec/hw: support snow-f9
crypto/dpaa2_sec/hw: support kasumi
crypto/dpaa2_sec/hw: support ZUCE and ZUCA
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 625 ++++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 96 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 295 +++-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 361 +++--
drivers/crypto/dpaa_sec/dpaa_sec.h | 16 +-
lib/librte_security/rte_security.h | 11 +-
22 files changed, 2296 insertions(+), 642 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 01/24] drivers/crypto: support PDCP 12-bit c-plane processing
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 02/24] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
` (23 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
Added support for 12-bit c-plane. We implement it using 'u-plane for RN'
protocol descriptors, because the 'c-plane' protocol descriptors
assume 5-bit sequence numbers. Since the crypto processing remains the same
irrespective of c-plane or u-plane, we choose 'u-plane for RN' protocol
descriptors to implement 12-bit c-plane. 'U-plane for RN' protocol
descriptors support both confidentiality and integrity (required for
c-plane) for 7/12/15-bit sequence numbers.
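The descriptor-class selection rule that the diff below applies in every pdcp_insert_cplane_*_op() can be condensed as follows. The enum values and OP_PCLID_* constants here are illustrative stand-ins mirroring the driver's pdcp.h/desc.h names (the 0x44/0x45 IDs appear in the desc.h hunk; the shift value is an assumption), not authoritative definitions.

```c
#include <assert.h>

/* Illustrative stand-ins for the driver's enum pdcp_sn_size values. */
enum pdcp_sn_size {
	PDCP_SN_SIZE_5 = 5,
	PDCP_SN_SIZE_7 = 7,
	PDCP_SN_SIZE_12 = 12,
	PDCP_SN_SIZE_15 = 15
};

#define OP_PCLID_SHIFT			16	/* assumed shift value */
#define OP_PCLID_LTE_PDCP_CTRL_MIXED	(0x44 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_USER_RN	(0x45 << OP_PCLID_SHIFT)

/* Only 5-bit SN keeps the native c-plane mixed descriptor; every other
 * SN size is routed through the 'u-plane for RN' descriptor, which
 * handles both confidentiality and integrity. */
static int pdcp_cplane_pclid(enum pdcp_sn_size sn_size)
{
	return (sn_size == PDCP_SN_SIZE_5) ?
		OP_PCLID_LTE_PDCP_CTRL_MIXED : OP_PCLID_LTE_PDCP_USER_RN;
}
```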
For little-endian platforms, an incorrect IV is generated if the MOVE command
is used in PDCP non-proto descriptors. This is because the MOVE command
treats data as a word. We changed MOVE to MOVEB since we require the data to
be treated as a byte array. The change works on both ls1046 and ls2088.
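The MOVE-vs-MOVEB distinction can be illustrated in plain C: a byte-granular copy always yields the big-endian byte stream the IV needs, while a word-granular copy reproduces the core's native byte order. This is a sketch of the endianness effect only, not of the actual RTA commands.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* The IV byte stream wants COUNT with the most-significant byte first;
 * this is what a byte-granular copy (the MOVEB analogue) produces. */
static void pack_count_be(uint8_t out[4], uint32_t count)
{
	out[0] = (uint8_t)(count >> 24);
	out[1] = (uint8_t)(count >> 16);
	out[2] = (uint8_t)(count >> 8);
	out[3] = (uint8_t)count;
}

/* A word-granular copy (the MOVE analogue) stores the 32-bit value in
 * the core's native byte order, which reverses the bytes on LE cores. */
static int word_copy_matches_be(uint32_t count)
{
	uint8_t be[4], native[4];

	pack_count_be(be, count);
	memcpy(native, &count, 4);
	return memcmp(be, native, 4) == 0;
}
```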
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 11 +-
drivers/crypto/dpaa2_sec/hw/desc.h | 3 +-
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 176 ++++++++++++++----
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 6 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 7 +-
5 files changed, 160 insertions(+), 43 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index f162eeed2..fae216825 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2695,9 +2695,10 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
@@ -2748,6 +2749,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ session->pdcp.sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
@@ -2757,6 +2759,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
bufsize = cnstr_shdsc_pdcp_c_plane_decap(
priv->flc_desc[0].desc, 1, swap,
pdcp_xform->hfn,
+ session->pdcp.sn_size,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
@@ -2766,7 +2769,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
if (session->dir == DIR_ENC)
bufsize = cnstr_shdsc_pdcp_u_plane_encap(
priv->flc_desc[0].desc, 1, swap,
- (enum pdcp_sn_size)pdcp_xform->sn_size,
+ session->pdcp.sn_size,
pdcp_xform->hfn,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
@@ -2775,7 +2778,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
else if (session->dir == DIR_DEC)
bufsize = cnstr_shdsc_pdcp_u_plane_decap(
priv->flc_desc[0].desc, 1, swap,
- (enum pdcp_sn_size)pdcp_xform->sn_size,
+ session->pdcp.sn_size,
pdcp_xform->hfn,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index 5d99dd8af..e12c3db2f 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -621,6 +621,7 @@
#define OP_PCLID_LTE_PDCP_USER (0x42 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL (0x43 << OP_PCLID_SHIFT)
#define OP_PCLID_LTE_PDCP_CTRL_MIXED (0x44 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER_RN (0x45 << OP_PCLID_SHIFT)
/*
* ProtocolInfo selectors
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index fee844100..607c587e2 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -253,6 +253,7 @@ pdcp_insert_cplane_null_op(struct program *p,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size __maybe_unused,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(local_offset);
@@ -413,8 +414,18 @@ pdcp_insert_cplane_int_only_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata __maybe_unused,
struct alginfo *authdata, unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -720,6 +731,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata __maybe_unused,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
/* Insert Cipher Key */
@@ -727,8 +739,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
cipherdata->keylen, INLINE_KEY(cipherdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- (uint16_t)cipherdata->algtype << 8);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ (uint16_t)cipherdata->algtype << 8);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ (uint16_t)cipherdata->algtype << 8);
return 0;
}
@@ -742,12 +758,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 7, 1, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -771,7 +787,7 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -802,8 +818,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4,
@@ -848,6 +864,7 @@ pdcp_insert_cplane_acc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_hfn_ovrd __maybe_unused)
{
/* Insert Auth Key */
@@ -857,7 +874,14 @@ pdcp_insert_cplane_acc_op(struct program *p,
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL, (uint16_t)cipherdata->algtype);
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL,
+ (uint16_t)cipherdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -868,6 +892,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
LABEL(back_to_sd_offset);
@@ -887,9 +912,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
- ((uint16_t)cipherdata->algtype << 8) |
- (uint16_t)authdata->algtype);
+ if (sn_size == PDCP_SN_SIZE_5)
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
return 0;
}
@@ -1174,6 +1204,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1182,7 +1213,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1281,6 +1319,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1300,7 +1339,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1376,6 +1422,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1393,7 +1440,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1474,6 +1528,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
LABEL(keyjump);
@@ -1491,7 +1546,14 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
INLINE_KEY(authdata));
if (rta_sec_era >= RTA_SEC_ERA_8) {
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
@@ -1594,6 +1656,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
struct alginfo *cipherdata,
struct alginfo *authdata,
unsigned int dir,
+ enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
if (rta_sec_era < RTA_SEC_ERA_5) {
@@ -1602,12 +1665,19 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
if (rta_sec_era >= RTA_SEC_ERA_8) {
+ int pclid;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
((uint16_t)cipherdata->algtype << 8) |
(uint16_t)authdata->algtype);
return 0;
@@ -1754,7 +1824,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
IFB | IMMED2);
SEQSTORE(p, MATH0, 6, 2, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
@@ -1765,7 +1835,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
op = dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC;
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_SNOW:
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1774,7 +1844,7 @@ pdcp_insert_uplane_15bit_op(struct program *p,
break;
case PDCP_CIPHER_TYPE_AES:
- MOVE(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x10, 0x10, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
OP_ALG_AAI_CTR,
OP_ALG_AS_INITFINAL,
@@ -1787,8 +1857,8 @@ pdcp_insert_uplane_15bit_op(struct program *p,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- MOVE(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
- MOVE(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0x08, 0x08, WAITCOMP | IMMED);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
OP_ALG_AAI_F8,
@@ -1885,6 +1955,7 @@ insert_hfn_ov_op(struct program *p,
static inline enum pdb_type_e
cnstr_pdcp_c_plane_pdb(struct program *p,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -1923,18 +1994,36 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
if (rta_sec_era >= RTA_SEC_ERA_8) {
memset(&pdb, 0x00, sizeof(struct pdcp_pdb));
- /* This is a HW issue. Bit 2 should be set to zero,
- * but it does not work this way. Override here.
+ /* To support 12-bit seq numbers, we use u-plane opt in pdb.
+ * SEC supports 5-bit only with c-plane opt in pdb.
*/
- pdb.opt_res.rsvd = 0x00000002;
+ if (sn_size == PDCP_SN_SIZE_12) {
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
+ ((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
+ (direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* Copy relevant information from user to PDB */
- pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
- pdb.bearer_dir_res = (uint32_t)
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT;
+
+ } else {
+ /* This means 5-bit c-plane.
+ * Here we use c-plane opt in pdb
+ */
+
+ /* This is a HW issue. Bit 2 should be set to zero,
+ * but it does not work this way. Override here.
+ */
+ pdb.opt_res.rsvd = 0x00000002;
+
+ /* Copy relevant information from user to PDB */
+ pdb.hfn_res = hfn << PDCP_C_PLANE_PDB_HFN_SHIFT;
+ pdb.bearer_dir_res = (uint32_t)
((bearer << PDCP_C_PLANE_PDB_BEARER_SHIFT) |
- (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
- pdb.hfn_thr_res =
- hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ (direction << PDCP_C_PLANE_PDB_DIR_SHIFT));
+ pdb.hfn_thr_res =
+ hfn_threshold << PDCP_C_PLANE_PDB_HFN_THR_SHIFT;
+ }
/* copy PDB in descriptor*/
__rta_out32(p, pdb.opt_res.opt);
@@ -2053,6 +2142,7 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2077,6 +2167,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2087,7 +2178,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
unsigned char __maybe_unused) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
@@ -2152,6 +2243,11 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2162,6 +2258,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2170,7 +2267,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2180,6 +2277,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_ENCAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2197,6 +2295,7 @@ cnstr_shdsc_pdcp_c_plane_encap(uint32_t *descbuf,
* @swap: must be true when core endianness doesn't match SEC endianness
* @hfn: starting Hyper Frame Number to be used together with the SN from the
* PDCP frames.
+ * @sn_size: size of sequence numbers, only 5/12 bit sequence numbers are valid
* @bearer: radio bearer ID
* @direction: the direction of the PDCP frame (UL/DL)
* @hfn_threshold: HFN value that once reached triggers a warning from SEC that
@@ -2222,6 +2321,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
bool ps,
bool swap,
uint32_t hfn,
+ enum pdcp_sn_size sn_size,
unsigned char bearer,
unsigned char direction,
uint32_t hfn_threshold,
@@ -2232,7 +2332,8 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
static int
(*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
(struct program*, bool swap, struct alginfo *,
- struct alginfo *, unsigned int, unsigned char) = {
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char) = {
{ /* NULL */
pdcp_insert_cplane_null_op, /* NULL */
pdcp_insert_cplane_int_only_op, /* SNOW f9 */
@@ -2296,6 +2397,11 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (sn_size != PDCP_SN_SIZE_12 && sn_size != PDCP_SN_SIZE_5) {
+ pr_err("C-plane supports only 5-bit and 12-bit sequence numbers\n");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2306,6 +2412,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
pdb_type = cnstr_pdcp_c_plane_pdb(p,
hfn,
+ sn_size,
bearer,
direction,
hfn_threshold,
@@ -2314,7 +2421,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, PDCP_SN_SIZE_5, pdb_type,
+ err = insert_hfn_ov_op(p, sn_size, pdb_type,
era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2324,6 +2431,7 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
cipherdata,
authdata,
OP_TYPE_DECAP_PROTOCOL,
+ sn_size,
era_2_sw_hfn_ovrd);
if (err)
return err;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index cf8dfb910..82581edf5 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016, 2019 NXP
*
*/
@@ -596,13 +596,15 @@ static const struct proto_map proto_table[] = {
/*38*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
__rta_lte_pdcp_mixed_proto},
{OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW, __rta_ipsec_proto},
+/*40*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER_RN,
+ __rta_lte_pdcp_mixed_proto},
};
/*
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 39};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index b3ac70633..1532eebc5 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -471,6 +471,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -480,6 +481,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_decap(
cdb->sh_desc, 1, swap,
ses->pdcp.hfn,
+ ses->pdcp.sn_size,
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
@@ -2399,9 +2401,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
/* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5) {
+ if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
+ pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
DPAA_SEC_ERR(
- "PDCP Seq Num size should be 5 bits for cmode");
+ "PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
if (auth_xform) {
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 02/24] drivers/crypto: support PDCP u-plane with integrity
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 01/24] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 03/24] security: add hfn override option in PDCP Akhil Goyal
` (22 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP u-plane may optionally support integrity protection as well.
This patch adds support for integrity along with
confidentiality.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 67 +++++------
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 75 ++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.c | 116 +++++++++-----------
3 files changed, 144 insertions(+), 114 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index fae216825..75a4fe0fa 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2591,6 +2591,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
struct ctxt_priv *priv;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
+ struct alginfo *p_authdata = NULL;
int bufsize = -1;
struct sec_flow_context *flc;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -2693,39 +2694,32 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
- /* Auth is only applicable for control mode operation. */
- if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
- if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
- pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
- DPAA2_SEC_ERR(
- "PDCP Seq Num size should be 5/12 bits for cmode");
- goto out;
- }
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA2_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- rte_free(priv);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA2_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ rte_free(priv);
+ return -ENOMEM;
}
- authdata.key = (size_t)session->auth_key.data;
- authdata.keylen = session->auth_key.length;
- authdata.key_enc_flags = 0;
- authdata.key_type = RTA_DATA_IMM;
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
+ }
+ authdata.key = (size_t)session->auth_key.data;
+ authdata.keylen = session->auth_key.length;
+ authdata.key_enc_flags = 0;
+ authdata.key_type = RTA_DATA_IMM;
+ if (session->auth_alg) {
switch (session->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -2745,6 +2739,13 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
goto out;
}
+ p_authdata = &authdata;
+ } else if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ DPAA2_SEC_ERR("Crypto: Integrity must for c-plane");
+ goto out;
+ }
+
+ if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (session->dir == DIR_ENC)
bufsize = cnstr_shdsc_pdcp_c_plane_encap(
priv->flc_desc[0].desc, 1, swap,
@@ -2774,7 +2775,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (session->dir == DIR_DEC)
bufsize = cnstr_shdsc_pdcp_u_plane_decap(
priv->flc_desc[0].desc, 1, swap,
@@ -2783,7 +2784,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
pdcp_xform->bearer,
pdcp_xform->pkt_dir,
pdcp_xform->hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
if (bufsize < 0) {
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 607c587e2..a636640c4 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1801,9 +1801,16 @@ static inline int
pdcp_insert_uplane_15bit_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned int dir)
{
int op;
+
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
@@ -2478,6 +2485,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2490,6 +2498,11 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2509,6 +2522,13 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2518,20 +2538,24 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_ENCAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2551,7 +2575,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_ENCAP_PROTOCOL);
+ authdata, OP_TYPE_ENCAP_PROTOCOL);
if (err)
return err;
break;
@@ -2605,6 +2629,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
unsigned short direction,
uint32_t hfn_threshold,
struct alginfo *cipherdata,
+ struct alginfo *authdata,
unsigned char era_2_sw_hfn_ovrd)
{
struct program prg;
@@ -2617,6 +2642,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
return -EINVAL;
}
+ if (authdata && !authdata->algtype && rta_sec_era < RTA_SEC_ERA_8) {
+ pr_err("Cannot use u-plane auth with era < 8");
+ return -EINVAL;
+ }
+
PROGRAM_CNTXT_INIT(p, descbuf, 0);
if (swap)
PROGRAM_SET_BSWAP(p);
@@ -2636,6 +2666,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2645,20 +2681,23 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
+ /* fallthrough */
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
+ case PDCP_CIPHER_TYPE_NULL:
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
INLINE_KEY(cipherdata));
- PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
- OP_PCLID_LTE_PDCP_USER,
- (uint16_t)cipherdata->algtype);
- break;
- case PDCP_CIPHER_TYPE_NULL:
- insert_copy_frame_op(p,
- cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ if (authdata)
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ else
+ PROTOCOL(p, OP_TYPE_DECAP_PROTOCOL,
+ OP_PCLID_LTE_PDCP_USER,
+ (uint16_t)cipherdata->algtype);
break;
default:
pr_err("%s: Invalid encrypt algorithm selected: %d\n",
@@ -2678,7 +2717,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
default:
err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- OP_TYPE_DECAP_PROTOCOL);
+ authdata, OP_TYPE_DECAP_PROTOCOL);
if (err)
return err;
break;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 1532eebc5..c3fbcc11d 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -384,6 +384,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
{
struct alginfo authdata = {0}, cipherdata = {0};
struct sec_cdb *cdb = &ses->cdb;
+ struct alginfo *p_authdata = NULL;
int32_t shared_desc_len = 0;
int err;
#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
@@ -416,7 +417,11 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
- if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
+ cdb->sh_desc[0] = cipherdata.keylen;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->auth_alg) {
switch (ses->auth_alg) {
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
authdata.algtype = PDCP_AUTH_TYPE_SNOW;
@@ -441,32 +446,36 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
- cdb->sh_desc[0] = cipherdata.keylen;
+ p_authdata = &authdata;
+
cdb->sh_desc[1] = authdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 2);
+ }
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- if (!(cdb->sh_desc[2] & (1<<1)) && authdata.keylen) {
- authdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)authdata.key);
- authdata.key_type = RTA_DATA_PTR;
- }
+ err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
+ MIN_JOB_DESC_SIZE,
+ (unsigned int *)cdb->sh_desc,
+ &cdb->sh_desc[2], 2);
+ if (err < 0) {
+ DPAA_SEC_ERR("Crypto: Incorrect key lengths");
+ return err;
+ }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
+ if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
+ cipherdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)cipherdata.key);
+ cipherdata.key_type = RTA_DATA_PTR;
+ }
+ if (!(cdb->sh_desc[2] & (1 << 1)) && authdata.keylen) {
+ authdata.key =
+ (size_t)dpaa_mem_vtop((void *)(size_t)authdata.key);
+ authdata.key_type = RTA_DATA_PTR;
+ }
+ cdb->sh_desc[0] = 0;
+ cdb->sh_desc[1] = 0;
+ cdb->sh_desc[2] = 0;
+
+ if (ses->pdcp.domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_c_plane_encap(
cdb->sh_desc, 1, swap,
@@ -488,25 +497,6 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
&cipherdata, &authdata,
0);
} else {
- cdb->sh_desc[0] = cipherdata.keylen;
- err = rta_inline_query(IPSEC_AUTH_VAR_AES_DEC_BASE_DESC_LEN,
- MIN_JOB_DESC_SIZE,
- (unsigned int *)cdb->sh_desc,
- &cdb->sh_desc[2], 1);
-
- if (err < 0) {
- DPAA_SEC_ERR("Crypto: Incorrect key lengths");
- return err;
- }
- if (!(cdb->sh_desc[2] & 1) && cipherdata.keylen) {
- cipherdata.key = (size_t)dpaa_mem_vtop(
- (void *)(size_t)cipherdata.key);
- cipherdata.key_type = RTA_DATA_PTR;
- }
- cdb->sh_desc[0] = 0;
- cdb->sh_desc[1] = 0;
- cdb->sh_desc[2] = 0;
-
if (ses->dir == DIR_ENC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_encap(
cdb->sh_desc, 1, swap,
@@ -515,7 +505,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
else if (ses->dir == DIR_DEC)
shared_desc_len = cnstr_shdsc_pdcp_u_plane_decap(
cdb->sh_desc, 1, swap,
@@ -524,7 +514,7 @@ dpaa_sec_prep_pdcp_cdb(dpaa_sec_session *ses)
ses->pdcp.bearer,
ses->pdcp.pkt_dir,
ses->pdcp.hfn_threshold,
- &cipherdata, 0);
+ &cipherdata, p_authdata, 0);
}
return shared_desc_len;
@@ -2399,7 +2389,6 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->dir = DIR_ENC;
}
- /* Auth is only applicable for control mode operation. */
if (pdcp_xform->domain == RTE_SECURITY_PDCP_MODE_CONTROL) {
if (pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_5 &&
pdcp_xform->sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) {
@@ -2407,25 +2396,26 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
"PDCP Seq Num size should be 5/12 bits for cmode");
goto out;
}
- if (auth_xform) {
- session->auth_key.data = rte_zmalloc(NULL,
- auth_xform->key.length,
- RTE_CACHE_LINE_SIZE);
- if (session->auth_key.data == NULL &&
- auth_xform->key.length > 0) {
- DPAA_SEC_ERR("No Memory for auth key");
- rte_free(session->cipher_key.data);
- return -ENOMEM;
- }
- session->auth_key.length = auth_xform->key.length;
- memcpy(session->auth_key.data, auth_xform->key.data,
- auth_xform->key.length);
- session->auth_alg = auth_xform->algo;
- } else {
- session->auth_key.data = NULL;
- session->auth_key.length = 0;
- session->auth_alg = RTE_CRYPTO_AUTH_NULL;
+ }
+
+ if (auth_xform) {
+ session->auth_key.data = rte_zmalloc(NULL,
+ auth_xform->key.length,
+ RTE_CACHE_LINE_SIZE);
+ if (!session->auth_key.data &&
+ auth_xform->key.length > 0) {
+ DPAA_SEC_ERR("No Memory for auth key");
+ rte_free(session->cipher_key.data);
+ return -ENOMEM;
}
+ session->auth_key.length = auth_xform->key.length;
+ memcpy(session->auth_key.data, auth_xform->key.data,
+ auth_xform->key.length);
+ session->auth_alg = auth_xform->algo;
+ } else {
+ session->auth_key.data = NULL;
+ session->auth_key.length = 0;
+ session->auth_alg = 0;
}
session->pdcp.domain = pdcp_xform->domain;
session->pdcp.bearer = pdcp_xform->bearer;
--
2.17.1
* [dpdk-dev] [PATCH v3 03/24] security: add hfn override option in PDCP
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 01/24] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 02/24] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 04/24] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
` (21 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
HFN can also be given as a per-packet value.
Since PDCP has no IV, and the HFN is used to generate
the IV, the IV field can be used to carry the per-packet
HFN during enqueue/dequeue.
If the hfn_ovrd field in pdcp_xform is set, the
application is expected to set the per-packet HFN
in place of the IV. The driver will extract the HFN and
perform operations accordingly.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_security/rte_security.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 2d064f4d0..aaafdfcd7 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017 NXP.
+ * Copyright 2017,2019 NXP
* Copyright(c) 2017 Intel Corporation.
*/
@@ -278,6 +278,15 @@ struct rte_security_pdcp_xform {
uint32_t hfn;
/** HFN Threshold for key renegotiation */
uint32_t hfn_threshold;
+ /** HFN can be given as a per packet value also.
+ * As we do not have IV in case of PDCP, and HFN is
+ * used to generate IV. IV field can be used to get the
+ * per packet HFN while enq/deq.
+ * If hfn_ovrd field is set, user is expected to set the
+ * per packet HFN in place of IV. PMDs will extract the HFN
+ * and perform operations accordingly.
+ */
+ uint32_t hfn_ovrd;
};
/**
--
2.17.1
* [dpdk-dev] [PATCH v3 04/24] drivers/crypto: support hfn override for NXP PMDs
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (2 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 03/24] security: add hfn override option in PDCP Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 05/24] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
` (20 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
Per-packet HFN override is supported in the NXP PMDs
(dpaa2_sec and dpaa_sec). The DPOVRD register can be
updated with the per-packet value if this is enabled
in the session configuration. The value is read from
the IV offset.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++++++--------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 5 ++++-
drivers/crypto/dpaa_sec/dpaa_sec.c | 19 ++++++++++++++++---
drivers/crypto/dpaa_sec/dpaa_sec.h | 5 ++++-
4 files changed, 34 insertions(+), 13 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 75a4fe0fa..7946abf40 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -125,14 +125,16 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
DPAA2_SET_FD_LEN(fd, ip_fle->length);
DPAA2_SET_FLE_FIN(ip_fle);
-#ifdef ENABLE_HFN_OVERRIDE
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
/*enable HFN override override */
- DPAA2_SET_FLE_INTERNAL_JD(ip_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FLE_INTERNAL_JD(op_fle, sess->pdcp.hfn_ovd);
- DPAA2_SET_FD_INTERNAL_JD(fd, sess->pdcp.hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
}
-#endif
return 0;
@@ -2664,11 +2666,11 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ /* hfv ovd offset location is stored in iv.offset value*/
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
cipherdata.key = (size_t)session->cipher_key.data;
cipherdata.keylen = session->cipher_key.length;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index a05deaebd..afd98b2d5 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -147,9 +147,12 @@ struct dpaa2_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ * per packet hfn is stored
+ */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa2_sec_session_entry {
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index c3fbcc11d..3fc4a606f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1764,6 +1764,20 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (auth_only_len)
fd->cmd = 0x80000000 | auth_only_len;
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (is_proto_pdcp(ses) && ses->pdcp.hfn_ovd) {
+ fd->cmd = 0x80000000 |
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset));
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u,%u\n",
+ *((uint32_t *)((uint8_t *)op +
+ ses->pdcp.hfn_ovd_offset)),
+ ses->pdcp.hfn_ovd,
+ is_proto_pdcp(ses));
+ }
+
}
send_pkts:
loop = 0;
@@ -2421,11 +2435,10 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.bearer = pdcp_xform->bearer;
session->pdcp.pkt_dir = pdcp_xform->pkt_dir;
session->pdcp.sn_size = pdcp_xform->sn_size;
-#ifdef ENABLE_HFN_OVERRIDE
- session->pdcp.hfn_ovd = pdcp_xform->hfn_ovd;
-#endif
session->pdcp.hfn = pdcp_xform->hfn;
session->pdcp.hfn_threshold = pdcp_xform->hfn_threshold;
+ session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
+ session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
session->ctx_pool = dev_priv->ctx_pool;
rte_spinlock_lock(&dev_priv->lock);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 08e7d66e5..68461cecc 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -103,9 +103,12 @@ struct sec_pdcp_ctxt {
int8_t bearer; /*!< PDCP bearer ID */
int8_t pkt_dir;/*!< PDCP Frame Direction 0:UL 1:DL*/
int8_t hfn_ovd;/*!< Overwrite HFN per packet*/
+ uint8_t sn_size; /*!< Sequence number size, 5/7/12/15/18 */
+ uint32_t hfn_ovd_offset;/*!< offset from rte_crypto_op at which
+ * per packet hfn is stored
+ */
uint32_t hfn; /*!< Hyper Frame Number */
uint32_t hfn_threshold; /*!< HFN Threashold for key renegotiation */
- uint8_t sn_size; /*!< Sequence number size, 7/12/15 */
};
typedef struct dpaa_sec_session_entry {
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 05/24] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (3 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 04/24] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 06/24] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
` (19 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
Support the following cases:
int-only (NULL-NULL, NULL-SNOW, NULL-AES, NULL-ZUC)
enc-only (SNOW-NULL, AES-NULL, ZUC-NULL)
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 532 +++++++++++++++++++-----
1 file changed, 420 insertions(+), 112 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a636640c4..9a73105ac 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause or GPL-2.0+
* Copyright 2008-2013 Freescale Semiconductor, Inc.
+ * Copyright 2019 NXP
*/
#ifndef __DESC_PDCP_H__
@@ -52,6 +53,14 @@
#define PDCP_U_PLANE_15BIT_SN_MASK 0xFF7F0000
#define PDCP_U_PLANE_15BIT_SN_MASK_BE 0x00007FFF
+/**
+ * PDCP_U_PLANE_18BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_U_PLANE_18BIT_SN_MASK 0xFFFF0300
+#define PDCP_U_PLANE_18BIT_SN_MASK_BE 0x0003FFFF
+
/**
* PDCP_BEARER_MASK - This mask is used masking out the bearer for PDCP
* processing with SNOW f9 in LTE.
@@ -192,7 +201,8 @@ enum pdcp_sn_size {
PDCP_SN_SIZE_5 = 5,
PDCP_SN_SIZE_7 = 7,
PDCP_SN_SIZE_12 = 12,
- PDCP_SN_SIZE_15 = 15
+ PDCP_SN_SIZE_15 = 15,
+ PDCP_SN_SIZE_18 = 18
};
/*
@@ -205,14 +215,17 @@ enum pdcp_sn_size {
#define PDCP_U_PLANE_PDB_OPT_SHORT_SN 0x2
#define PDCP_U_PLANE_PDB_OPT_15B_SN 0x4
+#define PDCP_U_PLANE_PDB_OPT_18B_SN 0x6
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT 18
#define PDCP_U_PLANE_PDB_BEARER_SHIFT 27
#define PDCP_U_PLANE_PDB_DIR_SHIFT 26
#define PDCP_U_PLANE_PDB_SHORT_SN_HFN_THR_SHIFT 7
#define PDCP_U_PLANE_PDB_LONG_SN_HFN_THR_SHIFT 12
#define PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT 15
+#define PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT 18
struct pdcp_pdb {
union {
@@ -417,6 +430,9 @@ pdcp_insert_cplane_int_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ /* 12 bit SN is only supported for protocol offload case */
if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_12) {
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -426,6 +442,27 @@ pdcp_insert_cplane_int_only_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
LABEL(local_offset);
REFERENCE(move_cmd_read_descbuf);
REFERENCE(move_cmd_write_descbuf);
@@ -435,20 +472,20 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -461,7 +498,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, CONTEXT2, 0, 0x0C, WAITCOMP | IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -553,19 +590,19 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -573,7 +610,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
MOVEB(p, MATH2, 0, IFIFOAB1, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -665,11 +702,11 @@ pdcp_insert_cplane_int_only_op(struct program *p,
/* Insert Auth Key */
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
if (swap == false) {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -678,7 +715,7 @@ pdcp_insert_cplane_int_only_op(struct program *p,
MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
} else {
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8,
IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
@@ -734,11 +771,12 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -747,16 +785,32 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
(uint16_t)cipherdata->algtype << 8);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
- SEQLOAD(p, MATH0, 7, 1, 0);
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -895,6 +949,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(back_to_sd_offset);
LABEL(end_desc);
LABEL(local_offset);
@@ -906,7 +962,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -923,19 +979,35 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
@@ -1207,12 +1279,14 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1226,21 +1300,37 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
@@ -1322,6 +1412,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1338,7 +1430,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1351,16 +1443,32 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1374,7 +1482,7 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1425,6 +1533,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1439,7 +1548,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1453,17 +1562,33 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1477,7 +1602,7 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1531,6 +1656,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
LABEL(keyjump);
REFERENCE(pkeyjump);
@@ -1545,7 +1671,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1559,17 +1685,32 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
SET_LABEL(p, keyjump);
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
@@ -1599,7 +1740,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
MATHB(p, VSEQOUTSZ, SUB, ZERO, VSEQINSZ, 4, 0);
}
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -1659,12 +1800,13 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
enum pdcp_sn_size sn_size,
unsigned char era_2_sw_hfn_ovrd __maybe_unused)
{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
if (rta_sec_era < RTA_SEC_ERA_5) {
pr_err("Invalid era for selected algorithm\n");
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
@@ -1682,20 +1824,35 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
(uint16_t)authdata->algtype);
return 0;
}
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
- SEQLOAD(p, MATH0, 7, 1, 0);
+ SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_C_PLANE_SN_MASK, MATH1, 8,
- IFB | IMMED2);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- SEQSTORE(p, MATH0, 7, 1, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
@@ -1798,38 +1955,43 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
}
static inline int
-pdcp_insert_uplane_15bit_op(struct program *p,
+pdcp_insert_uplane_no_int_op(struct program *p,
bool swap __maybe_unused,
struct alginfo *cipherdata,
- struct alginfo *authdata,
- unsigned int dir)
+ unsigned int dir,
+ enum pdcp_sn_size sn_size)
{
int op;
-
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
- authdata->keylen, INLINE_KEY(authdata));
+ uint32_t sn_mask;
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size == PDCP_SN_SIZE_15) ||
+ (rta_sec_era >= RTA_SEC_ERA_10)) {
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER,
(uint16_t)cipherdata->algtype);
return 0;
}
- SEQLOAD(p, MATH0, 6, 2, 0);
+ if (sn_size == PDCP_SN_SIZE_15) {
+ SEQLOAD(p, MATH0, 6, 2, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_15BIT_SN_MASK :
+ PDCP_U_PLANE_15BIT_SN_MASK_BE;
+ } else { /* SN Size == PDCP_SN_SIZE_18 */
+ SEQLOAD(p, MATH0, 5, 3, 0);
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ }
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- if (swap == false)
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK, MATH1, 8,
- IFB | IMMED2);
- else
- MATHB(p, MATH0, AND, PDCP_U_PLANE_15BIT_SN_MASK_BE, MATH1, 8,
- IFB | IMMED2);
- SEQSTORE(p, MATH0, 6, 2, 0);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ if (sn_size == PDCP_SN_SIZE_15)
+ SEQSTORE(p, MATH0, 6, 2, 0);
+ else /* SN Size == PDCP_SN_SIZE_18 */
+ SEQSTORE(p, MATH0, 5, 3, 0);
+
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
@@ -2124,6 +2286,13 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
hfn_threshold<<PDCP_U_PLANE_PDB_15BIT_SN_HFN_THR_SHIFT;
break;
+ case PDCP_SN_SIZE_18:
+ pdb.opt_res.opt = (uint32_t)(PDCP_U_PLANE_PDB_OPT_18B_SN);
+ pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
+ pdb.hfn_thr_res =
+ hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+ break;
+
default:
pr_err("Invalid Sequence Number Size setting in PDB\n");
return -EINVAL;
@@ -2448,6 +2617,61 @@ cnstr_shdsc_pdcp_c_plane_decap(uint32_t *descbuf,
return PROGRAM_FINALIZE(p);
}
+static int
+pdcp_insert_uplane_with_int_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd,
+ unsigned int dir)
+{
+ static int
+ (*pdcp_cp_fp[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID])
+ (struct program*, bool swap, struct alginfo *,
+ struct alginfo *, unsigned int, enum pdcp_sn_size,
+ unsigned char __maybe_unused) = {
+ { /* NULL */
+ pdcp_insert_cplane_null_op, /* NULL */
+ pdcp_insert_cplane_int_only_op, /* SNOW f9 */
+ pdcp_insert_cplane_int_only_op, /* AES CMAC */
+ pdcp_insert_cplane_int_only_op /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
+ },
+ { /* AES CTR */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
+ },
+ { /* ZUC-E */
+ pdcp_insert_cplane_enc_only_op, /* NULL */
+ pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
+ pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
+ pdcp_insert_cplane_acc_op /* ZUC-I */
+ },
+ };
+ int err;
+
+ err = pdcp_cp_fp[cipherdata->algtype][authdata->algtype](p,
+ swap,
+ cipherdata,
+ authdata,
+ dir,
+ sn_size,
+ era_2_sw_hfn_ovrd);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+
/**
* cnstr_shdsc_pdcp_u_plane_encap - Function for creating a PDCP User Plane
* encapsulation descriptor.
@@ -2491,6 +2715,33 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2509,7 +2760,10 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2522,13 +2776,6 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype) {
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- }
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2542,6 +2789,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype) {
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+ }
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
(uint64_t)cipherdata->key, cipherdata->keylen,
@@ -2566,6 +2819,18 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2574,8 +2839,8 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_ENCAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_ENCAP_PROTOCOL, sn_size);
if (err)
return err;
break;
@@ -2635,6 +2900,34 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ static enum rta_share_type
+ desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ SHR_WAIT, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* AES CTR */
+ SHR_ALWAYS, /* NULL */
+ SHR_ALWAYS, /* SNOW f9 */
+ SHR_ALWAYS, /* AES CMAC */
+ SHR_WAIT /* ZUC-I */
+ },
+ { /* ZUC-E */
+ SHR_ALWAYS, /* NULL */
+ SHR_WAIT, /* SNOW f9 */
+ SHR_WAIT, /* AES CMAC */
+ SHR_ALWAYS /* ZUC-I */
+ },
+ };
+
LABEL(pdb_end);
if (rta_sec_era != RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -2652,8 +2945,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
PROGRAM_SET_BSWAP(p);
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+ if (authdata)
+ SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
+ else
+ SHR_HDR(p, SHR_ALWAYS, 0, 0);
- SHR_HDR(p, SHR_ALWAYS, 0, 0);
if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
hfn_threshold)) {
pr_err("Error creating PDCP UPlane PDB\n");
@@ -2666,12 +2962,6 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
if (err)
return err;
- /* Insert auth key if requested */
- if (authdata && authdata->algtype)
- KEY(p, KEY2, authdata->key_enc_flags,
- (uint64_t)authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
-
switch (sn_size) {
case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
@@ -2685,6 +2975,12 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ /* Insert auth key if requested */
+ if (authdata && authdata->algtype)
+ KEY(p, KEY2, authdata->key_enc_flags,
+ (uint64_t)authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
/* Insert Cipher Key */
KEY(p, KEY1, cipherdata->key_enc_flags,
cipherdata->key, cipherdata->keylen,
@@ -2708,6 +3004,18 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
case PDCP_SN_SIZE_15:
+ case PDCP_SN_SIZE_18:
+ if (authdata) {
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_DECAP_PROTOCOL);
+ if (err)
+ return err;
+
+ break;
+ }
+
switch (cipherdata->algtype) {
case PDCP_CIPHER_TYPE_NULL:
insert_copy_frame_op(p,
@@ -2716,8 +3024,8 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
break;
default:
- err = pdcp_insert_uplane_15bit_op(p, swap, cipherdata,
- authdata, OP_TYPE_DECAP_PROTOCOL);
+ err = pdcp_insert_uplane_no_int_op(p, swap, cipherdata,
+ OP_TYPE_DECAP_PROTOCOL, sn_size);
if (err)
return err;
break;
--
2.17.1
* [dpdk-dev] [PATCH v3 06/24] crypto/dpaa2_sec: support CAAM HW era 10
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (4 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 05/24] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 07/24] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
` (18 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add minimal support for CAAM HW era 10 (used in LX2).
Primary changes are:
1. increased shared descriptor length from 6 bits to 7 bits
2. support for several PDCP operations as protocol offload
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc.h | 5 ++++
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 21 ++++++++++-----
.../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 ++++---
| 21 ++++++++++++---
drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +--
drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 ++--
drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 ++++---
drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +++++----
drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +++---
drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +++----
.../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 ++---
.../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 9 +++++--
.../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +++++++++++++------
.../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +++--
drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 ++---
16 files changed, 110 insertions(+), 54 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 7946abf40..9108b3c43 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3450,6 +3450,11 @@ cryptodev_dpaa2_sec_probe(struct rte_dpaa2_driver *dpaa2_drv __rte_unused,
/* init user callbacks */
TAILQ_INIT(&(cryptodev->link_intr_cbs));
+ if (dpaa2_svr_family == SVR_LX2160A)
+ rta_set_sec_era(RTA_SEC_ERA_10);
+
+ DPAA2_SEC_INFO("2-SEC ERA is %d", rta_get_sec_era());
+
/* Invoke PMD device initialization function */
retval = dpaa2_sec_dev_init(cryptodev);
if (retval == 0)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc.h b/drivers/crypto/dpaa2_sec/hw/desc.h
index e12c3db2f..667da971b 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc.h
@@ -18,6 +18,8 @@
#include "hw/compat.h"
#endif
+extern enum rta_sec_era rta_sec_era;
+
/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64
@@ -113,9 +115,12 @@
/* Start Index or SharedDesc Length */
#define HDR_START_IDX_SHIFT 16
#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)
+#define HDR_START_IDX_MASK_ERA10 (0x7f << HDR_START_IDX_SHIFT)
/* If shared descriptor header, 6-bit length */
#define HDR_DESCLEN_SHR_MASK 0x3f
+/* If shared descriptor header, 7-bit length era10 onwards*/
+#define HDR_DESCLEN_SHR_MASK_ERA10 0x7f
/* If non-shared header, 7-bit length */
#define HDR_DESCLEN_MASK 0x7f
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9a73105ac..4bf1d69f9 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -776,7 +776,8 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -962,7 +963,8 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
REFERENCE(jump_back_to_sd_cmd);
REFERENCE(move_mac_i_to_desc_buf);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
@@ -1286,7 +1288,8 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1430,7 +1433,8 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1548,7 +1552,8 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1671,7 +1676,8 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
INLINE_KEY(authdata));
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
if (sn_size == PDCP_SN_SIZE_5)
@@ -1806,7 +1812,8 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
return -ENOTSUP;
}
- if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
+ (rta_sec_era == RTA_SEC_ERA_10)) {
int pclid;
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
index 8c807aaa2..287e09cd7 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/fifo_load_store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
@@ -42,7 +41,8 @@ static const uint32_t fifo_load_table[][2] = {
* supported.
*/
static const unsigned int fifo_load_table_sz[] = {22, 22, 23, 23,
- 23, 23, 23, 23};
+ 23, 23, 23, 23,
+ 23, 23};
static inline int
rta_fifo_load(struct program *program, uint32_t src,
@@ -201,7 +201,8 @@ static const uint32_t fifo_store_table[][2] = {
* supported.
*/
static const unsigned int fifo_store_table_sz[] = {21, 21, 21, 21,
- 22, 22, 22, 23};
+ 22, 22, 22, 23,
+ 23, 23};
static inline int
rta_fifo_store(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
index 0c7ea9387..45aefa04c 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_HEADER_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t job_header_flags[] = {
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
+ DNR | TD | MTD | SHR | REO | EXT,
DNR | TD | MTD | SHR | REO | EXT
};
@@ -31,6 +32,8 @@ static const uint32_t shr_header_flags[] = {
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
DNR | SC | PD | CIF | RIF
};
@@ -72,7 +75,12 @@ rta_shr_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & DNR)
opcode |= HDR_DNR;
@@ -160,7 +168,12 @@ rta_job_header(struct program *program,
}
opcode |= HDR_ONE;
- opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ else
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK;
if (flags & EXT) {
opcode |= HDR_EXT;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
index 546d22e98..18f781e37 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_JUMP_CMD_H__
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
index 1ec21234a..ec3fbcaf6 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_KEY_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t key_enc_flags[] = {
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK,
ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS,
ENC | NWB | EKT | TK | PTS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
index f3b0dcfcb..38e253c22 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_LOAD_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t load_len_mask_allowed[] = {
0x000000fe,
0x000000fe,
0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
0x000000fe
};
@@ -30,6 +31,8 @@ static const uint32_t load_off_mask_allowed[] = {
0x000000ff,
0x000000ff,
0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
0x000000ff
};
@@ -137,7 +140,8 @@ static const struct load_map load_dst[] = {
* Allowed LOAD destinations for each SEC Era.
* Values represent the number of entries from load_dst[] that are supported.
*/
-static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+static const unsigned int load_dst_sz[] = { 31, 34, 34, 40, 40,
+ 40, 40, 40, 40, 40};
static inline int
load_check_len_offset(int pos, uint32_t length, uint32_t offset)
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
index 5b28cbabb..cca70f7e0 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MATH_CMD_H__
@@ -29,7 +28,8 @@ static const uint32_t math_op1[][2] = {
* Allowed MATH op1 sources for each SEC Era.
* Values represent the number of entries from math_op1[] that are supported.
*/
-static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+static const unsigned int math_op1_sz[] = {10, 10, 12, 12, 12, 12,
+ 12, 12, 12, 12};
static const uint32_t math_op2[][2] = {
/*1*/ { MATH0, MATH_SRC1_REG0 },
@@ -51,7 +51,8 @@ static const uint32_t math_op2[][2] = {
* Allowed MATH op2 sources for each SEC Era.
* Values represent the number of entries from math_op2[] that are supported.
*/
-static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+static const unsigned int math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13,
+ 13, 13};
static const uint32_t math_result[][2] = {
/*1*/ { MATH0, MATH_DEST_REG0 },
@@ -71,7 +72,8 @@ static const uint32_t math_result[][2] = {
* Values represent the number of entries from math_result[] that are
* supported.
*/
-static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+static const unsigned int math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10,
+ 10, 10};
static inline int
rta_math(struct program *program, uint64_t operand1,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
index a7ff7c675..d2151c6dd 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_MOVE_CMD_H__
@@ -47,7 +46,8 @@ static const uint32_t move_src_table[][2] = {
* Values represent the number of entries from move_src_table[] that are
* supported.
*/
-static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+static const unsigned int move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14,
+ 14, 14};
static const uint32_t move_dst_table[][2] = {
/*1*/ { CONTEXT1, MOVE_DEST_CLASS1CTX },
@@ -72,7 +72,7 @@ static const uint32_t move_dst_table[][2] = {
* supported.
*/
static const
-unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+unsigned int move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15, 15, 15};
static inline int
set_move_offset(struct program *program __maybe_unused,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
index 94f775e2e..85092d961 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_NFIFO_CMD_H__
@@ -24,7 +23,7 @@ static const uint32_t nfifo_src[][2] = {
* Allowed NFIFO LOAD sources for each SEC Era.
* Values represent the number of entries from nfifo_src[] that are supported.
*/
-static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+static const unsigned int nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7, 7, 7};
static const uint32_t nfifo_data[][2] = {
{ MSG, NFIFOENTRY_DTYPE_MSG },
@@ -77,7 +76,8 @@ static const uint32_t nfifo_flags[][2] = {
* Allowed NFIFO LOAD flags for each SEC Era.
* Values represent the number of entries from nfifo_flags[] that are supported.
*/
-static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+static const unsigned int nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14,
+ 14, 14, 14, 14};
static const uint32_t nfifo_pad_flags[][2] = {
{ BM, NFIFOENTRY_BM },
@@ -90,7 +90,7 @@ static const uint32_t nfifo_pad_flags[][2] = {
* Values represent the number of entries from nfifo_pad_flags[] that are
* supported.
*/
-static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+static const unsigned int nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3, 3, 3};
static inline int
rta_nfifo_load(struct program *program, uint32_t src,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
index b85760e5b..9a1788c0f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/operation_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_OPERATION_CMD_H__
@@ -229,7 +228,8 @@ static const struct alg_aai_map alg_table[] = {
* Allowed OPERATION algorithms for each SEC Era.
* Values represent the number of entries from alg_table[] that are supported.
*/
-static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+static const unsigned int alg_table_sz[] = {14, 15, 15, 15, 17, 17,
+ 11, 17, 17, 17};
static inline int
rta_operation(struct program *program, uint32_t cipher_algo,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
index 82581edf5..e9f20703f 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/protocol_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016, 2019 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -326,6 +326,10 @@ static const uint32_t proto_blob_flags[] = {
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
};
@@ -604,7 +608,8 @@ static const struct proto_map proto_table[] = {
* Allowed OPERATION protocols for each SEC Era.
* Values represent the number of entries from proto_table[] that are supported.
*/
-static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37, 40};
+static const unsigned int proto_table_sz[] = {21, 29, 29, 29, 29, 35, 37,
+ 40, 40, 40};
static inline int
rta_proto_operation(struct program *program, uint32_t optype,
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
index 5357187f8..d8cdebd20 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/sec_run_time_asm.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEC_RUN_TIME_ASM_H__
@@ -36,7 +35,9 @@ enum rta_sec_era {
RTA_SEC_ERA_6,
RTA_SEC_ERA_7,
RTA_SEC_ERA_8,
- MAX_SEC_ERA = RTA_SEC_ERA_8
+ RTA_SEC_ERA_9,
+ RTA_SEC_ERA_10,
+ MAX_SEC_ERA = RTA_SEC_ERA_10
};
/**
@@ -605,10 +606,14 @@ __rta_inline_data(struct program *program, uint64_t data,
static inline unsigned int
rta_desc_len(uint32_t *buffer)
{
- if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+ if ((*buffer & CMD_MASK) == CMD_DESC_HDR) {
return *buffer & HDR_DESCLEN_MASK;
- else
- return *buffer & HDR_DESCLEN_SHR_MASK;
+ } else {
+ if (rta_sec_era >= RTA_SEC_ERA_10)
+ return *buffer & HDR_DESCLEN_SHR_MASK_ERA10;
+ else
+ return *buffer & HDR_DESCLEN_SHR_MASK;
+ }
}
static inline unsigned int
@@ -701,9 +706,15 @@ rta_patch_header(struct program *program, int line, unsigned int new_ref)
return -EINVAL;
opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+ if (rta_sec_era >= RTA_SEC_ERA_10) {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK_ERA10;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) &
+ HDR_START_IDX_MASK_ERA10;
+ } else {
+ opcode &= (uint32_t)~HDR_START_IDX_MASK;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ }
- opcode &= (uint32_t)~HDR_START_IDX_MASK;
- opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
program->buffer[line] = bswap ? swab32(opcode) : opcode;
return 0;
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
index ceb6a8719..5e6af0c83 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
@@ -19,6 +18,8 @@ static const uint32_t seq_in_ptr_flags[] = {
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
};
@@ -31,6 +32,8 @@ static const uint32_t seq_out_ptr_flags[] = {
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
SGF | PRE | EXT | RTO | RST | EWS
};
diff --git a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
index 8b58e544d..5de47d053 100644
--- a/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
+++ b/drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h
@@ -1,8 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- *
+ * Copyright 2016,2019 NXP
*/
#ifndef __RTA_STORE_CMD_H__
@@ -56,7 +55,8 @@ static const uint32_t store_src_table[][2] = {
* supported.
*/
static const unsigned int store_src_table_sz[] = {29, 31, 33, 33,
- 33, 33, 35, 35};
+ 33, 33, 35, 35,
+ 35, 35};
static inline int
rta_store(struct program *program, uint64_t src,
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 07/24] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (5 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 06/24] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 08/24] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
` (17 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal, Vakul Garg
For SEC era 8, NULL auth using the protocol command does not add
the 4 bytes of null MAC-I and treats NULL integrity as no integrity,
which is not correct.
Hence this converts the particular case of null integrity with 12-bit SN
on SEC ERA 8 from the protocol offload to the non-protocol offload path.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 32 +++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 4bf1d69f9..0b074ec80 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -43,6 +43,14 @@
#define PDCP_C_PLANE_SN_MASK 0x1F000000
#define PDCP_C_PLANE_SN_MASK_BE 0x0000001F
+/**
+ * PDCP_12BIT_SN_MASK - This mask is used in the PDCP descriptors for
+ * extracting the sequence number (SN) from the
+ * PDCP User Plane header.
+ */
+#define PDCP_12BIT_SN_MASK 0xFF0F0000
+#define PDCP_12BIT_SN_MASK_BE 0x00000FFF
+
/**
* PDCP_U_PLANE_15BIT_SN_MASK - This mask is used in the PDCP descriptors for
* extracting the sequence number (SN) from the
@@ -776,8 +784,10 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) ||
- (rta_sec_era == RTA_SEC_ERA_10)) {
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18 &&
+ !(rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata->algtype == 0))
+ || (rta_sec_era == RTA_SEC_ERA_10)) {
if (sn_size == PDCP_SN_SIZE_5)
PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_CTRL_MIXED,
(uint16_t)cipherdata->algtype << 8);
@@ -800,12 +810,16 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
PDCP_U_PLANE_18BIT_SN_MASK_BE;
break;
- case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_12:
+ offset = 6;
+ length = 2;
+ sn_mask = (swap == false) ? PDCP_12BIT_SN_MASK :
+ PDCP_12BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
case PDCP_SN_SIZE_15:
pr_err("Invalid sn_size for %s\n", __func__);
return -ENOTSUP;
-
}
SEQLOAD(p, MATH0, offset, length, 0);
@@ -2796,6 +2810,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (rta_sec_era == RTA_SEC_ERA_8 &&
+ authdata && authdata->algtype == 0){
+ err = pdcp_insert_uplane_with_int_op(p, swap,
+ cipherdata, authdata,
+ sn_size, era_2_sw_hfn_ovrd,
+ OP_TYPE_ENCAP_PROTOCOL);
+ if (err)
+ return err;
+ break;
+ }
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 08/24] crypto/dpaa_sec: support scatter gather for PDCP
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (6 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 07/24] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 09/24] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
` (16 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
This patch adds support for chained input or output
mbufs for PDCP operations.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 122 +++++++++++++++++++++++++++--
1 file changed, 116 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 3fc4a606f..291cba28d 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -189,12 +189,18 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
if (ctx->op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (ctx->op->sym->m_dst == NULL) ?
+ ctx->op->sym->m_src : ctx->op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- ctx->op->sym->m_src->pkt_len = len;
- ctx->op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op;
dpaa_sec_op_ending(ctx);
@@ -260,6 +266,7 @@ static inline int is_auth_cipher(dpaa_sec_session *ses)
{
return ((ses->cipher_alg != RTE_CRYPTO_CIPHER_NULL) &&
(ses->auth_alg != RTE_CRYPTO_AUTH_NULL) &&
+ (ses->proto_alg != RTE_SECURITY_PROTOCOL_PDCP) &&
(ses->proto_alg != RTE_SECURITY_PROTOCOL_IPSEC));
}
@@ -802,12 +809,18 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
struct qm_sg_entry *sg_out;
uint32_t len;
+ struct rte_mbuf *mbuf = (op->sym->m_dst == NULL) ?
+ op->sym->m_src : op->sym->m_dst;
sg_out = &job->sg[0];
hw_sg_to_cpu(sg_out);
len = sg_out->length;
- op->sym->m_src->pkt_len = len;
- op->sym->m_src->data_len = len;
+ mbuf->pkt_len = len;
+ while (mbuf->next != NULL) {
+ len -= mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ mbuf->data_len = len;
}
if (!ctx->fd_status) {
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -1636,6 +1649,99 @@ build_proto(struct rte_crypto_op *op, dpaa_sec_session *ses)
return cf;
}
+static inline struct dpaa_sec_job *
+build_proto_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
+{
+ struct rte_crypto_sym_op *sym = op->sym;
+ struct dpaa_sec_job *cf;
+ struct dpaa_sec_op_ctx *ctx;
+ struct qm_sg_entry *sg, *out_sg, *in_sg;
+ struct rte_mbuf *mbuf;
+ uint8_t req_segs;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym->m_dst)
+ mbuf = sym->m_dst;
+ else
+ mbuf = sym->m_src;
+
+ req_segs = mbuf->nb_segs + sym->m_src->nb_segs + 2;
+ if (req_segs > MAX_SG_ENTRIES) {
+ DPAA_SEC_DP_ERR("Proto: Max sec segs supported is %d",
+ MAX_SG_ENTRIES);
+ return NULL;
+ }
+
+ ctx = dpaa_sec_alloc_ctx(ses);
+ if (!ctx)
+ return NULL;
+ cf = &ctx->job;
+ ctx->op = op;
+ /* output */
+ out_sg = &cf->sg[0];
+ out_sg->extension = 1;
+ qm_sg_entry_set64(out_sg, dpaa_mem_vtop(&cf->sg[2]));
+
+ /* 1st seg */
+ sg = &cf->sg[2];
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+
+ /* Successive segs */
+ while (mbuf->next) {
+ sg->length = mbuf->data_len;
+ out_len += sg->length;
+ mbuf = mbuf->next;
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->offset = 0;
+ }
+ sg->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sg->length;
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ out_sg->length = out_len;
+ cpu_to_hw_sg(out_sg);
+
+ /* input */
+ mbuf = sym->m_src;
+ in_sg = &cf->sg[1];
+ in_sg->extension = 1;
+ in_sg->final = 1;
+ in_len = mbuf->data_len;
+
+ sg++;
+ qm_sg_entry_set64(in_sg, dpaa_mem_vtop(sg));
+
+ /* 1st seg */
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+
+ /* Successive segs */
+ mbuf = mbuf->next;
+ while (mbuf) {
+ cpu_to_hw_sg(sg);
+ sg++;
+ qm_sg_entry_set64(sg, rte_pktmbuf_mtophys(mbuf));
+ sg->length = mbuf->data_len;
+ sg->offset = 0;
+ in_len += sg->length;
+ mbuf = mbuf->next;
+ }
+ sg->final = 1;
+ cpu_to_hw_sg(sg);
+
+ in_sg->length = in_len;
+ cpu_to_hw_sg(in_sg);
+
+ sym->m_src->packet_type &= ~RTE_PTYPE_L4_MASK;
+
+ return cf;
+}
+
static uint16_t
dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
uint16_t nb_ops)
@@ -1707,7 +1813,9 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
auth_only_len = op->sym->auth.data.length -
op->sym->cipher.data.length;
- if (rte_pktmbuf_is_contiguous(op->sym->m_src)) {
+ if (rte_pktmbuf_is_contiguous(op->sym->m_src) &&
+ ((op->sym->m_dst == NULL) ||
+ rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
if (is_proto_ipsec(ses)) {
cf = build_proto(op, ses);
} else if (is_proto_pdcp(ses)) {
@@ -1728,7 +1836,9 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
goto send_pkts;
}
} else {
- if (is_auth_only(ses)) {
+ if (is_proto_pdcp(ses) || is_proto_ipsec(ses)) {
+ cf = build_proto_sg(op, ses);
+ } else if (is_auth_only(ses)) {
cf = build_auth_only_sg(op, ses);
} else if (is_cipher_only(ses)) {
cf = build_cipher_only_sg(op, ses);
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 09/24] crypto/dpaa2_sec: support scatter gather for proto offloads
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (7 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 08/24] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 10/24] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
` (15 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Akhil Goyal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds support for chained input or output
mbufs for PDCP and IPsec protocol offload cases.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 134 +++++++++++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
2 files changed, 133 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 9108b3c43..b8712af24 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -65,6 +65,121 @@ static uint8_t cryptodev_driver_id;
int dpaa2_logtype_sec;
+static inline int
+build_proto_compound_sg_fd(dpaa2_sec_session *sess,
+ struct rte_crypto_op *op,
+ struct qbman_fd *fd, uint16_t bpid)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct ctxt_priv *priv = sess->ctxt;
+ struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+ struct sec_flow_context *flc;
+ struct rte_mbuf *mbuf;
+ uint32_t in_len = 0, out_len = 0;
+
+ if (sym_op->m_dst)
+ mbuf = sym_op->m_dst;
+ else
+ mbuf = sym_op->m_src;
+
+ /* first FLE entry used to store mbuf and session ctxt */
+ fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (unlikely(!fle)) {
+ DPAA2_SEC_DP_ERR("Proto:SG: Memory alloc failed for SGE");
+ return -1;
+ }
+ memset(fle, 0, FLE_SG_MEM_SIZE);
+ DPAA2_SET_FLE_ADDR(fle, (size_t)op);
+ DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+ /* Save the shared descriptor */
+ flc = &priv->flc_desc[0].flc;
+
+ op_fle = fle + 1;
+ ip_fle = fle + 2;
+ sge = fle + 3;
+
+ if (likely(bpid < MAX_BPID)) {
+ DPAA2_SET_FD_BPID(fd, bpid);
+ DPAA2_SET_FLE_BPID(op_fle, bpid);
+ DPAA2_SET_FLE_BPID(ip_fle, bpid);
+ } else {
+ DPAA2_SET_FD_IVP(fd);
+ DPAA2_SET_FLE_IVP(op_fle);
+ DPAA2_SET_FLE_IVP(ip_fle);
+ }
+
+ /* Configure FD as a FRAME LIST */
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
+ DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+ /* Configure Output FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_SG_EXT(op_fle);
+ DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+ /* Configure Output SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ /* o/p segs */
+ while (mbuf->next) {
+ sge->length = mbuf->data_len;
+ out_len += sge->length;
+ sge++;
+ mbuf = mbuf->next;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ }
+ /* using buf_len for last buf - so that extra data can be added */
+ sge->length = mbuf->buf_len - mbuf->data_off;
+ out_len += sge->length;
+
+ DPAA2_SET_FLE_FIN(sge);
+ op_fle->length = out_len;
+
+ sge++;
+ mbuf = sym_op->m_src;
+
+ /* Configure Input FLE with Scatter/Gather Entry */
+ DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(ip_fle);
+ DPAA2_SET_FLE_FIN(ip_fle);
+
+ /* Configure input SGE for Encap/Decap */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+
+ mbuf = mbuf->next;
+ /* i/p segs */
+ while (mbuf) {
+ sge++;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ sge->length = mbuf->data_len;
+ in_len += sge->length;
+ mbuf = mbuf->next;
+ }
+ ip_fle->length = in_len;
+ DPAA2_SET_FLE_FIN(sge);
+
+ /* In case of PDCP, per packet HFN is stored in
+ * mbuf priv after sym_op.
+ */
+ if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+ uint32_t hfn_ovd = *((uint8_t *)op + sess->pdcp.hfn_ovd_offset);
+ /* enable HFN override */
+ DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+ DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+ DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
+ }
+ DPAA2_SET_FD_LEN(fd, ip_fle->length);
+
+ return 0;
+}
+
static inline int
build_proto_compound_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -87,7 +202,7 @@ build_proto_compound_fd(dpaa2_sec_session *sess,
/* we are using the first FLE entry to store Mbuf */
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
- DPAA2_SEC_ERR("Memory alloc failed");
+ DPAA2_SEC_DP_ERR("Memory alloc failed");
return -1;
}
memset(fle, 0, FLE_POOL_BUF_SIZE);
@@ -1170,8 +1285,10 @@ build_sec_fd(struct rte_crypto_op *op,
else
return -1;
- /* Segmented buffer */
- if (unlikely(!rte_pktmbuf_is_contiguous(op->sym->m_src))) {
+ /* Any of the buffers is segmented */
+ if (!rte_pktmbuf_is_contiguous(op->sym->m_src) ||
+ ((op->sym->m_dst != NULL) &&
+ !rte_pktmbuf_is_contiguous(op->sym->m_dst))) {
switch (sess->ctxt_type) {
case DPAA2_SEC_CIPHER:
ret = build_cipher_sg_fd(sess, op, fd, bpid);
@@ -1185,6 +1302,10 @@ build_sec_fd(struct rte_crypto_op *op,
case DPAA2_SEC_CIPHER_HASH:
ret = build_authenc_sg_fd(sess, op, fd, bpid);
break;
+ case DPAA2_SEC_IPSEC:
+ case DPAA2_SEC_PDCP:
+ ret = build_proto_compound_sg_fd(sess, op, fd, bpid);
+ break;
case DPAA2_SEC_HASH_CIPHER:
default:
DPAA2_SEC_ERR("error: Unsupported session");
@@ -1372,9 +1493,14 @@ sec_fd_to_mbuf(const struct qbman_fd *fd)
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
dpaa2_sec_session *sess = (dpaa2_sec_session *)
get_sec_session_private_data(op->sym->sec_session);
- if (sess->ctxt_type == DPAA2_SEC_IPSEC) {
+ if (sess->ctxt_type == DPAA2_SEC_IPSEC ||
+ sess->ctxt_type == DPAA2_SEC_PDCP) {
uint16_t len = DPAA2_GET_FD_LEN(fd);
dst->pkt_len = len;
+ while (dst->next != NULL) {
+ len -= dst->data_len;
+ dst = dst->next;
+ }
dst->data_len = len;
}
}
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
index 8a9904426..c2e11f951 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016 NXP
+ * Copyright 2016,2019 NXP
*
*/
@@ -37,6 +37,8 @@ extern int dpaa2_logtype_sec;
DPAA2_SEC_DP_LOG(INFO, fmt, ## args)
#define DPAA2_SEC_DP_WARN(fmt, args...) \
DPAA2_SEC_DP_LOG(WARNING, fmt, ## args)
+#define DPAA2_SEC_DP_ERR(fmt, args...) \
+ DPAA2_SEC_DP_LOG(ERR, fmt, ## args)
#endif /* _DPAA2_SEC_LOGS_H_ */
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 10/24] crypto/dpaa2_sec: disable 'write-safe' for PDCP
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
PDCP descriptors in some cases internally use commands that overwrite
memory with extra zeros if write-safe is kept enabled. This breaks the
correct functional behavior of the PDCP APIs, which then often produce
incorrect crypto output. Therefore we disable the 'write-safe' bit in
the FLC for PDCP cases. If a performance drop is observed, write-safe
will be enabled selectively through a separate patch.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b8712af24..b020041c8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -2931,8 +2931,12 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
- /* Set EWS bit i.e. enable write-safe */
- DPAA2_SET_FLC_EWS(flc);
+ /* TODO - check the perf impact or
+ * align as per descriptor type
+ * Set EWS bit i.e. enable write-safe
+ * DPAA2_SET_FLC_EWS(flc);
+ */
+
/* Set BS = 1 i.e reuse input buffers as output buffers */
DPAA2_SET_FLC_REUSE_BS(flc);
/* Set FF = 10; reuse input buffers if they provide sufficient space */
--
2.17.1
* [dpdk-dev] [PATCH v3 11/24] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Akhil Goyal
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the following algo combinations (ENC-AUTH):
- AES-SNOW
- SNOW-AES
- AES-ZUC
- ZUC-AES
- SNOW-ZUC
- ZUC-SNOW
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 211 ++++++++++++++++--------
1 file changed, 140 insertions(+), 71 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 0b074ec80..764daf21c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -1021,21 +1021,21 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0)) {
- SEQINPTR(p, 0, 1, RTO);
+ SEQINPTR(p, 0, length, RTO);
} else {
SEQINPTR(p, 0, 5, RTO);
SEQFIFOLOAD(p, SKIP, 4, 0);
}
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
if (rta_sec_era > RTA_SEC_ERA_2) {
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
@@ -1088,7 +1088,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
if (rta_sec_era <= RTA_SEC_ERA_3)
LOAD(p, CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1111,7 +1111,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
SET_LABEL(p, local_offset);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, 0, RTO);
if (rta_sec_era == RTA_SEC_ERA_2 && era_2_sw_hfn_ovrd) {
@@ -1119,7 +1119,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
MATHB(p, SEQINSZ, ADD, ONE, SEQINSZ, 4, 0);
}
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL,
@@ -1130,14 +1130,14 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
if (rta_sec_era > RTA_SEC_ERA_2 ||
(rta_sec_era == RTA_SEC_ERA_2 &&
era_2_sw_hfn_ovrd == 0))
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
PATCH_MOVE(p, seqin_ptr_read, local_offset);
PATCH_MOVE(p, seqin_ptr_write, local_offset);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
if (rta_sec_era >= RTA_SEC_ERA_5)
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1147,7 +1147,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
else
MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, 4, 0);
- MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
/*
* TODO: To be changed when proper support is added in RTA (can't load a
* command that is also written by RTA (or patch it for that matter).
@@ -1237,7 +1237,7 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
DIR_DEC);
/* Read the # of bytes written in the output buffer + 1 (HDR) */
- MATHB(p, VSEQOUTSZ, ADD, ONE, VSEQINSZ, 4, 0);
+ MATHI(p, VSEQOUTSZ, ADD, length, VSEQINSZ, 4, IMMED2);
if (rta_sec_era <= RTA_SEC_ERA_3)
MOVE(p, MATH3, 0, IFIFOAB1, 0, 8, IMMED);
@@ -1340,24 +1340,24 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
}
if (dir == OP_TYPE_ENCAP_PROTOCOL)
- MATHB(p, SEQINSZ, SUB, ONE, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
SEQSTORE(p, MATH0, offset, length, 0);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2, 4,
- IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3, 4,
- IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2, 4,
+ IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3, 4,
+ IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1365,7 +1365,7 @@ pdcp_insert_cplane_aes_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1485,14 +1485,14 @@ pdcp_insert_cplane_snow_zuc_op(struct program *p,
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1606,14 +1606,14 @@ pdcp_insert_cplane_aes_zuc_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
- MOVE(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
- MOVE(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL)
MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1729,19 +1729,19 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
SET_LABEL(p, keyjump);
SEQLOAD(p, MATH0, offset, length, 0);
JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
- MOVE(p, MATH0, 7, IFIFOAB2, 0, 1, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 8, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
- MOVE(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
- MOVE(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, IMMED);
if (swap == false) {
- MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK), MATH2,
- 4, IMMED2);
- MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK), MATH3,
- 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK), MATH2,
+ 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK), MATH3,
+ 4, IMMED2);
} else {
MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE), MATH2,
4, IMMED2);
@@ -1749,7 +1749,7 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
4, IMMED2);
}
MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
- MOVE(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
@@ -1798,13 +1798,13 @@ pdcp_insert_cplane_zuc_snow_op(struct program *p,
LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
/* Put ICV to M0 before sending it to C2 for comparison. */
- MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
NFIFOENTRY_DEST_CLASS2 |
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, IMMED);
}
PATCH_JUMP(p, pkeyjump, keyjump);
@@ -1871,14 +1871,14 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
- MOVE(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MOVEB(p, DESCBUF, 4, MATH2, 0, 0x08, WAITCOMP | IMMED);
MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
SEQSTORE(p, MATH0, offset, length, 0);
if (dir == OP_TYPE_ENCAP_PROTOCOL) {
KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
authdata->keylen, INLINE_KEY(authdata));
- MOVE(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
- MOVE(p, MATH0, 7, IFIFOAB1, 0, 1, IMMED);
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
@@ -1889,7 +1889,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
ICV_CHECK_DISABLE,
DIR_DEC);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1901,7 +1901,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
@@ -1911,12 +1911,12 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
DIR_ENC);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, SKIP, 1, 0);
+ SEQFIFOLOAD(p, SKIP, length, 0);
SEQFIFOLOAD(p, MSG1, 0, VLF);
- MOVE(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
} else {
- MOVE(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
MOVE(p, CONTEXT1, 0, CONTEXT2, 0, 8, IMMED);
@@ -1937,7 +1937,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
- MOVE(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
@@ -1969,7 +1969,7 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
NFIFOENTRY_DTYPE_ICV |
NFIFOENTRY_LC1 |
NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
- MOVE(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
}
return 0;
@@ -2080,6 +2080,8 @@ insert_hfn_ov_op(struct program *p,
{
uint32_t imm = PDCP_DPOVRD_HFN_OV_EN;
uint16_t hfn_pdb_offset;
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
if (rta_sec_era == RTA_SEC_ERA_2 && !era_2_sw_hfn_ovrd)
return 0;
@@ -2115,13 +2117,10 @@ insert_hfn_ov_op(struct program *p,
SEQSTORE(p, MATH0, 4, 4, 0);
}
- if (rta_sec_era >= RTA_SEC_ERA_8)
- JUMP(p, 6, LOCAL_JUMP, ALL_TRUE, MATH_Z);
- else
- JUMP(p, 5, LOCAL_JUMP, ALL_TRUE, MATH_Z);
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, MATH_Z);
if (rta_sec_era > RTA_SEC_ERA_2)
- MATHB(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
+ MATHI(p, DPOVRD, LSHIFT, shift, MATH0, 4, IMMED2);
else
MATHB(p, MATH0, LSHIFT, shift, MATH0, 4, IMMED2);
@@ -2136,6 +2135,8 @@ insert_hfn_ov_op(struct program *p,
*/
MATHB(p, DPOVRD, AND, ZERO, DPOVRD, 4, STL);
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
return 0;
}
@@ -2271,14 +2272,45 @@ cnstr_pdcp_c_plane_pdb(struct program *p,
/*
* PDCP UPlane PDB creation function
*/
-static inline int
+static inline enum pdb_type_e
cnstr_pdcp_u_plane_pdb(struct program *p,
enum pdcp_sn_size sn_size,
uint32_t hfn, unsigned short bearer,
unsigned short direction,
- uint32_t hfn_threshold)
+ uint32_t hfn_threshold,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata)
{
struct pdcp_pdb pdb;
+ enum pdb_type_e pdb_type = PDCP_PDB_TYPE_FULL_PDB;
+ enum pdb_type_e
+ pdb_mask[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
+ { /* NULL */
+ PDCP_PDB_TYPE_NO_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ { /* SNOW f8 */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_FULL_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* AES CTR */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_FULL_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_REDUCED_PDB /* ZUC-I */
+ },
+ { /* ZUC-E */
+ PDCP_PDB_TYPE_FULL_PDB, /* NULL */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* SNOW f9 */
+ PDCP_PDB_TYPE_REDUCED_PDB, /* AES CMAC */
+ PDCP_PDB_TYPE_FULL_PDB /* ZUC-I */
+ },
+ };
+
/* Read options from user */
/* Depending on sequence number length, the HFN and HFN threshold
* have different lengths.
@@ -2312,6 +2344,12 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
pdb.hfn_res = hfn << PDCP_U_PLANE_PDB_18BIT_SN_HFN_SHIFT;
pdb.hfn_thr_res =
hfn_threshold<<PDCP_U_PLANE_PDB_18BIT_SN_HFN_THR_SHIFT;
+
+ if (rta_sec_era <= RTA_SEC_ERA_8) {
+ if (cipherdata && authdata)
+ pdb_type = pdb_mask[cipherdata->algtype]
+ [authdata->algtype];
+ }
break;
default:
@@ -2323,13 +2361,29 @@ cnstr_pdcp_u_plane_pdb(struct program *p,
((bearer << PDCP_U_PLANE_PDB_BEARER_SHIFT) |
(direction << PDCP_U_PLANE_PDB_DIR_SHIFT));
- /* copy PDB in descriptor*/
- __rta_out32(p, pdb.opt_res.opt);
- __rta_out32(p, pdb.hfn_res);
- __rta_out32(p, pdb.bearer_dir_res);
- __rta_out32(p, pdb.hfn_thr_res);
+ switch (pdb_type) {
+ case PDCP_PDB_TYPE_NO_PDB:
+ break;
+
+ case PDCP_PDB_TYPE_REDUCED_PDB:
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ break;
- return 0;
+ case PDCP_PDB_TYPE_FULL_PDB:
+ /* copy PDB in descriptor*/
+ __rta_out32(p, pdb.opt_res.opt);
+ __rta_out32(p, pdb.hfn_res);
+ __rta_out32(p, pdb.bearer_dir_res);
+ __rta_out32(p, pdb.hfn_thr_res);
+
+ break;
+
+ default:
+ return PDCP_PDB_TYPE_INVALID;
+ }
+
+ return pdb_type;
}
/**
* cnstr_shdsc_pdcp_c_plane_encap - Function for creating a PDCP Control Plane
@@ -2736,6 +2790,7 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2785,15 +2840,16 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
SHR_HDR(p, desc_share[cipherdata->algtype][authdata->algtype], 0, 0);
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn,
+ bearer, direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -2820,6 +2876,12 @@ cnstr_shdsc_pdcp_u_plane_encap(uint32_t *descbuf,
return err;
break;
}
+
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype) {
KEY(p, KEY2, authdata->key_enc_flags,
@@ -2931,6 +2993,7 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
struct program prg;
struct program *p = &prg;
int err;
+ enum pdb_type_e pdb_type;
static enum rta_share_type
desc_share[PDCP_CIPHER_TYPE_INVALID][PDCP_AUTH_TYPE_INVALID] = {
{ /* NULL */
@@ -2981,15 +3044,16 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
else
SHR_HDR(p, SHR_ALWAYS, 0, 0);
- if (cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer, direction,
- hfn_threshold)) {
+ pdb_type = cnstr_pdcp_u_plane_pdb(p, sn_size, hfn, bearer,
+ direction, hfn_threshold,
+ cipherdata, authdata);
+ if (pdb_type == PDCP_PDB_TYPE_INVALID) {
pr_err("Error creating PDCP UPlane PDB\n");
return -EINVAL;
}
SET_LABEL(p, pdb_end);
- err = insert_hfn_ov_op(p, sn_size, PDCP_PDB_TYPE_FULL_PDB,
- era_2_sw_hfn_ovrd);
+ err = insert_hfn_ov_op(p, sn_size, pdb_type, era_2_sw_hfn_ovrd);
if (err)
return err;
@@ -3006,6 +3070,11 @@ cnstr_shdsc_pdcp_u_plane_decap(uint32_t *descbuf,
case PDCP_CIPHER_TYPE_AES:
case PDCP_CIPHER_TYPE_SNOW:
case PDCP_CIPHER_TYPE_NULL:
+ if (pdb_type != PDCP_PDB_TYPE_FULL_PDB) {
+ pr_err("PDB type must be FULL for PROTO desc\n");
+ return -EINVAL;
+ }
+
/* Insert auth key if requested */
if (authdata && authdata->algtype)
KEY(p, KEY2, authdata->key_enc_flags,
--
2.17.1
* [dpdk-dev] [PATCH v3 12/24] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the AES-AES PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit
sequence numbers.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 151 +++++++++++++++++++++++-
1 file changed, 150 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 764daf21c..a476b8bde 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,155 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_aes_aes_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ if ((rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18)) {
+ /* Insert Auth Key */
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ /* Insert Cipher Key */
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ PROTOCOL(p, dir, OP_PCLID_LTE_PDCP_USER_RN,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+ return 0;
+ }
+
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+
+ default:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 0x08, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ MOVEB(p, MATH2, 0, IFIFOAB1, 0, 0x08, IMMED);
+ MOVEB(p, MATH0, offset, IFIFOAB1, 0, length, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, VSEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+ MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ SEQINPTR(p, 0, PDCP_NULL_MAX_FRAME_LEN, RTO);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_ENC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ SEQFIFOLOAD(p, SKIP, length, 0);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF);
+ MOVEB(p, MATH3, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ MOVEB(p, MATH2, 0, CONTEXT1, 16, 8, IMMED);
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, IMMED);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CTR,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ DIR_DEC);
+
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+
+ LOAD(p, CLRW_RESET_CLS1_CHA |
+ CLRW_CLR_C1KEY |
+ CLRW_CLR_C1CTX |
+ CLRW_CLR_C1ICV |
+ CLRW_CLR_C1DATAS |
+ CLRW_CLR_C1MODE,
+ CLRW, 0, 4, IMMED);
+
+ KEY(p, KEY1, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQINPTR(p, 0, 0, SOP);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES,
+ OP_ALG_AAI_CMAC,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_ENABLE,
+ DIR_DEC);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 8, IMMED);
+
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS1 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC1 |
+ NFIFOENTRY_FC1 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH3, 0, ALTSOURCE, 0, 4, IMMED);
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_cplane_acc_op(struct program *p,
bool swap __maybe_unused,
@@ -2721,7 +2870,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
{ /* AES CTR */
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_aes_snow_op, /* SNOW f9 */
- pdcp_insert_cplane_acc_op, /* AES CMAC */
+ pdcp_insert_uplane_aes_aes_op, /* AES CMAC */
pdcp_insert_cplane_aes_zuc_op /* ZUC-I */
},
{ /* ZUC-E */
--
2.17.1
* [dpdk-dev] [PATCH v3 13/24] crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the ZUC-ZUC PDCP enc-auth case for
devices which do not support the PROTOCOL command for 18-bit
sequence numbers.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 126 +++++++++++++++++++++++-
1 file changed, 125 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index a476b8bde..9fb3d4993 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,130 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_zuc_zuc_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ LABEL(keyjump);
+ REFERENCE(pkeyjump);
+
+ if (rta_sec_era < RTA_SEC_ERA_5) {
+ pr_err("Invalid era for selected algorithm\n");
+ return -ENOTSUP;
+ }
+
+ pkeyjump = JUMP(p, keyjump, LOCAL_JUMP, ALL_TRUE, SHRD | SELF | BOTH);
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ SET_LABEL(p, keyjump);
+ PATCH_JUMP(p, pkeyjump, keyjump);
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH2, 8, 0);
+ MOVEB(p, MATH2, 0, CONTEXT1, 0, 8, IMMED);
+
+ MOVEB(p, MATH2, 0, CONTEXT2, 0, 8, WAITCOMP | IMMED);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+ SEQSTORE(p, MATH0, offset, length, 0);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ } else {
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | FLUSH1);
+ }
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_ENC);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ /* Save ICV */
+ MOVEB(p, OFIFO, 0, MATH0, 0, 4, IMMED);
+
+ LOAD(p, NFIFOENTRY_STYPE_ALTSOURCE |
+ NFIFOENTRY_DEST_CLASS2 |
+ NFIFOENTRY_DTYPE_ICV |
+ NFIFOENTRY_LC2 | 4, NFIFO_SZL, 0, 4, IMMED);
+ MOVEB(p, MATH0, 0, ALTSOURCE, 0, 4, WAITCOMP | IMMED);
+ }
+
+ /* Reset ZUCA mode and done interrupt */
+ LOAD(p, CLRW_CLR_C2MODE, CLRW, 0, 4, IMMED);
+ LOAD(p, CIRQ_ZADI, ICTRL, 0, 4, IMMED);
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_aes_aes_op(struct program *p,
bool swap __maybe_unused,
@@ -2877,7 +3001,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
pdcp_insert_cplane_enc_only_op, /* NULL */
pdcp_insert_cplane_zuc_snow_op, /* SNOW f9 */
pdcp_insert_cplane_zuc_aes_op, /* AES CMAC */
- pdcp_insert_cplane_acc_op /* ZUC-I */
+ pdcp_insert_uplane_zuc_zuc_op /* ZUC-I */
},
};
int err;
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 14/24] crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (12 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 13/24] crypto/dpaa2_sec/hw: support zuc-zuc " Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 15/24] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
` (10 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch supports the SNOW-SNOW (enc-auth) 18-bit PDCP case
for devices which do not support the PROTOCOL command.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 133 +++++++++++++++++++++++-
1 file changed, 132 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
index 9fb3d4993..b514914ec 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/pdcp.h
@@ -927,6 +927,137 @@ pdcp_insert_cplane_enc_only_op(struct program *p,
return 0;
}
+static inline int
+pdcp_insert_uplane_snow_snow_op(struct program *p,
+ bool swap __maybe_unused,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int dir,
+ enum pdcp_sn_size sn_size,
+ unsigned char era_2_sw_hfn_ovrd __maybe_unused)
+{
+ uint32_t offset = 0, length = 0, sn_mask = 0;
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
+ INLINE_KEY(authdata));
+
+ if (rta_sec_era >= RTA_SEC_ERA_8 && sn_size != PDCP_SN_SIZE_18) {
+ int pclid;
+
+ if (sn_size == PDCP_SN_SIZE_5)
+ pclid = OP_PCLID_LTE_PDCP_CTRL_MIXED;
+ else
+ pclid = OP_PCLID_LTE_PDCP_USER_RN;
+
+ PROTOCOL(p, dir, pclid,
+ ((uint16_t)cipherdata->algtype << 8) |
+ (uint16_t)authdata->algtype);
+
+ return 0;
+ }
+ /* Non-proto is supported only for 5bit cplane and 18bit uplane */
+ switch (sn_size) {
+ case PDCP_SN_SIZE_5:
+ offset = 7;
+ length = 1;
+ sn_mask = (swap == false) ? PDCP_C_PLANE_SN_MASK :
+ PDCP_C_PLANE_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_18:
+ offset = 5;
+ length = 3;
+ sn_mask = (swap == false) ? PDCP_U_PLANE_18BIT_SN_MASK :
+ PDCP_U_PLANE_18BIT_SN_MASK_BE;
+ break;
+ case PDCP_SN_SIZE_7:
+ case PDCP_SN_SIZE_12:
+ case PDCP_SN_SIZE_15:
+ pr_err("Invalid sn_size for %s\n", __func__);
+ return -ENOTSUP;
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ MATHB(p, SEQINSZ, SUB, length, VSEQINSZ, 4, IMMED2);
+
+ SEQLOAD(p, MATH0, offset, length, 0);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+ MOVEB(p, MATH0, offset, IFIFOAB2, 0, length, IMMED);
+ MATHB(p, MATH0, AND, sn_mask, MATH1, 8, IFB | IMMED2);
+
+ SEQSTORE(p, MATH0, offset, length, 0);
+ MATHB(p, MATH1, SHLD, MATH1, MATH1, 8, 0);
+ MOVEB(p, DESCBUF, 8, MATH2, 0, 8, WAITCOMP | IMMED);
+ MATHB(p, MATH1, OR, MATH2, MATH1, 8, 0);
+ MOVEB(p, MATH1, 0, CONTEXT1, 0, 8, IMMED);
+ MOVEB(p, MATH1, 0, CONTEXT2, 0, 4, WAITCOMP | IMMED);
+ if (swap == false) {
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_BEARER_MASK),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_DIR_MASK),
+ MATH3, 4, IMMED2);
+ } else {
+ MATHB(p, MATH1, AND, lower_32_bits(PDCP_BEARER_MASK_BE),
+ MATH2, 4, IMMED2);
+ MATHB(p, MATH1, AND, upper_32_bits(PDCP_DIR_MASK_BE),
+ MATH3, 4, IMMED2);
+ }
+ MATHB(p, MATH3, SHLD, MATH3, MATH3, 8, 0);
+
+ MOVEB(p, MATH2, 4, OFIFO, 0, 12, IMMED);
+ MOVE(p, OFIFO, 0, CONTEXT2, 4, 12, IMMED);
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ MATHB(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ } else {
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2);
+ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQINSZ, 4, IMMED2);
+ }
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL)
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ else
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF | CONT);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9,
+ OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL,
+ dir == OP_TYPE_ENCAP_PROTOCOL ?
+ ICV_CHECK_DISABLE : ICV_CHECK_ENABLE,
+ DIR_DEC);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8,
+ OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE,
+ dir == OP_TYPE_ENCAP_PROTOCOL ? DIR_ENC : DIR_DEC);
+
+ if (dir == OP_TYPE_ENCAP_PROTOCOL) {
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST2);
+ MOVE(p, CONTEXT2, 0, IFIFOAB1, 0, 4, LAST1 | FLUSH1 | IMMED);
+ } else {
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST2);
+ SEQFIFOLOAD(p, MSG1, 4, LAST1 | FLUSH1);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CLASS1 | NOP | NIFP);
+
+ if (rta_sec_era >= RTA_SEC_ERA_6)
+ LOAD(p, 0, DCTRL, 0, LDLEN_RST_CHA_OFIFO_PTR, IMMED);
+
+ MOVE(p, OFIFO, 0, MATH0, 0, 4, WAITCOMP | IMMED);
+
+ NFIFOADD(p, IFIFO, ICV2, 4, LAST2);
+
+ if (rta_sec_era <= RTA_SEC_ERA_2) {
+ /* Shut off automatic Info FIFO entries */
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, MATH0, 0, IFIFOAB2, 0, 4, WAITCOMP | IMMED);
+ } else {
+ MOVE(p, MATH0, 0, IFIFO, 0, 4, WAITCOMP | IMMED);
+ }
+ }
+
+ return 0;
+}
+
static inline int
pdcp_insert_uplane_zuc_zuc_op(struct program *p,
bool swap __maybe_unused,
@@ -2987,7 +3118,7 @@ pdcp_insert_uplane_with_int_op(struct program *p,
},
{ /* SNOW f8 */
pdcp_insert_cplane_enc_only_op, /* NULL */
- pdcp_insert_cplane_acc_op, /* SNOW f9 */
+ pdcp_insert_uplane_snow_snow_op, /* SNOW f9 */
pdcp_insert_cplane_snow_aes_op, /* AES CMAC */
pdcp_insert_cplane_snow_zuc_op /* ZUC-I */
},
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 15/24] crypto/dpaa2_sec/hw: support snow-f8
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (13 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 14/24] crypto/dpaa2_sec/hw: support snow-snow " Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 16/24] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
` (9 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for the non-protocol offload mode
of the SNOW f8 algorithm.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index b6cfa8704..2a307a3e1 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -24,43 +24,33 @@
* @swap: must be true when core endianness doesn't match SEC endianness
* @cipherdata: pointer to block cipher transform definitions
* @dir: Cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @bearer: UEA2 bearer ID (5 bits)
- * @direction: UEA2 direction (1 bit)
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint32_t ct = count;
- uint8_t br = bearer;
- uint8_t dr = direction;
- uint32_t context[2] = {ct, (br << 27) | (dr << 26)};
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
-
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F8, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 16/24] crypto/dpaa2_sec/hw: support snow-f9
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (14 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 15/24] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 17/24] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
` (8 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg
From: Vakul Garg <vakul.garg@nxp.com>
Add support for SNOW f9 in non-PDCP protocol offload mode.
This essentially adds support for passing a pre-computed IV to SEC.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 51 +++++++++++++------------
1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 2a307a3e1..5e8e5e79c 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -64,48 +64,49 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
* @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: UEA2 count value (32 bits)
- * @fresh: UEA2 fresh value ID (32 bits)
- * @direction: UEA2 direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_snow_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir, uint32_t count,
- uint32_t fresh, uint8_t direction, uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t fr = fresh;
- uint64_t dr = direction;
- uint64_t context[2];
-
- context[0] = (ct << 32) | (dr << 26);
- context[1] = fr << 32;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab64(context[0]);
- context[1] = swab64(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY2, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_SNOW_F9, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT2, 0, 16, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS2 | LAST2);
- /* Save lower half of MAC out into a 32-bit sequence */
- SEQSTORE(p, CONTEXT2, 0, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 17/24] crypto/dpaa2_sec: support snow3g cipher/integrity
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (15 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 16/24] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 18/24] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
` (7 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Vakul Garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add a basic framework to use SNOW 3G f8- and f9-based
ciphering or integrity with the direct crypto APIs.
This patch does not support any combo usages yet.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 310 ++++++++++++++------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 46 ++-
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 30 ++
3 files changed, 301 insertions(+), 85 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b020041c8..451fa91fb 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -880,11 +880,26 @@ static inline int build_auth_sg_fd(
struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ int data_len, data_offset;
uint8_t *old_digest;
struct rte_mbuf *mbuf;
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->auth.data.length;
+ data_offset = sym_op->auth.data.offset;
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
mbuf = sym_op->m_src;
fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
RTE_CACHE_LINE_SIZE);
@@ -914,25 +929,51 @@ static inline int build_auth_sg_fd(
/* i/p fle */
DPAA2_SET_FLE_SG_EXT(ip_fle);
DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
- /* i/p 1st seg */
- DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset + mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->auth.data.offset;
+ ip_fle->length = data_len;
- /* i/p segs */
- mbuf = mbuf->next;
- while (mbuf) {
+ if (sess->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sess->iv.offset);
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sge->length = 12;
+ } else if (sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sge->length = 8;
+ } else {
+ sge->length = sess->iv.length;
+ }
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+ ip_fle->length += sge->length;
sge++;
- DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
- sge->length = mbuf->data_len;
- mbuf = mbuf->next;
}
- if (sess->dir == DIR_ENC) {
- /* Digest calculation case */
- sge->length -= sess->digest_length;
- ip_fle->length = sym_op->auth.data.length;
+ /* i/p 1st seg */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+
+ if (data_len <= (mbuf->data_len - data_offset)) {
+ sge->length = data_len;
+ data_len = 0;
} else {
+ sge->length = mbuf->data_len - data_offset;
+
+ /* remaining i/p segs */
+ while ((data_len = data_len - sge->length) &&
+ (mbuf = mbuf->next)) {
+ sge++;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
+ DPAA2_SET_FLE_OFFSET(sge, mbuf->data_off);
+ if (data_len > mbuf->data_len)
+ sge->length = mbuf->data_len;
+ else
+ sge->length = data_len;
+ }
+ }
+
+ if (sess->dir == DIR_DEC) {
/* Digest verification case */
sge++;
old_digest = (uint8_t *)(sge + 1);
@@ -940,8 +981,7 @@ static inline int build_auth_sg_fd(
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
sge->length = sess->digest_length;
- ip_fle->length = sym_op->auth.data.length +
- sess->digest_length;
+ ip_fle->length += sess->digest_length;
}
DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(ip_fle);
@@ -958,11 +998,26 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
struct qbman_fle *fle, *sge;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ int data_len, data_offset;
uint8_t *old_digest;
int retval;
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->auth.data.length;
+ data_offset = sym_op->auth.data.offset;
+
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+ sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
retval = rte_mempool_get(priv->fle_pool, (void **)(&fle));
if (retval) {
DPAA2_SEC_ERR("AUTH Memory alloc failed for SGE");
@@ -978,64 +1033,72 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
fle = fle + 1;
+ sge = fle + 2;
if (likely(bpid < MAX_BPID)) {
DPAA2_SET_FD_BPID(fd, bpid);
DPAA2_SET_FLE_BPID(fle, bpid);
DPAA2_SET_FLE_BPID(fle + 1, bpid);
+ DPAA2_SET_FLE_BPID(sge, bpid);
+ DPAA2_SET_FLE_BPID(sge + 1, bpid);
} else {
DPAA2_SET_FD_IVP(fd);
DPAA2_SET_FLE_IVP(fle);
DPAA2_SET_FLE_IVP((fle + 1));
+ DPAA2_SET_FLE_IVP(sge);
+ DPAA2_SET_FLE_IVP((sge + 1));
}
+
flc = &priv->flc_desc[DESC_INITFINAL].flc;
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+ DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+ DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
fle->length = sess->digest_length;
-
- DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_COMPOUND_FMT(fd);
fle++;
- if (sess->dir == DIR_ENC) {
- DPAA2_SET_FLE_ADDR(fle,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length);
- fle->length = sym_op->auth.data.length;
- } else {
- sge = fle + 2;
- DPAA2_SET_FLE_SG_EXT(fle);
- DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ /* Setting input FLE */
+ DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
+ DPAA2_SET_FLE_SG_EXT(fle);
+ fle->length = data_len;
+
+ if (sess->iv.length) {
+ uint8_t *iv_ptr;
+
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sess->iv.offset);
- if (likely(bpid < MAX_BPID)) {
- DPAA2_SET_FLE_BPID(sge, bpid);
- DPAA2_SET_FLE_BPID(sge + 1, bpid);
+ if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+ iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+ sge->length = 12;
} else {
- DPAA2_SET_FLE_IVP(sge);
- DPAA2_SET_FLE_IVP((sge + 1));
+ sge->length = sess->iv.length;
}
- DPAA2_SET_FLE_ADDR(sge,
- DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->auth.data.offset +
- sym_op->m_src->data_off);
- DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
- sess->digest_length);
- sge->length = sym_op->auth.data.length;
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+ fle->length = fle->length + sge->length;
+ sge++;
+ }
+
+ /* Setting data to authenticate */
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
+ sge->length = data_len;
+
+ if (sess->dir == DIR_DEC) {
sge++;
old_digest = (uint8_t *)(sge + 1);
rte_memcpy(old_digest, sym_op->auth.digest.data,
sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
sge->length = sess->digest_length;
- fle->length = sym_op->auth.data.length +
- sess->digest_length;
- DPAA2_SET_FLE_FIN(sge);
+ fle->length = fle->length + sess->digest_length;
}
+
+ DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
+ DPAA2_SET_FD_LEN(fd, fle->length);
return 0;
}
@@ -1046,6 +1109,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *ip_fle, *op_fle, *sge, *fle;
+ int data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
struct rte_mbuf *mbuf;
@@ -1054,6 +1118,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
mbuf = sym_op->m_dst;
else
@@ -1079,20 +1157,20 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
/* o/p fle */
DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
- op_fle->length = sym_op->cipher.data.length;
+ op_fle->length = data_len;
DPAA2_SET_FLE_SG_EXT(op_fle);
/* o/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset + mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* o/p segs */
@@ -1114,7 +1192,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
mbuf = sym_op->m_src;
sge++;
DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
- ip_fle->length = sess->iv.length + sym_op->cipher.data.length;
+ ip_fle->length = sess->iv.length + data_len;
DPAA2_SET_FLE_SG_EXT(ip_fle);
/* i/p IV */
@@ -1126,9 +1204,8 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
/* i/p 1st seg */
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(mbuf));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- mbuf->data_off);
- sge->length = mbuf->data_len - sym_op->cipher.data.offset;
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + mbuf->data_off);
+ sge->length = mbuf->data_len - data_offset;
mbuf = mbuf->next;
/* i/p segs */
@@ -1165,7 +1242,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
{
struct rte_crypto_sym_op *sym_op = op->sym;
struct qbman_fle *fle, *sge;
- int retval;
+ int retval, data_len, data_offset;
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1174,6 +1251,20 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
PMD_INIT_FUNC_TRACE();
+ data_len = sym_op->cipher.data.length;
+ data_offset = sym_op->cipher.data.offset;
+
+ if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+ sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+ if ((data_len & 7) || (data_offset & 7)) {
+ DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+ return -1;
+ }
+
+ data_len = data_len >> 3;
+ data_offset = data_offset >> 3;
+ }
+
if (sym_op->m_dst)
dst = sym_op->m_dst;
else
@@ -1212,24 +1303,22 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
flc = &priv->flc_desc[0].flc;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
- DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
- sess->iv.length);
+ DPAA2_SET_FD_LEN(fd, data_len + sess->iv.length);
DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SEC_DP_DEBUG(
"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
" data_off: 0x%x\n",
- sym_op->cipher.data.offset,
- sym_op->cipher.data.length,
+ data_offset,
+ data_len,
sess->iv.length,
sym_op->m_src->data_off);
DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(dst));
- DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
- dst->data_off);
+ DPAA2_SET_FLE_OFFSET(fle, data_offset + dst->data_off);
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SEC_DP_DEBUG(
"CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
@@ -1239,7 +1328,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle++;
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
- fle->length = sym_op->cipher.data.length + sess->iv.length;
+ fle->length = data_len + sess->iv.length;
DPAA2_SET_FLE_SG_EXT(fle);
@@ -1248,10 +1337,9 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
- DPAA2_SET_FLE_OFFSET(sge, sym_op->cipher.data.offset +
- sym_op->m_src->data_off);
+ DPAA2_SET_FLE_OFFSET(sge, data_offset + sym_op->m_src->data_off);
- sge->length = sym_op->cipher.data.length;
+ sge->length = data_len;
DPAA2_SET_FLE_FIN(sge);
DPAA2_SET_FLE_FIN(fle);
@@ -1762,32 +1850,60 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
/* Set IV parameters */
session->iv.offset = xform->cipher.iv.offset;
session->iv.length = xform->cipher.iv.length;
+ session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ DIR_ENC : DIR_DEC;
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_3DES;
cipherdata.algmode = OP_ALG_AAI_CBC;
session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CBC;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
cipherdata.algmode = OP_ALG_AAI_CTR;
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
break;
case RTE_CRYPTO_CIPHER_3DES_CTR:
+ cipherdata.algtype = OP_ALG_ALGSEL_3DES;
+ cipherdata.algmode = OP_ALG_AAI_CTR;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_3DES_CTR;
+ bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0,
+ SHR_NEVER, &cipherdata, NULL,
+ session->iv.length,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ cipherdata.algtype = OP_ALG_ALGSEL_SNOW_F8;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
+ bufsize = cnstr_shdsc_snow_f8(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_XTS:
- case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_ARC4:
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
- case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
- case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
DPAA2_SEC_ERR("Crypto: Unsupported Cipher alg %u",
xform->cipher.algo);
@@ -1797,12 +1913,7 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
xform->cipher.algo);
goto error_out;
}
- session->dir = (xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_blkcipher(priv->flc_desc[0].desc, 1, 0, SHR_NEVER,
- &cipherdata, NULL, session->iv.length,
- session->dir);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Descriptor build failed");
goto error_out;
@@ -1865,40 +1976,77 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
authdata.key_type = RTA_DATA_IMM;
session->digest_length = xform->auth.digest_length;
+ session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
+ DIR_ENC : DIR_DEC;
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA1_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
authdata.algtype = OP_ALG_ALGSEL_MD5;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_MD5_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA256;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA256_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA384;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA384_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA512;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA512_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA224;
authdata.algmode = OP_ALG_AAI_HMAC;
session->auth_alg = RTE_CRYPTO_AUTH_SHA224_HMAC;
+ bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, SHR_NEVER, &authdata,
+ !session->dir,
+ session->digest_length);
break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+ authdata.algtype = OP_ALG_ALGSEL_SNOW_F9;
+ authdata.algmode = OP_ALG_AAI_F9;
+ session->auth_alg = RTE_CRYPTO_AUTH_SNOW3G_UIA2;
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ bufsize = cnstr_shdsc_snow_f9(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, &authdata,
+ !session->dir,
+ session->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_ZUC_EIA3:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -1907,10 +2055,9 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_SHA384:
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_GMAC:
- case RTE_CRYPTO_AUTH_KASUMI_F9:
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
case RTE_CRYPTO_AUTH_AES_CMAC:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
- case RTE_CRYPTO_AUTH_ZUC_EIA3:
DPAA2_SEC_ERR("Crypto: Unsupported auth alg %u\n",
xform->auth.algo);
goto error_out;
@@ -1919,12 +2066,7 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
xform->auth.algo);
goto error_out;
}
- session->dir = (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) ?
- DIR_ENC : DIR_DEC;
- bufsize = cnstr_shdsc_hmac(priv->flc_desc[DESC_INITFINAL].desc,
- 1, 0, SHR_NEVER, &authdata, !session->dir,
- session->digest_length);
if (bufsize < 0) {
DPAA2_SEC_ERR("Crypto: Invalid buffer length");
goto error_out;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index afd98b2d5..3d793363b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -411,7 +411,51 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
-
+ { /* SNOW 3G (UIA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* SNOW 3G (UEA2) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 5e8e5e79c..042cffa59 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -57,6 +57,36 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
return PROGRAM_FINALIZE(p);
}
+/**
+ * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV 16bit to 12 bit convert
+ * function for 3G.
+ * @iv: 16 bit original IV data
+ *
+ * Return: 12 bit IV data as understood by SEC HW
+ */
+
+static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
+{
+ uint8_t temp = (iv[8] == iv[0]) ? 0 : 4;
+
+ iv[12] = iv[4];
+ iv[13] = iv[5];
+ iv[14] = iv[6];
+ iv[15] = iv[7];
+
+ iv[8] = temp;
+ iv[9] = 0x00;
+ iv[10] = 0x00;
+ iv[11] = 0x00;
+
+ iv[4] = iv[0];
+ iv[5] = iv[1];
+ iv[6] = iv[2];
+ iv[7] = iv[3];
+
+ return (iv + 4);
+}
+
/**
* cnstr_shdsc_snow_f9 - SNOW/f9 (UIA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
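The `conv_to_snow_f9_iv()` helper introduced above rewrites the 16-byte SNOW 3G f9 IV in place into the 12-byte layout the SEC engine consumes. A minimal standalone sketch of the same transformation (function name `snow_f9_iv_to_sec` is mine; the body mirrors the patch):

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of conv_to_snow_f9_iv(): the 16-byte f9 IV
 * (COUNT || FRESH || COUNT^dir || FRESH^dir) is rewritten in place
 * into the 12-byte SEC layout COUNT || dir-word || FRESH, returned
 * at offset 4 of the same buffer. */
static uint8_t *snow_f9_iv_to_sec(uint8_t *iv)
{
	/* direction bit recovered by comparing COUNT with COUNT^dir */
	uint8_t temp = (iv[8] == iv[0]) ? 0 : 4;

	iv[12] = iv[4]; iv[13] = iv[5]; iv[14] = iv[6]; iv[15] = iv[7]; /* FRESH */
	iv[8] = temp; iv[9] = 0; iv[10] = 0; iv[11] = 0;                /* dir word */
	iv[4] = iv[0]; iv[5] = iv[1]; iv[6] = iv[2]; iv[7] = iv[3];     /* COUNT */
	return iv + 4;
}
```

Note the conversion is destructive: the driver calls it on the per-op IV copy, so the original 16-byte IV is consumed.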
* [dpdk-dev] [PATCH v3 18/24] crypto/dpaa2_sec/hw: support kasumi
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (16 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 17/24] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 19/24] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
` (6 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Hemant Agrawal
From: Vakul Garg <vakul.garg@nxp.com>
Add Kasumi processing for non-PDCP proto offload cases.
Also add support for a pre-computed IV in Kasumi-f9.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 64 +++++++++++--------------
1 file changed, 29 insertions(+), 35 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 042cffa59..4316ca15e 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -349,34 +349,25 @@ cnstr_shdsc_hmac(uint32_t *descbuf, bool ps, bool swap,
*/
static inline int
cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *cipherdata, uint8_t dir,
- uint32_t count, uint8_t bearer, uint8_t direction)
+ struct alginfo *cipherdata, uint8_t dir)
{
struct program prg;
struct program *p = &prg;
- uint64_t ct = count;
- uint64_t br = bearer;
- uint64_t dr = direction;
- uint32_t context[2] = { ct, (br << 27) | (dr << 26) };
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
-
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
SHR_HDR(p, SHR_ALWAYS, 1, 0);
KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
cipherdata->keylen, INLINE_KEY(cipherdata));
+ SEQLOAD(p, CONTEXT1, 0, 8, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F8,
OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 8, IMMED | COPY);
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
SEQFIFOSTORE(p, MSG, 0, 0, VLF);
@@ -390,46 +381,49 @@ cnstr_shdsc_kasumi_f8(uint32_t *descbuf, bool ps, bool swap,
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
- * @dir: cipher direction (DIR_ENC/DIR_DEC)
- * @count: count value (32 bits)
- * @fresh: fresh value ID (32 bits)
- * @direction: direction (1 bit)
- * @datalen: size of data
+ * @chk_icv: check or generate ICV value
+ * @authlen: size of digest
*
* Return: size of descriptor written in words or negative number on error
*/
static inline int
cnstr_shdsc_kasumi_f9(uint32_t *descbuf, bool ps, bool swap,
- struct alginfo *authdata, uint8_t dir,
- uint32_t count, uint32_t fresh, uint8_t direction,
- uint32_t datalen)
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
{
struct program prg;
struct program *p = &prg;
- uint16_t ctx_offset = 16;
- uint32_t context[6] = {count, direction << 26, fresh, 0, 0, 0};
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
PROGRAM_CNTXT_INIT(p, descbuf, 0);
- if (swap) {
+ if (swap)
PROGRAM_SET_BSWAP(p);
- context[0] = swab32(context[0]);
- context[1] = swab32(context[1]);
- context[2] = swab32(context[2]);
- }
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
+
SHR_HDR(p, SHR_ALWAYS, 1, 0);
- KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen,
- INLINE_KEY(authdata));
- MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 12, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
ALG_OPERATION(p, OP_ALG_ALGSEL_KASUMI, OP_ALG_AAI_F9,
- OP_ALG_AS_INITFINAL, 0, dir);
- LOAD(p, (uintptr_t)context, CONTEXT1, 0, 24, IMMED | COPY);
- SEQFIFOLOAD(p, BIT_DATA, datalen, CLASS1 | LAST1);
- /* Save output MAC of DWORD 2 into a 32-bit sequence */
- SEQSTORE(p, CONTEXT1, ctx_offset, 4, 0);
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
return PROGRAM_FINALIZE(p);
}
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
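With the reworked `cnstr_shdsc_kasumi_f8()`, count/bearer/direction are no longer baked into the descriptor via an immediate LOAD; the caller instead prepends an 8-byte context that the descriptor picks up with SEQLOAD. A sketch of building that context, using the same layout as the removed `context[2]` initializer (helper name `kasumi_f8_build_ctx` is mine):

```c
#include <stdint.h>
#include <assert.h>

/* Build the 8-byte KASUMI-f8 context the descriptor now SEQLOADs:
 * word0 = COUNT, word1 = BEARER(5 bits) << 27 | DIRECTION(1 bit) << 26,
 * matching the removed in-descriptor context initializer. */
static void kasumi_f8_build_ctx(uint32_t count, uint8_t bearer,
				uint8_t direction, uint32_t ctx[2])
{
	ctx[0] = count;
	ctx[1] = ((uint32_t)(bearer & 0x1f) << 27) |
		 ((uint32_t)(direction & 1) << 26);
}
```

Endianness handling (the removed `swab32()` calls) now falls to whoever prepares the input sequence, since the descriptor no longer owns the context bytes.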
* [dpdk-dev] [PATCH v3 19/24] crypto/dpaa2_sec/hw: support ZUCE and ZUCA
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (17 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 18/24] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
@ 2019-09-30 14:40 ` Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 20/24] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
` (5 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:40 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Vakul Garg, Hemant Agrawal
From: Vakul Garg <vakul.garg@nxp.com>
This patch adds support for ZUC encryption and ZUC authentication.
Before passing to CAAM, the 16-byte ZUCA IV is converted to an 8-byte
format which consists of 38 bits of count||bearer||direction.
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/hw/desc/algo.h | 136 +++++++++++++++++++++++-
1 file changed, 132 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/hw/desc/algo.h b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
index 4316ca15e..32ce787fa 100644
--- a/drivers/crypto/dpaa2_sec/hw/desc/algo.h
+++ b/drivers/crypto/dpaa2_sec/hw/desc/algo.h
@@ -17,6 +17,103 @@
* Shared descriptors for algorithms (i.e. not for protocols).
*/
+/**
+ * cnstr_shdsc_zuce - ZUC Enc (EEA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @cipherdata: pointer to block cipher transform definitions
+ * @dir: Cipher direction (DIR_ENC/DIR_DEC)
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuce(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *cipherdata, uint8_t dir)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ SEQLOAD(p, CONTEXT1, 0, 16, 0);
+
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCE, OP_ALG_AAI_F8,
+ OP_ALG_AS_INITFINAL, 0, dir);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * cnstr_shdsc_zuca - ZUC Auth (EIA2) as a shared descriptor
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40bit addressing is desired, this parameter must be true
+ * @swap: must be true when core endianness doesn't match SEC endianness
+ * @authdata: pointer to authentication transform definitions
+ * @chk_icv: Whether to compare and verify ICV (true/false)
+ * @authlen: size of digest
+ *
+ * The IV prepended before the payload must be 8 bytes, consisting
+ * of COUNT||BEARER||DIR. COUNT is 32 bits, bearer is 5 bits and
+ * direction is 1 bit, totalling 38 bits.
+ *
+ * Return: size of descriptor written in words or negative number on error
+ */
+static inline int
+cnstr_shdsc_zuca(uint32_t *descbuf, bool ps, bool swap,
+ struct alginfo *authdata, uint8_t chk_icv,
+ uint32_t authlen)
+{
+ struct program prg;
+ struct program *p = &prg;
+ int dir = chk_icv ? DIR_DEC : DIR_ENC;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (swap)
+ PROGRAM_SET_BSWAP(p);
+
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_ALWAYS, 1, 0);
+
+ KEY(p, KEY2, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+
+ SEQLOAD(p, CONTEXT2, 0, 8, 0);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ MATHB(p, SEQINSZ, SUB, authlen, VSEQINSZ, 4, IMMED2);
+ else
+ MATHB(p, SEQINSZ, SUB, ZERO, VSEQINSZ, 4, 0);
+
+ ALG_OPERATION(p, OP_ALG_ALGSEL_ZUCA, OP_ALG_AAI_F9,
+ OP_ALG_AS_INITFINAL, chk_icv, dir);
+
+ SEQFIFOLOAD(p, MSG2, 0, VLF | CLASS2 | LAST2);
+
+ if (chk_icv == ICV_CHECK_ENABLE)
+ SEQFIFOLOAD(p, ICV2, authlen, LAST2);
+ else
+ /* Save lower half of MAC out into a 32-bit sequence */
+ SEQSTORE(p, CONTEXT2, 0, authlen, 0);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+
/**
* cnstr_shdsc_snow_f8 - SNOW/f8 (UEA2) as a shared descriptor
* @descbuf: pointer to descriptor-under-construction buffer
@@ -58,11 +155,43 @@ cnstr_shdsc_snow_f8(uint32_t *descbuf, bool ps, bool swap,
}
/**
- * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV 16bit to 12 bit convert
+ * conv_to_zuc_eia_iv - ZUCA IV 16-byte to 8-byte convert
+ * function for 3G.
+ * @iv: 16 bytes of original IV data.
+ *
+ * From the original IV, we extract 32 bits of COUNT,
+ * 5 bits of bearer and 1 bit of direction.
+ * Refer to the CAAM reference manual for the ZUCA IV format. These
+ * values are concatenated as COUNT||BEARER||DIR to make a 38-bit block.
+ * This 38-bit block is copied left justified into 8-byte array used as
+ * converted IV.
+ *
+ * Return: 8-bytes of IV data as understood by SEC HW
+ */
+
+static inline uint8_t *conv_to_zuc_eia_iv(uint8_t *iv)
+{
+ uint8_t dir = (iv[14] & 0x80) ? 4 : 0;
+
+ iv[12] = iv[4] | dir;
+ iv[13] = 0;
+ iv[14] = 0;
+ iv[15] = 0;
+
+ iv[8] = iv[0];
+ iv[9] = iv[1];
+ iv[10] = iv[2];
+ iv[11] = iv[3];
+
+ return (iv + 8);
+}
+
+/**
+ * conv_to_snow_f9_iv - SNOW/f9 (UIA2) IV 16 byte to 12 byte convert
* function for 3G.
- * @iv: 16 bit original IV data
+ * @iv: 16 byte original IV data
*
- * Return: 12 bit IV data as understood by SEC HW
+ * Return: 12 byte IV data as understood by SEC HW
*/
static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
@@ -93,7 +222,6 @@ static inline uint8_t *conv_to_snow_f9_iv(uint8_t *iv)
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: must be true when core endianness doesn't match SEC endianness
* @authdata: pointer to authentication transform definitions
- * @dir: cipher direction (DIR_ENC/DIR_DEC)
* @chk_icv: check or generate ICV value
* @authlen: size of digest
*
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
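The `conv_to_zuc_eia_iv()` helper added above performs the 16-byte-to-8-byte ZUCA IV conversion described in the commit message. A minimal standalone sketch (function name `zuc_eia_iv_to_sec` is mine; the body mirrors the patch):

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of conv_to_zuc_eia_iv(): extract COUNT (iv[0..3]), the 5-bit
 * bearer (top bits of iv[4]) and the direction bit (bit 7 of iv[14]),
 * and pack them left justified as COUNT||BEARER||DIR into 8 bytes,
 * returned at offset 8 of the same buffer. */
static uint8_t *zuc_eia_iv_to_sec(uint8_t *iv)
{
	/* direction moves from bit 7 of iv[14] to bit 2 of the bearer byte */
	uint8_t dir = (iv[14] & 0x80) ? 4 : 0;

	iv[12] = iv[4] | dir;          /* BEARER(5) || DIR(1), left justified */
	iv[13] = 0;
	iv[14] = 0;
	iv[15] = 0;
	iv[8] = iv[0]; iv[9] = iv[1]; iv[10] = iv[2]; iv[11] = iv[3]; /* COUNT */
	return iv + 8;
}
```

As with the SNOW f9 variant, the conversion is in place, so the driver applies it to the per-op IV copy before building the frame descriptor.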
* [dpdk-dev] [PATCH v3 20/24] crypto/dpaa2_sec: support zuc ciphering/integrity
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (18 preceding siblings ...)
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 19/24] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
@ 2019-09-30 14:41 ` Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 21/24] crypto/dpaa2_sec: allocate context as per num segs Akhil Goyal
` (4 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:41 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Vakul Garg
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 25 +++++++++++-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 45 +++++++++++++++++++++
2 files changed, 68 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 451fa91fb..165324567 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1072,6 +1072,9 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
iv_ptr = conv_to_snow_f9_iv(iv_ptr);
sge->length = 12;
+ } else if (sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+ iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+ sge->length = 8;
} else {
sge->length = sess->iv.length;
}
@@ -1897,8 +1900,14 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
&cipherdata,
session->dir);
break;
- case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+ cipherdata.algtype = OP_ALG_ALGSEL_ZUCE;
+ session->cipher_alg = RTE_CRYPTO_CIPHER_ZUC_EEA3;
+ bufsize = cnstr_shdsc_zuce(priv->flc_desc[0].desc, 1, 0,
+ &cipherdata,
+ session->dir);
+ break;
+ case RTE_CRYPTO_CIPHER_KASUMI_F8:
case RTE_CRYPTO_CIPHER_AES_F8:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_3DES_ECB:
@@ -2045,8 +2054,18 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
!session->dir,
session->digest_length);
break;
- case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_ZUC_EIA3:
+ authdata.algtype = OP_ALG_ALGSEL_ZUCA;
+ authdata.algmode = OP_ALG_AAI_F9;
+ session->auth_alg = RTE_CRYPTO_AUTH_ZUC_EIA3;
+ session->iv.offset = xform->auth.iv.offset;
+ session->iv.length = xform->auth.iv.length;
+ bufsize = cnstr_shdsc_zuca(priv->flc_desc[DESC_INITFINAL].desc,
+ 1, 0, &authdata,
+ !session->dir,
+ session->digest_length);
+ break;
+ case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_NULL:
case RTE_CRYPTO_AUTH_SHA1:
case RTE_CRYPTO_AUTH_SHA256:
@@ -2357,6 +2376,7 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
session->cipher_alg = RTE_CRYPTO_CIPHER_AES_CTR;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_NULL:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
@@ -2651,6 +2671,7 @@ dpaa2_sec_ipsec_proto_init(struct rte_crypto_cipher_xform *cipher_xform,
cipherdata->algtype = OP_PCL_IPSEC_NULL;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+ case RTE_CRYPTO_CIPHER_ZUC_EEA3:
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_KASUMI_F8:
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 3d793363b..ca4fcfe9b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -456,6 +456,51 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
}, }
}, }
},
+ { /* ZUC (EEA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* ZUC (EIA3) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
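In `dpaa2_sec_auth_init()` above, the session direction is derived from the auth op and then passed negated as `chk_icv` to `cnstr_shdsc_zuca()`/`cnstr_shdsc_snow_f9()`. A sketch of that mapping; the macro values are assumptions taken from the usual RTA header definitions (DIR_DEC = 0, DIR_ENC = 1, ICV_CHECK_ENABLE = 1), not quoted from this patch:

```c
#include <assert.h>

/* Assumed RTA values (not confirmed by the patch text). */
#define DIR_DEC 0
#define DIR_ENC 1
#define ICV_CHECK_DISABLE 0
#define ICV_CHECK_ENABLE  1

/* GENERATE -> session->dir = DIR_ENC -> !dir disables the ICV check
 * (the descriptor stores the MAC); VERIFY -> DIR_DEC -> !dir enables
 * the ICV check (the descriptor compares against the trailing ICV). */
static int chk_icv_from_auth_op(int is_generate)
{
	int dir = is_generate ? DIR_ENC : DIR_DEC;
	return !dir;	/* what the driver passes as chk_icv */
}
```

This is why the descriptor builders no longer take a separate `dir` argument: they recompute it internally as `chk_icv ? DIR_DEC : DIR_ENC`.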
* [dpdk-dev] [PATCH v3 21/24] crypto/dpaa2_sec: allocate context as per num segs
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (19 preceding siblings ...)
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 20/24] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
@ 2019-09-30 14:41 ` Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 22/24] crypto/dpaa_sec: dynamic contxt buffer for SG cases Akhil Goyal
` (3 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:41 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal, Akhil Goyal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
The DPAA2_SEC hardware can support any number of SG entries.
This patch allocates as many SG entries as are required.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 28 +++++++++++++--------
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 165324567..b811f2f1b 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -55,7 +55,7 @@ typedef uint64_t dma_addr_t;
#define FLE_POOL_NUM_BUFS 32000
#define FLE_POOL_BUF_SIZE 256
#define FLE_POOL_CACHE_SIZE 512
-#define FLE_SG_MEM_SIZE 2048
+#define FLE_SG_MEM_SIZE(num) (FLE_POOL_BUF_SIZE + ((num) * 32))
#define SEC_FLC_DHR_OUTBOUND -114
#define SEC_FLC_DHR_INBOUND 0
@@ -83,13 +83,14 @@ build_proto_compound_sg_fd(dpaa2_sec_session *sess,
mbuf = sym_op->m_src;
/* first FLE entry used to store mbuf and session ctxt */
- fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ fle = (struct qbman_fle *)rte_malloc(NULL,
+ FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs),
RTE_CACHE_LINE_SIZE);
if (unlikely(!fle)) {
DPAA2_SEC_DP_ERR("Proto:SG: Memory alloc failed for SGE");
return -1;
}
- memset(fle, 0, FLE_SG_MEM_SIZE);
+ memset(fle, 0, FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs));
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
@@ -312,13 +313,14 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
mbuf = sym_op->m_src;
/* first FLE entry used to store mbuf and session ctxt */
- fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ fle = (struct qbman_fle *)rte_malloc(NULL,
+ FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs),
RTE_CACHE_LINE_SIZE);
if (unlikely(!fle)) {
DPAA2_SEC_ERR("GCM SG: Memory alloc failed for SGE");
return -1;
}
- memset(fle, 0, FLE_SG_MEM_SIZE);
+ memset(fle, 0, FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs));
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (size_t)priv);
@@ -608,13 +610,14 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
mbuf = sym_op->m_src;
/* first FLE entry used to store mbuf and session ctxt */
- fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ fle = (struct qbman_fle *)rte_malloc(NULL,
+ FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs),
RTE_CACHE_LINE_SIZE);
if (unlikely(!fle)) {
DPAA2_SEC_ERR("AUTHENC SG: Memory alloc failed for SGE");
return -1;
}
- memset(fle, 0, FLE_SG_MEM_SIZE);
+ memset(fle, 0, FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs));
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
@@ -901,13 +904,14 @@ static inline int build_auth_sg_fd(
}
mbuf = sym_op->m_src;
- fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ fle = (struct qbman_fle *)rte_malloc(NULL,
+ FLE_SG_MEM_SIZE(mbuf->nb_segs),
RTE_CACHE_LINE_SIZE);
if (unlikely(!fle)) {
DPAA2_SEC_ERR("AUTH SG: Memory alloc failed for SGE");
return -1;
}
- memset(fle, 0, FLE_SG_MEM_SIZE);
+ memset(fle, 0, FLE_SG_MEM_SIZE(mbuf->nb_segs));
/* first FLE entry used to store mbuf and session ctxt */
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
@@ -1140,13 +1144,15 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
else
mbuf = sym_op->m_src;
- fle = (struct qbman_fle *)rte_malloc(NULL, FLE_SG_MEM_SIZE,
+ /* first FLE entry used to store mbuf and session ctxt */
+ fle = (struct qbman_fle *)rte_malloc(NULL,
+ FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs),
RTE_CACHE_LINE_SIZE);
if (!fle) {
DPAA2_SEC_ERR("CIPHER SG: Memory alloc failed for SGE");
return -1;
}
- memset(fle, 0, FLE_SG_MEM_SIZE);
+ memset(fle, 0, FLE_SG_MEM_SIZE(mbuf->nb_segs + sym_op->m_src->nb_segs));
/* first FLE entry used to store mbuf and session ctxt */
DPAA2_SET_FLE_ADDR(fle, (size_t)op);
DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
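The new `FLE_SG_MEM_SIZE(num)` macro replaces the fixed 2048-byte allocation with a size derived from the segment count. A sketch reproducing the arithmetic (constants copied from the patch; the interpretation as "fixed header region plus 32 bytes per SG entry" is mine):

```c
#include <assert.h>

/* Fixed region for the first FLEs and session context, from the patch. */
#define FLE_POOL_BUF_SIZE 256
/* Dynamic sizing: base region plus 32 bytes per scatter-gather entry. */
#define FLE_SG_MEM_SIZE(num) (FLE_POOL_BUF_SIZE + ((num) * 32))
```

Under this sizing the old fixed 2048-byte buffer corresponds to 56 SG entries, so small-segment ops now allocate far less while large-segment ops are no longer capped by the buffer size.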
* [dpdk-dev] [PATCH v3 22/24] crypto/dpaa_sec: dynamic contxt buffer for SG cases
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (20 preceding siblings ...)
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 21/24] crypto/dpaa2_sec: allocate context as per num segs Akhil Goyal
@ 2019-09-30 14:41 ` Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 23/24] crypto/dpaa_sec: change per cryptodev pool to per qp Akhil Goyal
` (2 subsequent siblings)
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:41 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch allocates/cleans the SEC context dynamically,
based on the number of SG entries in the buffer.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 43 ++++++++++++++----------------
drivers/crypto/dpaa_sec/dpaa_sec.h | 8 +++---
2 files changed, 23 insertions(+), 28 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 291cba28d..fa9d03adc 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -65,10 +65,10 @@ dpaa_sec_op_ending(struct dpaa_sec_op_ctx *ctx)
}
static inline struct dpaa_sec_op_ctx *
-dpaa_sec_alloc_ctx(dpaa_sec_session *ses)
+dpaa_sec_alloc_ctx(dpaa_sec_session *ses, int sg_count)
{
struct dpaa_sec_op_ctx *ctx;
- int retval;
+ int i, retval;
retval = rte_mempool_get(ses->ctx_pool, (void **)(&ctx));
if (!ctx || retval) {
@@ -81,14 +81,11 @@ dpaa_sec_alloc_ctx(dpaa_sec_session *ses)
* to clear all the SG entries. dpaa_sec_alloc_ctx() is called for
* each packet, memset is costlier than dcbz_64().
*/
- dcbz_64(&ctx->job.sg[SG_CACHELINE_0]);
- dcbz_64(&ctx->job.sg[SG_CACHELINE_1]);
- dcbz_64(&ctx->job.sg[SG_CACHELINE_2]);
- dcbz_64(&ctx->job.sg[SG_CACHELINE_3]);
+ for (i = 0; i < sg_count && i < MAX_JOB_SG_ENTRIES; i += 4)
+ dcbz_64(&ctx->job.sg[i]);
ctx->ctx_pool = ses->ctx_pool;
- ctx->vtop_offset = (size_t) ctx
- - rte_mempool_virt2iova(ctx);
+ ctx->vtop_offset = (size_t) ctx - rte_mempool_virt2iova(ctx);
return ctx;
}
@@ -855,12 +852,12 @@ build_auth_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
else
extra_segs = 2;
- if ((mbuf->nb_segs + extra_segs) > MAX_SG_ENTRIES) {
+ if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Auth: Max sec segs supported is %d",
MAX_SG_ENTRIES);
return NULL;
}
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, mbuf->nb_segs + extra_segs);
if (!ctx)
return NULL;
@@ -938,7 +935,7 @@ build_auth_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
rte_iova_t start_addr;
uint8_t *old_digest;
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
return NULL;
@@ -1008,13 +1005,13 @@ build_cipher_only_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
req_segs = mbuf->nb_segs * 2 + 3;
}
- if (req_segs > MAX_SG_ENTRIES) {
+ if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Cipher: Max sec segs supported is %d",
MAX_SG_ENTRIES);
return NULL;
}
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, req_segs);
if (!ctx)
return NULL;
@@ -1094,7 +1091,7 @@ build_cipher_only(struct rte_crypto_op *op, dpaa_sec_session *ses)
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
ses->iv.offset);
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, 4);
if (!ctx)
return NULL;
@@ -1161,13 +1158,13 @@ build_cipher_auth_gcm_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
if (ses->auth_only_len)
req_segs++;
- if (req_segs > MAX_SG_ENTRIES) {
+ if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("AEAD: Max sec segs supported is %d",
MAX_SG_ENTRIES);
return NULL;
}
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, req_segs);
if (!ctx)
return NULL;
@@ -1296,7 +1293,7 @@ build_cipher_auth_gcm(struct rte_crypto_op *op, dpaa_sec_session *ses)
else
dst_start_addr = src_start_addr;
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, 7);
if (!ctx)
return NULL;
@@ -1409,13 +1406,13 @@ build_cipher_auth_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
req_segs = mbuf->nb_segs * 2 + 4;
}
- if (req_segs > MAX_SG_ENTRIES) {
+ if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Cipher-Auth: Max sec segs supported is %d",
MAX_SG_ENTRIES);
return NULL;
}
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, req_segs);
if (!ctx)
return NULL;
@@ -1533,7 +1530,7 @@ build_cipher_auth(struct rte_crypto_op *op, dpaa_sec_session *ses)
else
dst_start_addr = src_start_addr;
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, 7);
if (!ctx)
return NULL;
@@ -1619,7 +1616,7 @@ build_proto(struct rte_crypto_op *op, dpaa_sec_session *ses)
struct qm_sg_entry *sg;
phys_addr_t src_start_addr, dst_start_addr;
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, 2);
if (!ctx)
return NULL;
cf = &ctx->job;
@@ -1666,13 +1663,13 @@ build_proto_sg(struct rte_crypto_op *op, dpaa_sec_session *ses)
mbuf = sym->m_src;
req_segs = mbuf->nb_segs + sym->m_src->nb_segs + 2;
- if (req_segs > MAX_SG_ENTRIES) {
+ if (mbuf->nb_segs > MAX_SG_ENTRIES) {
DPAA_SEC_DP_ERR("Proto: Max sec segs supported is %d",
MAX_SG_ENTRIES);
return NULL;
}
- ctx = dpaa_sec_alloc_ctx(ses);
+ ctx = dpaa_sec_alloc_ctx(ses, req_segs);
if (!ctx)
return NULL;
cf = &ctx->job;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 68461cecc..2a6a3fad7 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -183,13 +183,11 @@ struct dpaa_sec_dev_private {
};
#define MAX_SG_ENTRIES 16
-#define SG_CACHELINE_0 0
-#define SG_CACHELINE_1 4
-#define SG_CACHELINE_2 8
-#define SG_CACHELINE_3 12
+#define MAX_JOB_SG_ENTRIES 36
+
struct dpaa_sec_job {
/* sg[0] output, sg[1] input, others are possible sub frames */
- struct qm_sg_entry sg[MAX_SG_ENTRIES];
+ struct qm_sg_entry sg[MAX_JOB_SG_ENTRIES];
};
#define DPAA_MAX_NB_MAX_DIGEST 32
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
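The reworked `dpaa_sec_alloc_ctx()` above replaces the four fixed `dcbz_64()` calls with a loop that zeroes only as many 64-byte cachelines as the job needs (each `qm_sg_entry` is 16 bytes, so one `dcbz_64()` covers 4 entries). A sketch of the loop's behavior (counter helper `cachelines_cleared` is mine):

```c
#include <assert.h>

/* Upper bound on SG entries per job, from the patch. */
#define MAX_JOB_SG_ENTRIES 36

/* Count how many 64-byte lines the patched loop would clear:
 * one dcbz_64() per 4 SG entries, capped at MAX_JOB_SG_ENTRIES. */
static int cachelines_cleared(int sg_count)
{
	int i, n = 0;

	for (i = 0; i < sg_count && i < MAX_JOB_SG_ENTRIES; i += 4)
		n++;	/* stands in for dcbz_64(&ctx->job.sg[i]) */
	return n;
}
```

So a simple 4-entry job clears one cacheline instead of the previous four, while the enlarged 36-entry job array clears at most nine.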
* [dpdk-dev] [PATCH v3 23/24] crypto/dpaa_sec: change per cryptodev pool to per qp
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (21 preceding siblings ...)
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 22/24] crypto/dpaa_sec: dynamic contxt buffer for SG cases Akhil Goyal
@ 2019-09-30 14:41 ` Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 24/24] crypto/dpaa2_sec: improve debug logging Akhil Goyal
2019-09-30 20:08 ` [dpdk-dev] [PATCH v3 00/24] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:41 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Akhil Goyal
In cases where a single cryptodev is used by multiple cores
through multiple queues, there will be contention for mempool
resources, which may eventually get exhausted.
The mempool should therefore be defined per core.
Since a qp is used per core, the mempools are now defined in qp setup.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa_sec/dpaa_sec.c | 58 ++++++++++++------------------
drivers/crypto/dpaa_sec/dpaa_sec.h | 3 +-
2 files changed, 24 insertions(+), 37 deletions(-)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index fa9d03adc..32c7392d8 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -70,7 +70,9 @@ dpaa_sec_alloc_ctx(dpaa_sec_session *ses, int sg_count)
struct dpaa_sec_op_ctx *ctx;
int i, retval;
- retval = rte_mempool_get(ses->ctx_pool, (void **)(&ctx));
+ retval = rte_mempool_get(
+ ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool,
+ (void **)(&ctx));
if (!ctx || retval) {
DPAA_SEC_DP_WARN("Alloc sec descriptor failed!");
return NULL;
@@ -84,7 +86,7 @@ dpaa_sec_alloc_ctx(dpaa_sec_session *ses, int sg_count)
for (i = 0; i < sg_count && i < MAX_JOB_SG_ENTRIES; i += 4)
dcbz_64(&ctx->job.sg[i]);
- ctx->ctx_pool = ses->ctx_pool;
+ ctx->ctx_pool = ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool;
ctx->vtop_offset = (size_t) ctx - rte_mempool_virt2iova(ctx);
return ctx;
@@ -1939,6 +1941,7 @@ dpaa_sec_queue_pair_release(struct rte_cryptodev *dev,
}
qp = &internals->qps[qp_id];
+ rte_mempool_free(qp->ctx_pool);
qp->internals = NULL;
dev->data->queue_pairs[qp_id] = NULL;
@@ -1953,6 +1956,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
{
struct dpaa_sec_dev_private *internals;
struct dpaa_sec_qp *qp = NULL;
+ char str[20];
DPAA_SEC_DEBUG("dev =%p, queue =%d, conf =%p", dev, qp_id, qp_conf);
@@ -1965,6 +1969,22 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
qp = &internals->qps[qp_id];
qp->internals = internals;
+ snprintf(str, sizeof(str), "ctx_pool_d%d_qp%d",
+ dev->data->dev_id, qp_id);
+ if (!qp->ctx_pool) {
+ qp->ctx_pool = rte_mempool_create((const char *)str,
+ CTX_POOL_NUM_BUFS,
+ CTX_POOL_BUF_SIZE,
+ CTX_POOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (!qp->ctx_pool) {
+ DPAA_SEC_ERR("%s create failed\n", str);
+ return -ENOMEM;
+ }
+ } else
+ DPAA_SEC_INFO("mempool already created for dev_id : %d, qp: %d",
+ dev->data->dev_id, qp_id);
dev->data->queue_pairs[qp_id] = qp;
return 0;
@@ -2181,7 +2201,6 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
DPAA_SEC_ERR("Invalid crypto type");
return -EINVAL;
}
- session->ctx_pool = internals->ctx_pool;
rte_spinlock_lock(&internals->lock);
for (i = 0; i < MAX_DPAA_CORES; i++) {
session->inq[i] = dpaa_sec_attach_rxq(internals);
@@ -2436,7 +2455,6 @@ dpaa_sec_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
session->dir = DIR_DEC;
} else
goto out;
- session->ctx_pool = internals->ctx_pool;
rte_spinlock_lock(&internals->lock);
for (i = 0; i < MAX_DPAA_CORES; i++) {
session->inq[i] = dpaa_sec_attach_rxq(internals);
@@ -2547,7 +2565,6 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
session->pdcp.hfn_ovd = pdcp_xform->hfn_ovrd;
session->pdcp.hfn_ovd_offset = cipher_xform->iv.offset;
- session->ctx_pool = dev_priv->ctx_pool;
rte_spinlock_lock(&dev_priv->lock);
for (i = 0; i < MAX_DPAA_CORES; i++) {
session->inq[i] = dpaa_sec_attach_rxq(dev_priv);
@@ -2624,32 +2641,11 @@ dpaa_sec_security_session_destroy(void *dev __rte_unused,
}
static int
-dpaa_sec_dev_configure(struct rte_cryptodev *dev,
+dpaa_sec_dev_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_config *config __rte_unused)
{
-
- char str[20];
- struct dpaa_sec_dev_private *internals;
-
PMD_INIT_FUNC_TRACE();
- internals = dev->data->dev_private;
- snprintf(str, sizeof(str), "ctx_pool_%d", dev->data->dev_id);
- if (!internals->ctx_pool) {
- internals->ctx_pool = rte_mempool_create((const char *)str,
- CTX_POOL_NUM_BUFS,
- CTX_POOL_BUF_SIZE,
- CTX_POOL_CACHE_SIZE, 0,
- NULL, NULL, NULL, NULL,
- SOCKET_ID_ANY, 0);
- if (!internals->ctx_pool) {
- DPAA_SEC_ERR("%s create failed\n", str);
- return -ENOMEM;
- }
- } else
- DPAA_SEC_INFO("mempool already created for dev_id : %d",
- dev->data->dev_id);
-
return 0;
}
@@ -2669,17 +2665,11 @@ dpaa_sec_dev_stop(struct rte_cryptodev *dev __rte_unused)
static int
dpaa_sec_dev_close(struct rte_cryptodev *dev)
{
- struct dpaa_sec_dev_private *internals;
-
PMD_INIT_FUNC_TRACE();
if (dev == NULL)
return -ENOMEM;
- internals = dev->data->dev_private;
- rte_mempool_free(internals->ctx_pool);
- internals->ctx_pool = NULL;
-
return 0;
}
@@ -2919,8 +2909,6 @@ dpaa_sec_uninit(struct rte_cryptodev *dev)
internals = dev->data->dev_private;
rte_free(dev->security_ctx);
- /* In case close has been called, internals->ctx_pool would be NULL */
- rte_mempool_free(internals->ctx_pool);
rte_free(internals);
DPAA_SEC_INFO("Closing DPAA_SEC device %s on numa socket %u",
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 2a6a3fad7..009ab7536 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -154,11 +154,11 @@ typedef struct dpaa_sec_session_entry {
struct dpaa_sec_qp *qp[MAX_DPAA_CORES];
struct qman_fq *inq[MAX_DPAA_CORES];
struct sec_cdb cdb; /**< cmd block associated with qp */
- struct rte_mempool *ctx_pool; /* session mempool for dpaa_sec_op_ctx */
} dpaa_sec_session;
struct dpaa_sec_qp {
struct dpaa_sec_dev_private *internals;
+ struct rte_mempool *ctx_pool; /* mempool for dpaa_sec_op_ctx */
struct qman_fq outq;
int rx_pkts;
int rx_errs;
@@ -173,7 +173,6 @@ struct dpaa_sec_qp {
/* internal sec queue interface */
struct dpaa_sec_dev_private {
void *sec_hw;
- struct rte_mempool *ctx_pool; /* per dev mempool for dpaa_sec_op_ctx */
struct dpaa_sec_qp qps[RTE_DPAA_MAX_NB_SEC_QPS]; /* i/o queue for sec */
struct qman_fq inq[RTE_DPAA_MAX_RX_QUEUE];
unsigned char inq_attach[RTE_DPAA_MAX_RX_QUEUE];
--
2.17.1
* [dpdk-dev] [PATCH v3 24/24] crypto/dpaa2_sec: improve debug logging
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (22 preceding siblings ...)
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 23/24] crypto/dpaa_sec: change per cryptodev pool to per qp Akhil Goyal
@ 2019-09-30 14:41 ` Akhil Goyal
2019-09-30 20:08 ` [dpdk-dev] [PATCH v3 00/24] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 14:41 UTC (permalink / raw)
To: dev; +Cc: aconole, anoobj, Hemant Agrawal
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Unnecessary debug logs in the data path are removed,
and hardware debug logs are compiled out.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 41 ++++++++-------------
1 file changed, 16 insertions(+), 25 deletions(-)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b811f2f1b..2ab34a00f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -305,8 +305,6 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
sess->iv.offset);
- PMD_INIT_FUNC_TRACE();
-
if (sym_op->m_dst)
mbuf = sym_op->m_dst;
else
@@ -453,8 +451,6 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
sess->iv.offset);
- PMD_INIT_FUNC_TRACE();
-
if (sym_op->m_dst)
dst = sym_op->m_dst;
else
@@ -602,8 +598,6 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
sess->iv.offset);
- PMD_INIT_FUNC_TRACE();
-
if (sym_op->m_dst)
mbuf = sym_op->m_dst;
else
@@ -748,8 +742,6 @@ build_authenc_fd(dpaa2_sec_session *sess,
sess->iv.offset);
struct rte_mbuf *dst;
- PMD_INIT_FUNC_TRACE();
-
if (sym_op->m_dst)
dst = sym_op->m_dst;
else
@@ -887,8 +879,6 @@ static inline int build_auth_sg_fd(
uint8_t *old_digest;
struct rte_mbuf *mbuf;
- PMD_INIT_FUNC_TRACE();
-
data_len = sym_op->auth.data.length;
data_offset = sym_op->auth.data.offset;
@@ -1006,8 +996,6 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint8_t *old_digest;
int retval;
- PMD_INIT_FUNC_TRACE();
-
data_len = sym_op->auth.data.length;
data_offset = sym_op->auth.data.offset;
@@ -1123,8 +1111,6 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
sess->iv.offset);
- PMD_INIT_FUNC_TRACE();
-
data_len = sym_op->cipher.data.length;
data_offset = sym_op->cipher.data.offset;
@@ -1258,8 +1244,6 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sess->iv.offset);
struct rte_mbuf *dst;
- PMD_INIT_FUNC_TRACE();
-
data_len = sym_op->cipher.data.length;
data_offset = sym_op->cipher.data.offset;
@@ -1371,8 +1355,6 @@ build_sec_fd(struct rte_crypto_op *op,
int ret = -1;
dpaa2_sec_session *sess;
- PMD_INIT_FUNC_TRACE();
-
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
sess = (dpaa2_sec_session *)get_sym_session_private_data(
op->sym->session, cryptodev_driver_id);
@@ -1821,7 +1803,7 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
{
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo cipherdata;
- int bufsize, i;
+ int bufsize;
struct ctxt_priv *priv;
struct sec_flow_context *flc;
@@ -1937,9 +1919,11 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
session->ctxt = priv;
+#ifdef CAAM_DESC_DEBUG
+ int i;
for (i = 0; i < bufsize; i++)
DPAA2_SEC_DEBUG("DESC[%d]:0x%x", i, priv->flc_desc[0].desc[i]);
-
+#endif
return 0;
error_out:
@@ -1955,7 +1939,7 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
{
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata;
- int bufsize, i;
+ int bufsize;
struct ctxt_priv *priv;
struct sec_flow_context *flc;
@@ -2099,10 +2083,12 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
session->ctxt = priv;
+#ifdef CAAM_DESC_DEBUG
+ int i;
for (i = 0; i < bufsize; i++)
DPAA2_SEC_DEBUG("DESC[%d]:0x%x",
i, priv->flc_desc[DESC_INITFINAL].desc[i]);
-
+#endif
return 0;
@@ -2120,7 +2106,7 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo aeaddata;
- int bufsize, i;
+ int bufsize;
struct ctxt_priv *priv;
struct sec_flow_context *flc;
struct rte_crypto_aead_xform *aead_xform = &xform->aead;
@@ -2218,10 +2204,12 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
session->ctxt = priv;
+#ifdef CAAM_DESC_DEBUG
+ int i;
for (i = 0; i < bufsize; i++)
DPAA2_SEC_DEBUG("DESC[%d]:0x%x\n",
i, priv->flc_desc[0].desc[i]);
-
+#endif
return 0;
error_out:
@@ -2239,7 +2227,7 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
struct dpaa2_sec_aead_ctxt *ctxt = &session->ext_params.aead_ctxt;
struct dpaa2_sec_dev_private *dev_priv = dev->data->dev_private;
struct alginfo authdata, cipherdata;
- int bufsize, i;
+ int bufsize;
struct ctxt_priv *priv;
struct sec_flow_context *flc;
struct rte_crypto_cipher_xform *cipher_xform;
@@ -2444,9 +2432,12 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
flc->word1_sdl = (uint8_t)bufsize;
session->ctxt = priv;
+#ifdef CAAM_DESC_DEBUG
+ int i;
for (i = 0; i < bufsize; i++)
DPAA2_SEC_DEBUG("DESC[%d]:0x%x",
i, priv->flc_desc[0].desc[i]);
+#endif
return 0;
--
2.17.1
* Re: [dpdk-dev] [PATCH v3 00/24] crypto/dpaaX_sec: Support Wireless algos
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
` (23 preceding siblings ...)
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 24/24] crypto/dpaa2_sec: improve debug logging Akhil Goyal
@ 2019-09-30 20:08 ` Akhil Goyal
24 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2019-09-30 20:08 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: aconole, anoobj
>
> PDCP protocol offload using rte_security are supported in
> dpaa2_sec and dpaa_sec drivers.
>
> Wireless algos(SNOW/ZUC) without protocol offload are also
> supported as per crypto APIs.
>
> changes in v3:
> - fix meson build
> - fix checkpatch warnings
> - include dependent patches(last 4) which were sent separately.
> CI was failing due to apply issues.
>
> changes in V2:
> - fix clang build
> - enable zuc authentication
> - minor fixes
>
>
> Akhil Goyal (6):
> security: add hfn override option in PDCP
> drivers/crypto: support hfn override for NXP PMDs
> crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth
> crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth
> crypto/dpaa_sec: support scatter gather for PDCP
> crypto/dpaa_sec: change per cryptodev pool to per qp
>
> Hemant Agrawal (7):
> crypto/dpaa2_sec: support CAAM HW era 10
> crypto/dpaa2_sec: support scatter gather for proto offloads
> crypto/dpaa2_sec: support snow3g cipher/integrity
> crypto/dpaa2_sec: support zuc ciphering/integrity
> crypto/dpaa2_sec: allocate context as per num segs
> crypto/dpaa_sec: dynamic contxt buffer for SG cases
> crypto/dpaa2_sec: improve debug logging
>
> Vakul Garg (11):
> drivers/crypto: support PDCP 12-bit c-plane processing
> drivers/crypto: support PDCP u-plane with integrity
> crypto/dpaa2_sec: disable 'write-safe' for PDCP
> crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases
> crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP
> crypto/dpaa2_sec/hw: support zuc-zuc 18-bit PDCP
> crypto/dpaa2_sec/hw: support snow-snow 18-bit PDCP
> crypto/dpaa2_sec/hw: support snow-f8
> crypto/dpaa2_sec/hw: support snow-f9
> crypto/dpaa2_sec/hw: support kasumi
> crypto/dpaa2_sec/hw: support ZUCE and ZUCA
>
> drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 625 ++++++--
> drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h | 4 +-
> drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 96 +-
> drivers/crypto/dpaa2_sec/hw/desc.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/desc/algo.h | 295 +++-
> drivers/crypto/dpaa2_sec/hw/desc/pdcp.h | 1387 ++++++++++++++---
> .../dpaa2_sec/hw/rta/fifo_load_store_cmd.h | 9 +-
> drivers/crypto/dpaa2_sec/hw/rta/header_cmd.h | 21 +-
> drivers/crypto/dpaa2_sec/hw/rta/jump_cmd.h | 3 +-
> drivers/crypto/dpaa2_sec/hw/rta/key_cmd.h | 5 +-
> drivers/crypto/dpaa2_sec/hw/rta/load_cmd.h | 10 +-
> drivers/crypto/dpaa2_sec/hw/rta/math_cmd.h | 12 +-
> drivers/crypto/dpaa2_sec/hw/rta/move_cmd.h | 8 +-
> drivers/crypto/dpaa2_sec/hw/rta/nfifo_cmd.h | 10 +-
> .../crypto/dpaa2_sec/hw/rta/operation_cmd.h | 6 +-
> .../crypto/dpaa2_sec/hw/rta/protocol_cmd.h | 11 +-
> .../dpaa2_sec/hw/rta/sec_run_time_asm.h | 27 +-
> .../dpaa2_sec/hw/rta/seq_in_out_ptr_cmd.h | 7 +-
> drivers/crypto/dpaa2_sec/hw/rta/store_cmd.h | 6 +-
> drivers/crypto/dpaa_sec/dpaa_sec.c | 361 +++--
> drivers/crypto/dpaa_sec/dpaa_sec.h | 16 +-
> lib/librte_security/rte_security.h | 11 +-
> 22 files changed, 2296 insertions(+), 642 deletions(-)
>
> --
> 2.17.1
Applied to dpdk-next-crypto
Thread overview: 75+ messages
2019-09-02 12:17 [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 01/20] drivers/crypto: Support PDCP 12-bit c-plane processing Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 02/20] drivers/crypto: Support PDCP u-plane with integrity Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 03/20] security: add hfn override option in PDCP Akhil Goyal
2019-09-19 15:31 ` Akhil Goyal
2019-09-24 11:36 ` Ananyev, Konstantin
2019-09-25 7:18 ` Anoob Joseph
2019-09-27 15:06 ` Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 04/20] crypto/dpaaX_sec: update dpovrd for hfn override " Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 06/20] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for null auth for ERA8 Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 08/20] crypto/dpaa_sec: support scatter gather for pdcp Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 11/20] crypto/dpaa2_sec/hw: Support 18-bit PDCP enc-auth cases Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 12/20] crypto/dpaa2_sec/hw: Support aes-aes 18-bit PDCP Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 13/20] crypto/dpaa2_sec/hw: Support zuc-zuc " Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 14/20] crypto/dpaa2_sec/hw: Support snow-snow " Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 15/20] crypto/dpaa2_sec/hw: Support snow-f8 Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 16/20] crypto/dpaa2_sec/hw: Support snow-f9 Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 17/20] crypto/dpaa2_sec: Support snow3g cipher/integrity Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 18/20] crypto/dpaa2_sec/hw: Support kasumi Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 19/20] crypto/dpaa2_sec/hw: Support zuc cipher/integrity Akhil Goyal
2019-09-02 12:17 ` [dpdk-dev] [PATCH 20/20] crypto/dpaa2_sec: Support zuc ciphering Akhil Goyal
2019-09-03 14:39 ` [dpdk-dev] [PATCH 00/20] crypto/dpaaX_sec: Support Wireless algos Aaron Conole
2019-09-03 14:42 ` Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 " Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 01/20] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 02/20] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 03/20] security: add hfn override option in PDCP Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 04/20] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 05/20] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 06/20] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 07/20] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 08/20] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 09/20] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 10/20] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 11/20] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 12/20] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 13/20] crypto/dpaa2_sec/hw: support zuc-zuc " Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 14/20] crypto/dpaa2_sec/hw: support snow-snow " Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 15/20] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 16/20] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 17/20] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 18/20] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 19/20] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
2019-09-30 11:52 ` [dpdk-dev] [PATCH v2 20/20] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
2019-09-30 13:48 ` [dpdk-dev] [PATCH v2 00/20] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 00/24] " Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 01/24] drivers/crypto: support PDCP 12-bit c-plane processing Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 02/24] drivers/crypto: support PDCP u-plane with integrity Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 03/24] security: add hfn override option in PDCP Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 04/24] drivers/crypto: support hfn override for NXP PMDs Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 05/24] crypto/dpaa2_sec: update desc for pdcp 18bit enc-auth Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 06/24] crypto/dpaa2_sec: support CAAM HW era 10 Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 07/24] crypto/dpaa2_sec/hw: update 12bit SN desc for NULL auth Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 08/24] crypto/dpaa_sec: support scatter gather for PDCP Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 09/24] crypto/dpaa2_sec: support scatter gather for proto offloads Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 10/24] crypto/dpaa2_sec: disable 'write-safe' for PDCP Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 11/24] crypto/dpaa2_sec/hw: support 18-bit PDCP enc-auth cases Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 12/24] crypto/dpaa2_sec/hw: support aes-aes 18-bit PDCP Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 13/24] crypto/dpaa2_sec/hw: support zuc-zuc " Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 14/24] crypto/dpaa2_sec/hw: support snow-snow " Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 15/24] crypto/dpaa2_sec/hw: support snow-f8 Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 16/24] crypto/dpaa2_sec/hw: support snow-f9 Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 17/24] crypto/dpaa2_sec: support snow3g cipher/integrity Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 18/24] crypto/dpaa2_sec/hw: support kasumi Akhil Goyal
2019-09-30 14:40 ` [dpdk-dev] [PATCH v3 19/24] crypto/dpaa2_sec/hw: support ZUCE and ZUCA Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 20/24] crypto/dpaa2_sec: support zuc ciphering/integrity Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 21/24] crypto/dpaa2_sec: allocate context as per num segs Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 22/24] crypto/dpaa_sec: dynamic contxt buffer for SG cases Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 23/24] crypto/dpaa_sec: change per cryptodev pool to per qp Akhil Goyal
2019-09-30 14:41 ` [dpdk-dev] [PATCH v3 24/24] crypto/dpaa2_sec: improve debug logging Akhil Goyal
2019-09-30 20:08 ` [dpdk-dev] [PATCH v3 00/24] crypto/dpaaX_sec: Support Wireless algos Akhil Goyal