* [PATCH 00/24] Fixes and improvements in crypto cnxk
@ 2023-12-21 12:35 Anoob Joseph
2023-12-21 12:35 ` [PATCH 01/24] common/cnxk: fix memory leak Anoob Joseph
` (24 more replies)
0 siblings, 25 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add the following features:
- TLS record processing offload (TLS 1.2-1.3, DTLS 1.2)
- Rx inject to allow lookaside packets to be injected to ethdev Rx
- Use PDCP_CHAIN opcode instead of PDCP opcode for cipher-only and
  auth-only cases
Aakash Sasidharan (1):
crypto/cnxk: enable digest gen for zero len input
Akhil Goyal (1):
common/cnxk: fix memory leak
Anoob Joseph (6):
crypto/cnxk: use common macro
crypto/cnxk: return microcode completion code
common/cnxk: update opad-ipad gen to handle TLS
common/cnxk: add TLS record contexts
crypto/cnxk: separate IPsec from security common code
crypto/cnxk: add PMD APIs for raw submission to CPT
Gowrishankar Muthukrishnan (1):
crypto/cnxk: fix ECDH pubkey verify in cn9k
Rahul Bhansali (2):
common/cnxk: add Rx inject configs
crypto/cnxk: Rx inject config update
Tejasree Kondoj (3):
crypto/cnxk: fallback to SG if headroom is not available
crypto/cnxk: replace PDCP with PDCP chain opcode
crypto/cnxk: add CPT SG mode debug
Vidya Sagar Velumuri (10):
crypto/cnxk: enable Rx inject in security lookaside
crypto/cnxk: enable Rx inject for 103
crypto/cnxk: rename security caps as IPsec security caps
crypto/cnxk: add TLS record session ops
crypto/cnxk: add TLS record datapath handling
crypto/cnxk: add TLS capability
crypto/cnxk: validate the combinations supported in TLS
crypto/cnxk: use a single function for opad ipad
crypto/cnxk: add support for TLS 1.3
crypto/cnxk: add TLS 1.3 capability
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/cryptodevs/cnxk.rst | 12 +
doc/guides/rel_notes/release_24_03.rst | 6 +
drivers/common/cnxk/cnxk_security.c | 65 +-
drivers/common/cnxk/cnxk_security.h | 15 +-
drivers/common/cnxk/hw/cpt.h | 12 +-
drivers/common/cnxk/roc_cpt.c | 14 +-
drivers/common/cnxk/roc_cpt.h | 7 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_idev.c | 44 +
drivers/common/cnxk/roc_idev.h | 5 +
drivers/common/cnxk/roc_idev_priv.h | 6 +
drivers/common/cnxk/roc_ie_ot.c | 14 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 225 +++++
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix.c | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/common/cnxk/roc_se.c | 379 +++-----
drivers/common/cnxk/roc_se.h | 38 +-
drivers/common/cnxk/version.map | 5 +
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 400 ++++++++-
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 11 +
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 134 +++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 68 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 134 +--
drivers/crypto/cnxk/cn10k_ipsec.h | 38 +-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 19 +-
drivers/crypto/cnxk/cn10k_tls.c | 830 ++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 +++++++
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 68 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 16 +-
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 24 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 375 +++++++-
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 128 ++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 +
drivers/crypto/cnxk/cnxk_se.h | 98 +--
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
drivers/crypto/cnxk/meson.build | 4 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +
drivers/crypto/cnxk/version.map | 3 +
47 files changed, 3015 insertions(+), 706 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 01/24] common/cnxk: fix memory leak
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 02/24] crypto/cnxk: use common macro Anoob Joseph
` (23 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Akhil Goyal <gakhil@marvell.com>
dev_init() acquires resources which need to be cleaned up
in case a failure is observed afterwards.
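The fix follows the usual goto-cleanup error-handling pattern: once dev_init() has succeeded, any later failure path must release what it acquired via dev_fini(). A minimal self-contained sketch of the pattern, with mock resource functions standing in for the actual roc_cpt code:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock resource standing in for what dev_init()/dev_fini() manage. */
static bool dev_initialized;

static int dev_init(void)  { dev_initialized = true;  return 0; }
static void dev_fini(void) { dev_initialized = false; }

/* Later setup step that may fail (controlled by the caller here). */
static int lf_setup(bool fail) { return fail ? -1 : 0; }

static int roc_dev_init(bool fail_later)
{
	int rc;

	rc = dev_init();
	if (rc)
		return rc; /* nothing acquired yet: plain return, no cleanup */

	rc = lf_setup(fail_later);
	if (rc)
		goto fail;

	return 0;

fail:
	dev_fini(); /* the fix: release what dev_init() acquired */
	return rc;
}
```

Before the fix, the first failure path jumped to the cleanup label without having anything to clean, while later failures leaked the dev_init() resources; the patch makes the label do the dev_fini() and the pre-init failure return directly.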
Fixes: c045d2e5cbbc ("common/cnxk: add CPT configuration")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/common/cnxk/roc_cpt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 981e85a204..4e23d8c135 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -756,7 +756,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
rc = dev_init(dev, pci_dev);
if (rc) {
plt_err("Failed to init roc device");
- goto fail;
+ return rc;
}
cpt->pci_dev = pci_dev;
@@ -788,6 +788,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
return 0;
fail:
+ dev_fini(dev, pci_dev);
return rc;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 02/24] crypto/cnxk: use common macro
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2023-12-21 12:35 ` [PATCH 01/24] common/cnxk: fix memory leak Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
` (22 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Having different macros for the same purpose may cause issues if one is
updated without updating the other. Use the same macro by including the
common header.
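The point of the change is that the two arrays in the VF structure are sized from a single shared constant rather than a locally duplicated one, so they cannot drift apart. A trivial sketch (the constant name mirrors ROC_AE_EC_ID_PMAX from roc_ae.h; the struct is a mock, not the real cnxk_cpt_vf):

```c
#include <assert.h>
#include <stddef.h>

/* Shared header constant (stands in for ROC_AE_EC_ID_PMAX in roc_ae.h).
 * Both arrays below must stay the same size, so size them from one macro
 * instead of redefining it locally. */
#define ROC_AE_EC_ID_PMAX 9

struct vf_state {
	unsigned long fpm_iova[ROC_AE_EC_ID_PMAX];
	void *ec_grp[ROC_AE_EC_ID_PMAX];
};
```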
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index d0ad881f2f..f5374131bf 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -8,12 +8,12 @@
#include <rte_cryptodev.h>
#include <rte_security.h>
+#include "roc_ae.h"
#include "roc_cpt.h"
#define CNXK_CPT_MAX_CAPS 55
#define CNXK_SEC_CRYPTO_MAX_CAPS 16
#define CNXK_SEC_MAX_CAPS 9
-#define CNXK_AE_EC_ID_MAX 9
/**
* Device private data
*/
@@ -23,8 +23,8 @@ struct cnxk_cpt_vf {
struct rte_cryptodev_capabilities
sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
- uint64_t cnxk_fpm_iova[CNXK_AE_EC_ID_MAX];
- struct roc_ae_ec_group *ec_grp[CNXK_AE_EC_ID_MAX];
+ uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
+ struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
};
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 03/24] crypto/cnxk: fallback to SG if headroom is not available
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2023-12-21 12:35 ` [PATCH 01/24] common/cnxk: fix memory leak Anoob Joseph
2023-12-21 12:35 ` [PATCH 02/24] crypto/cnxk: use common macro Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
` (21 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Fall back to SG mode for cn9k lookaside IPsec
if headroom is not available.
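The rework keeps the fast direct path only when the packet is a single segment and has enough headroom for the header; a headroom shortage now falls through to the SG path instead of failing with -ENOMEM, while a tailroom shortage on the direct path remains a hard error. A self-contained sketch of the decision, using mock mbuf fields rather than the real rte_mbuf:

```c
#include <assert.h>
#include <stddef.h>

/* Mock of the fields the check looks at. */
struct mock_mbuf {
	void *next;   /* non-NULL means multi-segment */
	int data_off; /* headroom available before the data */
	int tailroom; /* room after the data */
};

enum path { PATH_DIRECT, PATH_SG, PATH_ERR };

/* Mirror of the reworked condition: direct mode needs a single segment
 * AND enough headroom; everything else goes through scatter-gather. */
static enum path pick_path(const struct mock_mbuf *m, int hdr_len, int extend_tail)
{
	if (m->next == NULL && hdr_len <= m->data_off) {
		if (extend_tail > m->tailroom)
			return PATH_ERR; /* not enough tail room */
		return PATH_DIRECT;
	}
	return PATH_SG; /* previously, short headroom returned -ENOMEM here */
}
```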
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 85aacb803f..3d0db72775 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -82,19 +82,13 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
extend_tail = rlen - dlen;
pkt_len += extend_tail;
- if (likely(m_src->next == NULL)) {
+ if (likely((m_src->next == NULL) && (hdr_len <= data_off))) {
if (unlikely(extend_tail > rte_pktmbuf_tailroom(m_src))) {
plt_dp_err("Not enough tail room (required: %d, available: %d)",
extend_tail, rte_pktmbuf_tailroom(m_src));
return -ENOMEM;
}
- if (unlikely(hdr_len > data_off)) {
- plt_dp_err("Not enough head room (required: %d, available: %d)", hdr_len,
- rte_pktmbuf_headroom(m_src));
- return -ENOMEM;
- }
-
m_src->data_len = pkt_len;
hdr = PLT_PTR_ADD(m_src->buf_addr, data_off - hdr_len);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 04/24] crypto/cnxk: return microcode completion code
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (2 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
` (20 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Return the microcode completion code in case of errors. This allows
applications to check failure reasons with more granularity.
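With this change, an application that sees an error status on a dequeued op can inspect its aux_flags field for the raw microcode completion code. A minimal mock of the PMD-side logic (simplified stand-ins for the rte_crypto_op structure and status values, not the actual driver types):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for rte_crypto_op status handling. */
enum op_status { OP_SUCCESS, OP_AUTH_FAILED, OP_ERROR };

struct mock_op {
	enum op_status status;
	uint8_t aux_flags; /* microcode completion code on error */
};

/* What the PMD now does for a non-zero completion code it does not
 * map to a more specific status. */
static void post_process(struct mock_op *op, uint8_t uc_compcode)
{
	if (uc_compcode == 0) {
		op->status = OP_SUCCESS;
		return;
	}
	op->status = OP_ERROR;
	op->aux_flags = uc_compcode; /* expose the reason to the app */
}
```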
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 997110e3d3..bef7b75810 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -823,6 +823,7 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
break;
default:
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
return;
}
@@ -884,6 +885,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
plt_dp_info("Request failed with microcode error");
plt_dp_info("MC completion code 0x%x",
res->uc_compcode);
+ cop->aux_flags = uc_compcode;
goto temp_sess_free;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (3 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
` (19 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal
Cc: Gowrishankar Muthukrishnan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Fix ECDH public key verify in cn9k: report success when the microcode
returns the point-at-infinity code and an error when the point is not
on the curve, instead of treating both as generic failures.
Fixes: baae0994fa96 ("crypto/cnxk: support ECDH")
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 34d40b07d4..442cd8e5a9 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -578,7 +578,17 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
if (unlikely(res->uc_compcode)) {
if (res->uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
+ else if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION &&
+ cop->asym->ecdh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY) {
+ if (res->uc_compcode == ROC_AE_ERR_ECC_POINT_NOT_ON_CURVE) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ } else if (res->uc_compcode == ROC_AE_ERR_ECC_PAI) {
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ return;
+ }
+ } else
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
plt_dp_info("Request failed with microcode error");
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 06/24] crypto/cnxk: enable digest gen for zero len input
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (4 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
` (18 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal
Cc: Aakash Sasidharan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Aakash Sasidharan <asasidharan@marvell.com>
With zero-length input, digest generation produces an incorrect
value. Fix this by completely avoiding the gather component
when the input packet has zero data length.
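The fix extends the existing NULL-packet guard so that a zero-length mbuf also yields an empty gather list rather than a one-entry list describing zero bytes. Sketched with mock types in place of rte_mbuf and roc_se_iov_ptr:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct mock_mbuf { uint32_t data_len; };
struct mock_iovec { int buf_cnt; };

/* Mirror of the reworked guard in prepare_iov_from_pkt(): treat a
 * zero-length packet the same as no packet at all. */
static int prepare_iov(const struct mock_mbuf *pkt, struct mock_iovec *iovec)
{
	if (pkt == NULL || pkt->data_len == 0) {
		iovec->buf_cnt = 0; /* no gather component for empty input */
		return 0;
	}
	iovec->buf_cnt = 1; /* real code fills per-segment entries here */
	return 0;
}
```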
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
---
drivers/crypto/cnxk/cnxk_se.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..1aec7dea9f 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2479,7 +2479,7 @@ prepare_iov_from_pkt(struct rte_mbuf *pkt, struct roc_se_iov_ptr *iovec, uint32_
void *seg_data = NULL;
int32_t seg_size = 0;
- if (!pkt) {
+ if (!pkt || pkt->data_len == 0) {
iovec->buf_cnt = 0;
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 07/24] crypto/cnxk: enable Rx inject in security lookaside
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (5 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 08/24] common/cnxk: add Rx inject configs Anoob Joseph
` (17 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add Rx inject fastpath API.
Add devargs to specify an LF to be used for Rx inject.
When the RX inject feature flag is enabled:
1. Reserve a CPT LF to use for RX Inject mode.
2. Enable RXC and disable full packet mode for that LF.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/cryptodevs/cnxk.rst | 12 ++
doc/guides/rel_notes/release_24_03.rst | 3 +
drivers/common/cnxk/hw/cpt.h | 9 ++
drivers/common/cnxk/roc_cpt.c | 11 +-
drivers/common/cnxk/roc_cpt.h | 3 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_ie_ot.c | 14 +--
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 123 +++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 8 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 4 +
drivers/crypto/cnxk/cn10k_ipsec.h | 2 +
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 3 +
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 27 +++-
drivers/crypto/cnxk/version.map | 3 +
19 files changed, 249 insertions(+), 15 deletions(-)
diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index fbe67475be..8dc745dccd 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -187,6 +187,18 @@ Runtime Config Options
With the above configuration, the number of maximum queue pairs supported
by the device is limited to 4.
+- ``LF ID for RX injection in case of fallback mechanism`` (default ``60``)
+
+ LF ID for RX Injection in the security fallback mechanism.
+ Can be configured at runtime using the ``rx_inj_lf`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:20:00.1,rx_inj_lf=20
+
+ With the above configuration, LF 20 will be used by the device for RX Injection
+ in the security fallback mechanism scenario.
+
Debugging Options
-----------------
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index e9c9717706..fa30b46ead 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -55,6 +55,9 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Marvell cnxk crypto driver.**
+
+ * Added support for Rx inject in crypto_cn10k.
Removed Items
-------------
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index cf9046bbfb..edab8a5d83 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -237,6 +237,15 @@ struct cpt_inst_s {
uint64_t doneint : 1;
uint64_t nixtx_addr : 60;
} s;
+ struct {
+ uint64_t nixtxl : 3;
+ uint64_t doneint : 1;
+ uint64_t chan : 12;
+ uint64_t l2_len : 8;
+ uint64_t et_offset : 8;
+ uint64_t match_id : 16;
+ uint64_t sso_pf_func : 16;
+ } hw_s;
uint64_t u64;
} w0;
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 4e23d8c135..38e46d65c1 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -463,7 +463,7 @@ cpt_available_lfs_get(struct dev *dev, uint16_t *nb_lf)
int
cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen)
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inj_lf)
{
struct cpt_lf_alloc_req_msg *req;
struct mbox *mbox = mbox_get(dev->mbox);
@@ -489,6 +489,10 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev
req->blkaddr = blkaddr;
req->ctx_ilen_valid = ctx_ilen_valid;
req->ctx_ilen = ctx_ilen;
+ if (rxc_ena) {
+ req->rxc_ena = 1;
+ req->rxc_ena_lf_id = rx_inj_lf;
+ }
rc = mbox_process(mbox);
exit:
@@ -586,7 +590,7 @@ cpt_iq_init(struct roc_cpt_lf *lf)
}
int
-roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
+roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena, uint16_t rx_inj_lf)
{
struct cpt *cpt = roc_cpt_to_cpt_priv(roc_cpt);
uint8_t blkaddr[ROC_CPT_MAX_BLKS];
@@ -630,7 +634,8 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
ctx_ilen = (PLT_ALIGN(ROC_OT_IPSEC_SA_SZ_MAX, ROC_ALIGN) / 128) - 1;
}
- rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen);
+ rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen,
+ rxc_ena, rx_inj_lf);
if (rc)
goto lfs_detach;
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 787bccb27d..001e71c55e 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -171,7 +171,8 @@ int __roc_api roc_cpt_dev_init(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_dev_fini(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_eng_grp_add(struct roc_cpt *roc_cpt,
enum cpt_eng_type eng_type);
-int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf);
+int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena,
+ uint16_t rx_inj_lf);
void __roc_api roc_cpt_dev_clear(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_lf_init(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf);
void __roc_api roc_cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 4ed87c857b..fa4986d671 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -22,7 +22,7 @@ int cpt_lfs_attach(struct dev *dev, uint8_t blkaddr, bool modify,
uint16_t nb_lf);
int cpt_lfs_detach(struct dev *dev);
int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blk, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen);
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inj_lf);
int cpt_lfs_free(struct dev *dev);
int cpt_lf_init(struct roc_cpt_lf *lf);
void cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad38f1..465b2bc1fb 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -12,13 +12,13 @@ roc_ot_ipsec_inb_sa_init(struct roc_ot_ipsec_inb_sa *sa, bool is_inline)
memset(sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
- if (is_inline) {
- sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
- sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
- sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
- sa->w0.s.et_ovrwr = 1;
- sa->w2.s.l3hdr_on_err = 1;
- }
+ sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
+ sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
+ sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
+ sa->w0.s.et_ovrwr = 1;
+ sa->w2.s.l3hdr_on_err = 1;
+
+ PLT_SET_USED(is_inline);
offset = offsetof(struct roc_ot_ipsec_inb_sa, ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 05434aec5a..0ad8b738c6 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -2022,6 +2022,8 @@ struct cpt_lf_alloc_req_msg {
uint8_t __io blkaddr;
uint8_t __io ctx_ilen_valid : 1;
uint8_t __io ctx_ilen : 7;
+ uint8_t __io rxc_ena : 1;
+ uint8_t __io rxc_ena_lf_id : 7;
};
#define CPT_INLINE_INBOUND 0
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 750fd08355..07a90133ca 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -986,7 +986,7 @@ roc_nix_inl_outb_init(struct roc_nix *roc_nix)
1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr,
- !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen);
+ !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
goto lf_detach;
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index dc1306c093..f6991de051 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -194,7 +194,7 @@ nix_inl_cpt_setup(struct nix_inl_dev *inl_dev, bool inl_dev_sso)
}
rc = cpt_lfs_alloc(dev, eng_grpmask, RVU_BLOCK_ADDR_CPT0, inl_dev_sso, ctx_ilen_valid,
- ctx_ilen);
+ ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
return rc;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index bef7b75810..c86d47239b 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -7,6 +7,8 @@
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
+#include <ethdev_driver.h>
+
#include "roc_cpt.h"
#if defined(__aarch64__)
#include "roc_io.h"
@@ -1057,6 +1059,103 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
return i;
}
+uint16_t __rte_hot
+cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ uint16_t l2_len, pf_func, lmt_id, count = 0;
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cn10k_sec_session *sec_sess;
+ struct rte_cryptodev *cdev = dev;
+ union cpt_res_s *hw_res = NULL;
+ struct cpt_inst_s *inst;
+ struct cnxk_cpt_vf *vf;
+ struct rte_mbuf *m;
+ uint64_t dptr;
+ int i;
+
+ const union cpt_res_s res = {
+ .cn10k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ vf = cdev->data->dev_private;
+
+ lmt_base = vf->rx_inj_lmtline.lmt_base;
+ io_addr = vf->rx_inj_lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ pf_func = vf->rx_inj_pf_func;
+
+again:
+ inst = (struct cpt_inst_s *)lmt_base;
+ for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+
+ m = pkts[i];
+ sec_sess = (struct cn10k_sec_session *)sess[i];
+
+ if (unlikely(rte_pktmbuf_headroom(m) < 32)) {
+ plt_dp_err("No space for CPT res_s");
+ break;
+ }
+
+ if (unlikely(!rte_pktmbuf_is_contiguous(m))) {
+ plt_dp_err("Multi seg is not supported");
+ break;
+ }
+
+ l2_len = m->l2_len;
+
+ *rte_security_dynfield(m) = (uint64_t)sec_sess->userdata;
+
+ hw_res = rte_pktmbuf_mtod(m, void *);
+ hw_res = RTE_PTR_SUB(hw_res, 32);
+ hw_res = RTE_PTR_ALIGN_CEIL(hw_res, 16);
+
+ /* Prepare CPT instruction */
+ inst->w0.u64 = 0;
+ inst->w2.u64 = 0;
+ inst->w2.s.rvu_pf_func = pf_func;
+ inst->w3.u64 = (((uint64_t)m + sizeof(struct rte_mbuf)) >> 3) << 3 | 1;
+
+ inst->w4.u64 = sec_sess->inst.w4 | (rte_pktmbuf_pkt_len(m));
+ dptr = (uint64_t)rte_pktmbuf_iova(m);
+ inst->dptr = dptr;
+ inst->rptr = dptr;
+
+ inst->w0.hw_s.l2_len = l2_len;
+ inst->w0.hw_s.et_offset = l2_len - 2;
+
+ inst->res_addr = (uint64_t)hw_res;
+ rte_atomic_store_explicit(&hw_res->u64[0], res.u64[0], rte_memory_order_relaxed);
+
+ inst->w7.u64 = sec_sess->inst.w7;
+
+ inst += 2;
+ }
+
+ if (i > PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ nb_pkts -= i;
+ pkts += i;
+ count += i;
+ goto again;
+ }
+
+ return count + i;
+}
+
void
cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf)
{
@@ -1535,6 +1634,30 @@ cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
return 0;
}
+int
+cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
+{
+ struct rte_cryptodev *crypto_dev = device;
+ struct rte_eth_dev *eth_dev;
+ int ret;
+
+ if (!rte_eth_dev_is_valid_port(port_id))
+ return -EINVAL;
+
+ if (!(crypto_dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT))
+ return -ENOTSUP;
+
+ eth_dev = &rte_eth_devices[port_id];
+
+ ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+ if (ret)
+ return -ENOTSUP;
+
+ RTE_SET_USED(enable);
+
+ return 0;
+}
+
struct rte_cryptodev_ops cn10k_cpt_ops = {
/* Device control ops */
.dev_configure = cnxk_cpt_dev_config,
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index befbfcdfad..34becede3c 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -16,6 +16,14 @@ extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
+__rte_internal
+uint16_t __rte_hot cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess,
+ uint16_t nb_pkts);
+
+__rte_internal
+int cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable);
+
__rte_internal
uint16_t __rte_hot cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[],
uint16_t nb_events);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ffd3f50eed..2d098fdd24 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -10,6 +10,7 @@
#include <rte_security_driver.h>
#include <rte_udp.h>
+#include "cn10k_cryptodev_ops.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -297,6 +298,7 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
return -ENOTSUP;
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
return cn10k_ipsec_session_create(device, &conf->ipsec,
conf->crypto_xform, sess);
}
@@ -458,4 +460,6 @@ cn10k_sec_ops_override(void)
cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 8a93d74062..03ac994001 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -38,6 +38,8 @@ struct cn10k_sec_session {
bool is_outbound;
/** Queue pair */
struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
/**
* End of SW mutable area
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 4819a14184..b1684e56a7 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,6 +24,9 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
+ if (roc_model_is_cn10ka_b0())
+ ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
+
return ff;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index f5374131bf..fedae53736 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -18,6 +18,8 @@
* Device private data
*/
struct cnxk_cpt_vf {
+ struct roc_cpt_lmtline rx_inj_lmtline;
+ uint16_t rx_inj_pf_func;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
@@ -26,6 +28,7 @@ struct cnxk_cpt_vf {
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
+ uint16_t rx_inj_lf;
};
uint64_t cnxk_cpt_default_ff_get(void);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
index c3e9bdb2d1..f5a76d83ed 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
@@ -9,6 +9,23 @@
#define CNXK_MAX_QPS_LIMIT "max_qps_limit"
#define CNXK_MAX_QPS_LIMIT_MIN 1
#define CNXK_MAX_QPS_LIMIT_MAX (ROC_CPT_MAX_LFS - 1)
+#define CNXK_RX_INJ_LF "rx_inj_lf"
+
+static int
+parse_rx_inj_lf(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val < CNXK_MAX_QPS_LIMIT_MIN || val > CNXK_MAX_QPS_LIMIT_MAX)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
static int
parse_max_qps_limit(const char *key, const char *value, void *extra_args)
@@ -31,8 +48,12 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
{
uint16_t max_qps_limit = CNXK_MAX_QPS_LIMIT_MAX;
struct rte_kvargs *kvlist;
+ uint16_t rx_inj_lf;
int rc;
+ /* Set to max value as default so that the feature is disabled by default. */
+ rx_inj_lf = CNXK_MAX_QPS_LIMIT_MAX;
+
if (devargs == NULL)
goto null_devargs;
@@ -48,10 +69,20 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
rte_kvargs_free(kvlist);
goto exit;
}
+
+ rc = rte_kvargs_process(kvlist, CNXK_RX_INJ_LF, parse_rx_inj_lf, &rx_inj_lf);
+ if (rc < 0) {
+ plt_err("rx_inj_lf should be in the range <%d-%d>", CNXK_MAX_QPS_LIMIT_MIN,
+ max_qps_limit - 1);
+ rte_kvargs_free(kvlist);
+ goto exit;
+ }
+
rte_kvargs_free(kvlist);
null_devargs:
vf->max_qps_limit = max_qps_limit;
+ vf->rx_inj_lf = rx_inj_lf;
return 0;
exit:
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 82938c77c8..c0733ddbfb 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -5,6 +5,7 @@
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_errno.h>
+#include <rte_security_driver.h>
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
@@ -95,6 +96,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
uint16_t nb_lf_avail, nb_lf;
+ bool rxc_ena = false;
int ret;
/* If this is a reconfigure attempt, clear the device and configure again */
@@ -111,7 +113,13 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (nb_lf > nb_lf_avail)
return -ENOTSUP;
- ret = roc_cpt_dev_configure(roc_cpt, nb_lf);
+ if (dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) {
+ if (rte_security_dynfield_register() < 0)
+ return -ENOTSUP;
+ rxc_ena = true;
+ }
+
+ ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inj_lf);
if (ret) {
plt_err("Could not configure device");
return ret;
@@ -208,6 +216,10 @@ cnxk_cpt_dev_info_get(struct rte_cryptodev *dev,
info->sym.max_nb_sessions = 0;
info->min_mbuf_headroom_req = CNXK_CPT_MIN_HEADROOM_REQ;
info->min_mbuf_tailroom_req = CNXK_CPT_MIN_TAILROOM_REQ;
+
+ /* Disable Rx inject if the LF reserved for it is beyond the available queue pairs. */
+ if (vf->rx_inj_lf > info->max_nb_queue_pairs)
+ info->feature_flags &= ~RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
}
static void
@@ -452,6 +464,19 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
qp->sess_mp = conf->mp_session;
dev->data->queue_pairs[qp_id] = qp;
+ if (qp_id == vf->rx_inj_lf) {
+ ret = roc_cpt_lmtline_init(roc_cpt, &vf->rx_inj_lmtline, vf->rx_inj_lf);
+ if (ret) {
+ plt_err("Could not init lmtline for Rx inject");
+ goto exit;
+ }
+
+ vf->rx_inj_pf_func = qp->lf.pf_func;
+
+ /* Block the queue for other submissions */
+ qp->pend_q.pq_mask = 0;
+ }
+
return 0;
exit:
diff --git a/drivers/crypto/cnxk/version.map b/drivers/crypto/cnxk/version.map
index d13209feec..5789a6bfc9 100644
--- a/drivers/crypto/cnxk/version.map
+++ b/drivers/crypto/cnxk/version.map
@@ -8,5 +8,8 @@ INTERNAL {
cn10k_cpt_crypto_adapter_dequeue;
cn10k_cpt_crypto_adapter_vector_dequeue;
+ cn10k_cryptodev_sec_inb_rx_inject;
+ cn10k_cryptodev_sec_rx_inject_configure;
+
local: *;
};
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
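The `parse_rx_inj_lf` callback added above follows the usual `rte_kvargs` pattern: parse the string value, range-check it, and write the result through `extra_args`. A standalone sketch of that pattern (the names and bounds here are illustrative, not the driver's actual symbols):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define QPS_MIN 1
#define QPS_MAX 63 /* stand-in for ROC_CPT_MAX_LFS - 1 */

/* kvargs-style callback: parse a bounded integer devarg into extra_args.
 * Returns 0 on success, -EINVAL when the value is out of range. */
static int
parse_bounded_u16(const char *key, const char *value, void *extra_args)
{
	long val;

	(void)key;

	val = strtol(value, NULL, 10);
	if (val < QPS_MIN || val > QPS_MAX)
		return -EINVAL;

	*(uint16_t *)extra_args = (uint16_t)val;
	return 0;
}
```

In the driver such a callback would be passed to `rte_kvargs_process()` keyed on `"rx_inj_lf"`, with the destination variable pre-set to the out-of-range default so the feature stays off when the devarg is absent.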
* [PATCH 08/24] common/cnxk: add Rx inject configs
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (6 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
` (16 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
Add Rx inject config for feature enable/disable, and store
Rx chan value per port.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/common/cnxk/roc_idev.c | 44 +++++++++++++++++++++++++++++
drivers/common/cnxk/roc_idev.h | 5 ++++
drivers/common/cnxk/roc_idev_priv.h | 6 ++++
drivers/common/cnxk/roc_nix.c | 2 ++
drivers/common/cnxk/version.map | 4 +++
5 files changed, 61 insertions(+)
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index e6c6b34d78..48df3518b0 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -310,3 +310,47 @@ roc_idev_nix_inl_meta_aura_get(void)
return idev->inl_cfg.meta_aura;
return 0;
}
+
+uint8_t
+roc_idev_nix_rx_inject_get(uint16_t port)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ return idev->inl_rx_inj_cfg.rx_inject_en[port];
+
+ return 0;
+}
+
+void
+roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.rx_inject_en[port], enable,
+ __ATOMIC_RELEASE);
+}
+
+uint16_t *
+roc_idev_nix_rx_chan_base_get(void)
+{
+ struct idev_cfg *idev = idev_get_cfg();
+
+ if (idev != NULL)
+ return (uint16_t *)&idev->inl_rx_inj_cfg.chan;
+
+ return NULL;
+}
+
+void
+roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.chan[port], chan, __ATOMIC_RELEASE);
+}
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
index aea7f5279d..00664eaed6 100644
--- a/drivers/common/cnxk/roc_idev.h
+++ b/drivers/common/cnxk/roc_idev.h
@@ -22,4 +22,9 @@ struct roc_nix_list *__roc_api roc_idev_nix_list_get(void);
struct roc_mcs *__roc_api roc_idev_mcs_get(uint8_t mcs_idx);
void __roc_api roc_idev_mcs_set(struct roc_mcs *mcs);
void __roc_api roc_idev_mcs_free(struct roc_mcs *mcs);
+
+uint8_t __roc_api roc_idev_nix_rx_inject_get(uint16_t port);
+void __roc_api roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable);
+uint16_t *__roc_api roc_idev_nix_rx_chan_base_get(void);
+void __roc_api roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan);
#endif /* _ROC_IDEV_H_ */
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 80f8465e1c..8dc1cb25bf 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -19,6 +19,11 @@ struct idev_nix_inl_cfg {
uint32_t refs;
};
+struct idev_nix_inl_rx_inj_cfg {
+ uint16_t chan[PLT_MAX_ETHPORTS];
+ uint8_t rx_inject_en[PLT_MAX_ETHPORTS];
+};
+
struct idev_cfg {
uint16_t sso_pf_func;
uint16_t npa_pf_func;
@@ -35,6 +40,7 @@ struct idev_cfg {
struct nix_inl_dev *nix_inl_dev;
struct idev_nix_inl_cfg inl_cfg;
struct roc_nix_list roc_nix_list;
+ struct idev_nix_inl_rx_inj_cfg inl_rx_inj_cfg;
plt_spinlock_t nix_inl_dev_lock;
plt_spinlock_t npa_dev_lock;
};
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index f64933a1d9..97c0ae3e25 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -223,6 +223,8 @@ roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq,
nix->nb_rx_queues = nb_rxq;
nix->nb_tx_queues = nb_txq;
+ roc_idev_nix_rx_chan_set(roc_nix->port_id, rsp->rx_chan_base);
+
nix->rqs = plt_zmalloc(sizeof(struct roc_nix_rq *) * nb_rxq, 0);
if (!nix->rqs) {
rc = -ENOMEM;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index aa884a8fe2..f84382c401 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -105,6 +105,10 @@ INTERNAL {
roc_idev_num_lmtlines_get;
roc_idev_nix_inl_meta_aura_get;
roc_idev_nix_list_get;
+ roc_idev_nix_rx_chan_base_get;
+ roc_idev_nix_rx_chan_set;
+ roc_idev_nix_rx_inject_get;
+ roc_idev_nix_rx_inject_set;
roc_ml_reg_read64;
roc_ml_reg_write64;
roc_ml_reg_read32;
--
2.25.1
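The setter above publishes the per-port enable flag with `__ATOMIC_RELEASE` so that a datapath reader on another core observes it safely. A minimal standalone sketch of that store/load pairing (the array size and names are illustrative; the patch's own getter reads the flag directly, while this sketch shows the conventional matching acquire load):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PORTS 32 /* stand-in for PLT_MAX_ETHPORTS */

static uint8_t rx_inject_en[MAX_PORTS];

/* Control path: publish the per-port enable flag with release ordering. */
static void
rx_inject_set(uint16_t port, uint8_t enable)
{
	if (port < MAX_PORTS)
		__atomic_store_n(&rx_inject_en[port], enable, __ATOMIC_RELEASE);
}

/* Datapath: read the flag back; out-of-range ports report disabled. */
static uint8_t
rx_inject_get(uint16_t port)
{
	if (port < MAX_PORTS)
		return __atomic_load_n(&rx_inject_en[port], __ATOMIC_ACQUIRE);
	return 0;
}
```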
* [PATCH 09/24] crypto/cnxk: Rx inject config update
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (7 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 08/24] common/cnxk: add Rx inject configs Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
` (15 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
- Update chan in CPT inst from port's Rx chan
- Set Rx inject config in Idev struct
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 4 +++-
drivers/crypto/cnxk/cn10k_ipsec.c | 3 +++
drivers/crypto/cnxk/cnxk_cryptodev.h | 1 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c86d47239b..53a33aab49 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -15,6 +15,7 @@
#else
#include "roc_io_generic.h"
#endif
+#include "roc_idev.h"
#include "roc_sso.h"
#include "roc_sso_dp.h"
@@ -1122,6 +1123,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst->dptr = dptr;
inst->rptr = dptr;
+ inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port);
inst->w0.hw_s.l2_len = l2_len;
inst->w0.hw_s.et_offset = l2_len - 2;
@@ -1653,7 +1655,7 @@ cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool ena
if (ret)
return -ENOTSUP;
- RTE_SET_USED(enable);
+ roc_idev_nix_rx_inject_set(port_id, enable);
return 0;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 2d098fdd24..d08a1067ca 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -192,6 +192,9 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->is_outbound = false;
sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ /* Save index/SPI in cookie, specifically required for Rx Inject */
+ sa_dptr->w1.s.cookie = 0xFFFFFFFF;
+
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_OT_INPLACE_BIT;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index fedae53736..2ae81d2f90 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -20,6 +20,7 @@
struct cnxk_cpt_vf {
struct roc_cpt_lmtline rx_inj_lmtline;
uint16_t rx_inj_pf_func;
+ uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index c0733ddbfb..fd44155955 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -10,6 +10,7 @@
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
#include "roc_errata.h"
+#include "roc_idev.h"
#include "roc_ie_on.h"
#include "cnxk_ae.h"
@@ -117,6 +118,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (rte_security_dynfield_register() < 0)
return -ENOTSUP;
rxc_ena = true;
+ vf->rx_chan_base = roc_idev_nix_rx_chan_base_get();
}
ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inj_lf);
--
2.25.1
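The datapath change above resolves the CPT instruction's channel from a shared per-port table (`inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port)`): the ethdev side records each port's Rx channel base at LF alloc time, and the crypto side keeps a pointer to the table. A standalone sketch of that scheme (names and sizes are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PORTS 32 /* stand-in for PLT_MAX_ETHPORTS */

static uint16_t chan_table[MAX_PORTS];

/* Ethdev side: record the port's Rx channel base when the LF is set up. */
static void
rx_chan_set(uint16_t port, uint16_t chan)
{
	if (port < MAX_PORTS)
		chan_table[port] = chan;
}

/* Crypto side: fetch the table base once at configure time. */
static uint16_t *
rx_chan_base_get(void)
{
	return chan_table;
}
```

On the hot path the channel for a given packet is then a single indexed load, `*(base + mbuf_port)`, with no per-packet lookup call.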
* [PATCH 10/24] crypto/cnxk: enable Rx inject for 103
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (8 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
` (14 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Enable Rx inject feature for 103XX
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index b1684e56a7..1eede2e59c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,7 +24,7 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
- if (roc_model_is_cn10ka_b0())
+ if (roc_model_is_cn10ka_b0() || roc_model_is_cn10kb())
ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
return ff;
--
2.25.1
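The one-line change above widens the model gate on the Rx inject feature flag. The gating logic amounts to composing a bitmask from model checks; a standalone sketch with hypothetical flag values and boolean inputs standing in for the `roc_model_is_*()` helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag bits standing in for RTE_CRYPTODEV_FF_* values. */
#define FF_INNER_CSUM (1ULL << 0)
#define FF_SYM_RAW_DP (1ULL << 1)
#define FF_RX_INJECT  (1ULL << 2)

static uint64_t
default_ff_get(int is_cn10k, int is_cn10ka_b0, int is_cn10kb)
{
	uint64_t ff = 0;

	if (is_cn10k)
		ff |= FF_INNER_CSUM | FF_SYM_RAW_DP;

	/* After this patch, Rx inject is advertised on 103xx as well. */
	if (is_cn10ka_b0 || is_cn10kb)
		ff |= FF_RX_INJECT;

	return ff;
}
```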
* [PATCH 11/24] crypto/cnxk: rename security caps as IPsec security caps
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (9 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
` (13 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Security capabilities would vary between IPsec and other new offloads.
Rename existing security caps to indicate that they are IPsec-specific
ones.
Rename and change the scope of common functions in order to avoid code
duplication. These functions can be used by both IPsec and TLS.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 13 ++--
drivers/common/cnxk/cnxk_security.h | 17 +++--
drivers/common/cnxk/version.map | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 18 ++++-
drivers/crypto/cnxk/cn10k_ipsec.c | 46 +++++++-----
drivers/crypto/cnxk/cn10k_ipsec.h | 9 ++-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 18 ++---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 10 +--
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 73 ++++++++++---------
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
11 files changed, 123 insertions(+), 94 deletions(-)
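The function being renamed to `cnxk_sec_opad_ipad_gen` precomputes the HMAC inner/outer pads so the hardware can hash them once and reuse them per record. A standalone sketch of the core derivation for a 64-byte block size (SHA1/SHA2-256); keys longer than the block would first be hashed, which is omitted here, and the names are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HMAC_BLOCK 64

/* Derive the HMAC ipad/opad blocks: zero-pad the key to the block size,
 * then XOR each byte with 0x36 (inner) and 0x5c (outer). */
static void
opad_ipad_gen(const uint8_t *key, uint32_t key_len,
	      uint8_t ipad[HMAC_BLOCK], uint8_t opad[HMAC_BLOCK])
{
	uint32_t i;

	memset(ipad, 0, HMAC_BLOCK);
	memset(opad, 0, HMAC_BLOCK);
	memcpy(ipad, key, key_len);
	memcpy(opad, key, key_len);

	for (i = 0; i < HMAC_BLOCK; i++) {
		ipad[i] ^= 0x36;
		opad[i] ^= 0x5c;
	}
}
```

Sharing one such routine between the IPsec and TLS session-creation paths is what the scope change in this patch enables.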
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index a8c3ba90cd..81991c4697 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,9 +8,8 @@
#include "roc_api.h"
-static void
-ipsec_hmac_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform,
- uint8_t *hmac_opad_ipad)
+void
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -192,7 +191,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +740,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
switch (length) {
@@ -1374,7 +1373,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
@@ -1441,7 +1440,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 2277ce9144..fabf694df4 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -61,14 +61,15 @@ bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
/* [CN9K] */
-int __roc_api
-cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_inb_sa *in_sa);
+int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_inb_sa *in_sa);
-int __roc_api
-cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_outb_sa *out_sa);
+int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_outb_sa *out_sa);
+
+__rte_internal
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index f84382c401..15fd5710d2 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,6 +1,7 @@
INTERNAL {
global:
+ cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 53a33aab49..f105a431f8 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -80,8 +80,9 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
}
static __rte_always_inline int __rte_hot
-cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
- struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
{
struct rte_crypto_sym_op *sym_op = op->sym;
int ret;
@@ -91,7 +92,7 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return -ENOTSUP;
}
- if (sess->is_outbound)
+ if (sess->ipsec.is_outbound)
ret = process_outb_sa(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
is_sg_ver2);
else
@@ -100,6 +101,17 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
+ struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+
+ return 0;
+}
+
static inline int
cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[],
struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index d08a1067ca..a9c673ea83 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -20,7 +20,7 @@
#include "roc_api.h"
static uint64_t
-ipsec_cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
{
union cpt_inst_w7 w7;
@@ -64,7 +64,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, out_sa);
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, out_sa);
#ifdef LA_IPSEC_DEBUG
/* Use IV from application in debug mode */
@@ -89,7 +89,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
}
#endif
- sec_sess->is_outbound = true;
+ sec_sess->ipsec.is_outbound = true;
/* Get Rlen calculation data */
ret = cnxk_ipsec_outb_rlens_get(&rlens, ipsec_xfrm, crypto_xfrm);
@@ -150,6 +150,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, out_sa, false);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -189,8 +190,8 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->is_outbound = false;
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ sec_sess->ipsec.is_outbound = false;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, in_sa);
/* Save index/SPI in cookie, specifically required for Rx Inject */
sa_dptr->w1.s.cookie = 0xFFFFFFFF;
@@ -209,7 +210,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
*/
if (ipsec_xfrm->options.ip_csum_enable) {
param1.s.ip_csum_disable = ROC_IE_OT_SA_INNER_PKT_IP_CSUM_ENABLE;
- sec_sess->ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ sec_sess->ipsec.ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
}
/* Disable L4 checksum verification by default */
@@ -250,6 +251,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, in_sa, true);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -298,16 +300,15 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
return -EINVAL;
- if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
- return -ENOTSUP;
-
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform, sess);
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
+ }
+ return -ENOTSUP;
}
static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
{
struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
@@ -318,9 +319,6 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
void *sa_dptr = NULL;
int ret;
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
sess = (struct cn10k_sec_session *)sec_sess;
qp = crypto_dev->data->queue_pairs[0];
@@ -336,7 +334,7 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
ret = -1;
- if (sess->is_outbound) {
+ if (sess->ipsec.is_outbound) {
sa_dptr = plt_zmalloc(sizeof(struct roc_ot_ipsec_outb_sa), 8);
if (sa_dptr != NULL) {
roc_ot_ipsec_outb_sa_init(sa_dptr);
@@ -376,6 +374,18 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
return 0;
}
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
+
+ return -EINVAL;
+}
+
static unsigned int
cn10k_sec_session_get_size(void *device __rte_unused)
{
@@ -405,7 +415,7 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
sa = &priv->sa;
- if (priv->is_outbound) {
+ if (priv->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 03ac994001..2b7a3e6acf 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -29,13 +29,18 @@ struct cn10k_sec_session {
/** PMD private space */
+ enum rte_security_session_protocol proto;
/** Pre-populated CPT inst words */
struct cnxk_cpt_inst_tmpl inst;
uint16_t max_extended_len;
uint16_t iv_offset;
uint8_t iv_length;
- uint8_t ip_csum;
- bool is_outbound;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
/** Queue pair */
struct cnxk_cpt_qp *qp;
/** Userdata to be set for Rx inject */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 8e208eb2ca..af2c85022e 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -121,7 +121,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -132,7 +132,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -170,7 +170,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -181,7 +181,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
@@ -211,7 +211,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
dptr = rte_pktmbuf_mtod(m_src, uint64_t);
inst->dptr = dptr;
- m_src->ol_flags |= (uint64_t)sess->ip_csum;
+ m_src->ol_flags |= (uint64_t)sess->ipsec.ip_csum;
} else if (is_sg_ver2 == false) {
struct roc_sglist_comp *scatter_comp, *gather_comp;
uint32_t g_size_bytes, s_size_bytes;
@@ -234,7 +234,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Input Gather List */
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -242,7 +242,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -270,7 +270,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -278,7 +278,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 3d0db72775..3e9f1e7efb 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -132,7 +132,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
i = fill_sg_comp(gather_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -146,7 +146,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -228,7 +228,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
*/
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -239,7 +239,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 2ae81d2f90..a5c4365631 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,10 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_MAX_CAPS 9
+
/**
* Device private data
*/
@@ -23,8 +24,7 @@ struct cnxk_cpt_vf {
uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
- struct rte_cryptodev_capabilities
- sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 2676b52832..178f510a63 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -20,13 +20,14 @@
RTE_DIM(caps_##name)); \
} while (0)
-#define SEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+#define SEC_IPSEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
do { \
if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
(hw_caps[CPT_ENG_TYPE_IE].name) || \
(hw_caps[CPT_ENG_TYPE_AE].name)) \
- sec_caps_add(cnxk_caps, cur_pos, sec_caps_##name, \
- RTE_DIM(sec_caps_##name)); \
+ sec_ipsec_caps_add(cnxk_caps, cur_pos, \
+ sec_ipsec_caps_##name, \
+ RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
static const struct rte_cryptodev_capabilities caps_mul[] = {
@@ -1184,7 +1185,7 @@ static const struct rte_cryptodev_capabilities caps_end[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_aes[] = {
{ /* AES GCM */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1332,7 +1333,7 @@ static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_des[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_des[] = {
{ /* DES */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1375,7 +1376,7 @@ static const struct rte_cryptodev_capabilities sec_caps_des[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_sha1_sha2[] = {
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1478,7 +1479,7 @@ static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_null[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
{ /* NULL (CIPHER) */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1691,29 +1692,28 @@ cnxk_crypto_capabilities_get(struct cnxk_cpt_vf *vf)
}
static void
-sec_caps_limit_check(int *cur_pos, int nb_caps)
+sec_ipsec_caps_limit_check(int *cur_pos, int nb_caps)
{
- PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_CRYPTO_MAX_CAPS);
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS);
}
static void
-sec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
- const struct rte_cryptodev_capabilities *caps, int nb_caps)
+sec_ipsec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
{
- sec_caps_limit_check(cur_pos, nb_caps);
+ sec_ipsec_caps_limit_check(cur_pos, nb_caps);
memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
*cur_pos += nb_caps;
}
static void
-cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
- int *cur_pos)
+cn10k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos)
{
const struct rte_cryptodev_capabilities *cap;
unsigned int i;
- sec_caps_limit_check(cur_pos, 1);
+ sec_ipsec_caps_limit_check(cur_pos, 1);
/* NULL auth */
for (i = 0; i < RTE_DIM(caps_null); i++) {
@@ -1727,7 +1727,7 @@ cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
}
static void
-cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
+cn9k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
{
struct rte_cryptodev_capabilities *caps;
@@ -1747,27 +1747,26 @@ cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
}
static void
-sec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
- union cpt_eng_caps *hw_caps)
+sec_ipsec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
{
int cur_pos = 0;
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
if (roc_model_is_cn10k())
- cn10k_sec_crypto_caps_update(cnxk_caps, &cur_pos);
+ cn10k_sec_ipsec_crypto_caps_update(cnxk_caps, &cur_pos);
else
- cn9k_sec_crypto_caps_update(cnxk_caps);
+ cn9k_sec_ipsec_crypto_caps_update(cnxk_caps);
- sec_caps_add(cnxk_caps, &cur_pos, sec_caps_null,
- RTE_DIM(sec_caps_null));
- sec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, sec_ipsec_caps_null, RTE_DIM(sec_ipsec_caps_null));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
static void
-cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
+cnxk_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
sec_cap->ipsec.options.udp_encap = 1;
sec_cap->ipsec.options.copy_df = 1;
@@ -1775,7 +1774,7 @@ cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn10k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1797,7 +1796,7 @@ cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn9k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1814,22 +1813,24 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
unsigned long i;
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
- sec_crypto_caps_populate(vf->sec_crypto_caps, vf->cpt.hw_caps);
+ sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
for (i = 0; i < RTE_DIM(sec_caps_templ) - 1; i++) {
- vf->sec_caps[i].crypto_capabilities = vf->sec_crypto_caps;
- cnxk_sec_caps_update(&vf->sec_caps[i]);
+ if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ vf->sec_caps[i].crypto_capabilities = vf->sec_ipsec_crypto_caps;
- if (roc_model_is_cn10k())
- cn10k_sec_caps_update(&vf->sec_caps[i]);
+ cnxk_sec_ipsec_caps_update(&vf->sec_caps[i]);
- if (roc_model_is_cn9k())
- cn9k_sec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn10k())
+ cn10k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn9k())
+ cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ }
}
}
diff --git a/drivers/crypto/cnxk/cnxk_sg.h b/drivers/crypto/cnxk/cnxk_sg.h
index 65244199bd..aa074581d7 100644
--- a/drivers/crypto/cnxk/cnxk_sg.h
+++ b/drivers/crypto/cnxk/cnxk_sg.h
@@ -129,7 +129,7 @@ fill_sg_comp_from_iov(struct roc_sglist_comp *list, uint32_t i, struct roc_se_io
}
static __rte_always_inline uint32_t
-fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
@@ -150,7 +150,7 @@ fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte
}
static __rte_always_inline uint32_t
-fill_ipsec_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 12/24] common/cnxk: update opad-ipad gen to handle TLS
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (10 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 13/24] common/cnxk: add TLS record contexts Anoob Joseph
` (12 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
For TLS opcodes, ipad is at offset 64, unlike the packed layout used for
IPsec. Extend the function to handle TLS contexts as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 15 ++++++++-------
drivers/common/cnxk/cnxk_security.h | 3 ++-
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 81991c4697..bdb04fe142 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -9,7 +9,8 @@
#include "roc_api.h"
void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -29,11 +30,11 @@ cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_op
switch (auth_xform->auth.algo) {
case RTE_CRYPTO_AUTH_MD5_HMAC:
roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
@@ -191,7 +192,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -740,7 +741,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
switch (length) {
@@ -1373,7 +1374,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
@@ -1440,7 +1441,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index fabf694df4..86ec657cb0 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -70,6 +70,7 @@ int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipse
struct roc_ie_on_outb_sa *out_sa);
__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls);
#endif /* _CNXK_SECURITY_H__ */
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
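The offset logic in the patch above can be restated in plain C. This is an illustrative sketch, not driver code: `ipad_offset` is a hypothetical helper mirroring the `is_tls ? 64 : 24` expression added to `cnxk_sec_opad_ipad_gen()`, where opad always starts at byte 0 of the `hmac_opad_ipad` buffer and ipad follows at byte 24 in the packed IPsec layout but at byte 64 for TLS.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper (not part of the driver) restating the layout
 * handled by cnxk_sec_opad_ipad_gen(): opad at offset 0 in both
 * cases; ipad at byte 24 (packed, IPsec) or byte 64 (TLS). */
static inline uint32_t
ipad_offset(bool is_tls)
{
	return is_tls ? 64 : 24;
}
```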
* [PATCH 13/24] common/cnxk: add TLS record contexts
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (11 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
` (11 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add TLS record read and write contexts.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_cpt.h | 4 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 199 ++++++++++++++++++++++++++++
drivers/common/cnxk/roc_se.h | 11 ++
3 files changed, 211 insertions(+), 3 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 001e71c55e..5a2b5caeb0 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -55,6 +55,7 @@
#define ROC_CPT_AES_CBC_IV_LEN 16
#define ROC_CPT_SHA1_HMAC_LEN 12
#define ROC_CPT_SHA2_HMAC_LEN 16
+#define ROC_CPT_DES_IV_LEN 8
#define ROC_CPT_DES3_KEY_LEN 24
#define ROC_CPT_AES128_KEY_LEN 16
@@ -71,9 +72,6 @@
#define ROC_CPT_DES_BLOCK_LENGTH 8
#define ROC_CPT_AES_BLOCK_LENGTH 16
-#define ROC_CPT_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define ROC_CPT_AES_CBC_ROUNDUP_BYTE_LEN 16
-
/* Salt length for AES-CTR/GCM/CCM and AES-GMAC */
#define ROC_CPT_SALT_LEN 4
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
new file mode 100644
index 0000000000..61955ef4d1
--- /dev/null
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __ROC_IE_OT_TLS_H__
+#define __ROC_IE_OT_TLS_H__
+
+#include "roc_platform.h"
+
+#define ROC_IE_OT_TLS_CTX_ILEN 1
+#define ROC_IE_OT_TLS_CTX_HDR_SIZE 1
+#define ROC_IE_OT_TLS_AR_WIN_SIZE_MAX 4096
+#define ROC_IE_OT_TLS_LOG_MIN_AR_WIN_SIZE_M1 5
+
+/* u64 array size to fit anti replay window bits */
+#define ROC_IE_OT_TLS_AR_WINBITS_SZ \
+ (PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
+
+/* CN10K TLS opcodes */
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+
+#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
+#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
+#define ROC_IE_OT_TLS_CTX_MAX_IV_LEN 16
+
+enum roc_ie_ot_tls_mac_type {
+ ROC_IE_OT_TLS_MAC_MD5 = 1,
+ ROC_IE_OT_TLS_MAC_SHA1 = 2,
+ ROC_IE_OT_TLS_MAC_SHA2_256 = 4,
+ ROC_IE_OT_TLS_MAC_SHA2_384 = 5,
+ ROC_IE_OT_TLS_MAC_SHA2_512 = 6,
+};
+
+enum roc_ie_ot_tls_cipher_type {
+ ROC_IE_OT_TLS_CIPHER_3DES = 1,
+ ROC_IE_OT_TLS_CIPHER_AES_CBC = 3,
+ ROC_IE_OT_TLS_CIPHER_AES_GCM = 7,
+ ROC_IE_OT_TLS_CIPHER_AES_CCM = 10,
+};
+
+enum roc_ie_ot_tls_ver {
+ ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
+ ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+};
+
+enum roc_ie_ot_tls_aes_key_len {
+ ROC_IE_OT_TLS_AES_KEY_LEN_128 = 1,
+ ROC_IE_OT_TLS_AES_KEY_LEN_256 = 3,
+};
+
+enum {
+ ROC_IE_OT_TLS_IV_SRC_DEFAULT = 0,
+ ROC_IE_OT_TLS_IV_SRC_FROM_SA = 1,
+};
+
+struct roc_ie_ot_tls_read_ctx_update_reg {
+ uint64_t ar_base;
+ uint64_t ar_valid_mask;
+ uint64_t hard_life;
+ uint64_t soft_life;
+ uint64_t mib_octs;
+ uint64_t mib_pkts;
+ uint64_t ar_winbits[ROC_IE_OT_TLS_AR_WINBITS_SZ];
+};
+
+union roc_ie_ot_tls_param2 {
+ uint16_t u16;
+ struct {
+ uint8_t msg_type;
+ uint8_t rsvd;
+ } s;
+};
+
+struct roc_ie_ot_tls_read_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t ar_win : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t ctx_id : 16;
+
+ uint64_t orig_pkt_fabs : 1;
+ uint64_t orig_pkt_free : 1;
+ uint64_t pkind : 6;
+
+ uint64_t rsvd0 : 1;
+ uint64_t et_ovrwr : 1;
+ uint64_t pkt_output : 2;
+ uint64_t pkt_format : 1;
+ uint64_t defrag_opt : 2;
+ uint64_t x2p_dst : 1;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd1 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd2 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd3;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t rsvd4 : 50;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd5;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 - Word32 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+};
+
+struct roc_ie_ot_tls_write_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t rsvd0 : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t rsvd1 : 32;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd2 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd3 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd4;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t iv_at_cptr : 1;
+ uint64_t rsvd5 : 49;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd6;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 */
+ uint64_t w26_rsvd7;
+
+ /* Word27 */
+ uint64_t seq_num;
+};
+#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d8cbd58c9a..abb8c6a149 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -5,6 +5,8 @@
#ifndef __ROC_SE_H__
#define __ROC_SE_H__
+#include "roc_constants.h"
+
/* SE opcodes */
#define ROC_SE_MAJOR_OP_FC 0x33
#define ROC_SE_FC_MINOR_OP_ENCRYPT 0x0
@@ -162,6 +164,15 @@ typedef enum {
ROC_SE_ERR_GC_ICV_MISCOMPARE = 0x4c,
ROC_SE_ERR_GC_DATA_UNALIGNED = 0x4d,
+ ROC_SE_ERR_SSL_RECORD_LEN_INVALID = 0x82,
+ ROC_SE_ERR_SSL_CTX_LEN_INVALID = 0x83,
+ ROC_SE_ERR_SSL_CIPHER_UNSUPPORTED = 0x84,
+ ROC_SE_ERR_SSL_MAC_UNSUPPORTED = 0x85,
+ ROC_SE_ERR_SSL_VERSION_UNSUPPORTED = 0x86,
+ ROC_SE_ERR_SSL_MAC_MISMATCH = 0x89,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ_OUT_OF_WINDOW = 0xC1,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ = 0xC9,
+
/* API Layer */
ROC_SE_ERR_REQ_PENDING = 0xfe,
ROC_SE_ERR_REQ_TIMEOUT = 0xff,
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
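The anti-replay window sizing in the new `roc_ie_ot_tls.h` can be checked with a small sketch: the 4096-bit window is stored as an array of u64 words, so `ROC_IE_OT_TLS_AR_WINBITS_SZ` works out to 64 entries. The macro below is a plain-C restatement of `PLT_ALIGN_CEIL(x, 64) / 64` for illustration only; the names are simplified stand-ins for the header's macros.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for ROC_IE_OT_TLS_AR_WIN_SIZE_MAX and BITS_PER_LONG_LONG;
 * the division rounds the bit count up to whole u64 words, matching
 * PLT_ALIGN_CEIL(4096, 64) / 64 in the header. */
#define AR_WIN_SIZE_MAX 4096
#define BITS_PER_U64    64
#define AR_WINBITS_SZ   ((AR_WIN_SIZE_MAX + BITS_PER_U64 - 1) / BITS_PER_U64)
```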
* [PATCH 14/24] crypto/cnxk: separate IPsec from security common code
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (12 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 13/24] common/cnxk: add TLS record contexts Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
` (10 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
The current structs and functions assume IPsec-only offload. Separate them
out to allow for the addition of TLS.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 127 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 61 +++++++++++
drivers/crypto/cnxk/cn10k_ipsec.c | 127 +++-------------------
drivers/crypto/cnxk/cn10k_ipsec.h | 45 +++-----
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 1 +
drivers/crypto/cnxk/meson.build | 1 +
7 files changed, 218 insertions(+), 146 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 2fd4df3c5d..5ed918e18e 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -12,7 +12,7 @@
#include "cn10k_cryptodev.h"
#include "cn10k_cryptodev_ops.h"
-#include "cn10k_ipsec.h"
+#include "cn10k_cryptodev_sec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_capabilities.h"
#include "cnxk_cryptodev_sec.h"
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
new file mode 100644
index 0000000000..0fd0a5b03c
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <rte_security.h>
+
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev_ops.h"
+
+static int
+cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
+ struct rte_security_session *sess)
+{
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_vf *vf;
+ struct cnxk_cpt_qp *qp;
+
+ if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL) {
+ plt_err("Setup cryptodev queue pair before creating security session");
+ return -EPERM;
+ }
+
+ vf = crypto_dev->data->dev_private;
+
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
+ }
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+
+ return -EINVAL;
+}
+
+static unsigned int
+cn10k_sec_session_get_size(void *dev __rte_unused)
+{
+ return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
+}
+
+static int
+cn10k_sec_session_stats_get(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_stats *stats)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_stats_get(qp, cn10k_sec_sess, stats);
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_session_conf *conf)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+ struct cnxk_cpt_vf *vf;
+
+ if (sec_sess == NULL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL)
+ return -EINVAL;
+
+ vf = crypto_dev->data->dev_private;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_session_update(vf, qp, cn10k_sec_sess, conf);
+
+ return -ENOTSUP;
+}
+
+/* Update platform specific security ops */
+void
+cn10k_sec_ops_override(void)
+{
+ /* Update platform specific ops */
+ cnxk_sec_ops.session_create = cn10k_sec_session_create;
+ cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
+ cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
+ cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
+ cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
+}
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
new file mode 100644
index 0000000000..02fd35eab7
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_CRYPTODEV_SEC_H__
+#define __CN10K_CRYPTODEV_SEC_H__
+
+#include <rte_security.h>
+
+#include "roc_constants.h"
+#include "roc_cpt.h"
+
+#include "cn10k_ipsec.h"
+
+struct cn10k_sec_session {
+ struct rte_security_session rte_sess;
+
+ /** PMD private space */
+
+ enum rte_security_session_protocol proto;
+ /** Pre-populated CPT inst words */
+ struct cnxk_cpt_inst_tmpl inst;
+ uint16_t max_extended_len;
+ uint16_t iv_offset;
+ uint8_t iv_length;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
+ /** Queue pair */
+ struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
+
+ /**
+ * End of SW mutable area
+ */
+ union {
+ struct cn10k_ipsec_sa sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+static inline uint64_t
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *cptr)
+{
+ union cpt_inst_w7 w7;
+
+ w7.u64 = 0;
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
+ w7.s.ctx_val = 1;
+ w7.s.cptr = (uint64_t)cptr;
+ rte_mb();
+
+ return w7.u64;
+}
+
+void cn10k_sec_ops_override(void);
+
+#endif /* __CN10K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index a9c673ea83..74d6cd70d1 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -11,6 +11,7 @@
#include <rte_udp.h>
#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -19,20 +20,6 @@
#include "roc_api.h"
-static uint64_t
-cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
-{
- union cpt_inst_w7 w7;
-
- w7.u64 = 0;
- w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
- w7.s.ctx_val = 1;
- w7.s.cptr = (uint64_t)sa;
- rte_mb();
-
- return w7.u64;
-}
-
static int
cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -260,29 +247,19 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
return ret;
}
-static int
-cn10k_ipsec_session_create(void *dev,
+int
+cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm,
struct rte_security_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_vf *vf;
- struct cnxk_cpt_qp *qp;
int ret;
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL) {
- plt_err("Setup cpt queue pair before creating security session");
- return -EPERM;
- }
-
ret = cnxk_ipsec_xform_verify(ipsec_xfrm, crypto_xfrm);
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
@@ -293,38 +270,15 @@ cn10k_ipsec_session_create(void *dev,
(struct cn10k_sec_session *)sess);
}
-static int
-cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
- struct rte_security_session *sess)
-{
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -EINVAL;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
- }
- return -ENOTSUP;
-}
-
-static int
-cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
- struct cn10k_sec_session *sess;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
struct roc_cpt_lf *lf;
void *sa_dptr = NULL;
int ret;
- sess = (struct cn10k_sec_session *)sec_sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (unlikely(qp == NULL))
- return -ENOTSUP;
-
lf = &qp->lf;
sa = &sess->sa;
@@ -374,48 +328,18 @@ cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess
return 0;
}
-static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats)
{
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
- if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
- return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
-
- return -EINVAL;
-}
-
-static unsigned int
-cn10k_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
-}
-
-static int
-cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
- struct rte_security_stats *stats)
-{
- struct rte_cryptodev *crypto_dev = device;
struct roc_ot_ipsec_outb_sa *out_sa;
struct roc_ot_ipsec_inb_sa *in_sa;
- struct cn10k_sec_session *priv;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
-
- if (unlikely(sess == NULL))
- return -EINVAL;
-
- priv = (struct cn10k_sec_session *)sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
- sa = &priv->sa;
+ sa = &sess->sa;
- if (priv->ipsec.is_outbound) {
+ if (sess->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
@@ -432,23 +356,13 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
return 0;
}
-static int
-cn10k_sec_session_update(void *device, struct rte_security_session *sess,
- struct rte_security_session_conf *conf)
+int
+cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess, struct rte_security_session_conf *conf)
{
- struct rte_cryptodev *crypto_dev = device;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_qp *qp;
- struct cnxk_cpt_vf *vf;
int ret;
- if (sess == NULL)
- return -EINVAL;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
-
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return -ENOTSUP;
@@ -456,23 +370,8 @@ cn10k_sec_session_update(void *device, struct rte_security_session *sess,
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
return cn10k_ipsec_outb_sa_create(roc_cpt, &qp->lf, &conf->ipsec, conf->crypto_xform,
(struct cn10k_sec_session *)sess);
}
-
-/* Update platform specific security ops */
-void
-cn10k_sec_ops_override(void)
-{
- /* Update platform specific ops */
- cnxk_sec_ops.session_create = cn10k_sec_session_create;
- cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
- cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
- cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
- cnxk_sec_ops.session_update = cn10k_sec_session_update;
- cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
- cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
-}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 2b7a3e6acf..0d1e14a065 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -11,9 +11,12 @@
#include "roc_constants.h"
#include "roc_ie_ot.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
#include "cnxk_ipsec.h"
-typedef void *CN10K_SA_CONTEXT_MARKER[0];
+/* Forward declaration */
+struct cn10k_sec_session;
struct cn10k_ipsec_sa {
union {
@@ -24,34 +27,14 @@ struct cn10k_ipsec_sa {
};
} __rte_aligned(ROC_ALIGN);
-struct cn10k_sec_session {
- struct rte_security_session rte_sess;
-
- /** PMD private space */
-
- enum rte_security_session_protocol proto;
- /** Pre-populated CPT inst words */
- struct cnxk_cpt_inst_tmpl inst;
- uint16_t max_extended_len;
- uint16_t iv_offset;
- uint8_t iv_length;
- union {
- struct {
- uint8_t ip_csum;
- bool is_outbound;
- } ipsec;
- };
- /** Queue pair */
- struct cnxk_cpt_qp *qp;
- /** Userdata to be set for Rx inject */
- void *userdata;
-
- /**
- * End of SW mutable area
- */
- struct cn10k_ipsec_sa sa;
-} __rte_aligned(ROC_ALIGN);
-
-void cn10k_sec_ops_override(void);
-
+int cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+int cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+int cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats);
+int cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess,
+ struct rte_security_session_conf *conf);
#endif /* __CN10K_IPSEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index af2c85022e..a30b8e413d 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -11,6 +11,7 @@
#include "roc_ie.h"
#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index 3d9a0dbbf0..d6fafd43d9 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -14,6 +14,7 @@ sources = files(
'cn9k_ipsec.c',
'cn10k_cryptodev.c',
'cn10k_cryptodev_ops.c',
+ 'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
--
2.25.1
* [PATCH 15/24] crypto/cnxk: add TLS record session ops
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (13 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 16/24] crypto/cnxk: add TLS record datapath handling Anoob Joseph
` (9 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS record session ops for creating and destroying security
sessions. Add support for both read and write sessions.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 8 +
drivers/crypto/cnxk/cn10k_tls.c | 758 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 802 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
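For DTLS 1.2 write sessions, the SA sequence number combines the 16-bit epoch and the 48-bit record sequence number into a single 64-bit counter, stored as counter minus one so that the first record emitted carries exactly the configured sequence number. A minimal, self-contained sketch of that arithmetic (the helper name is illustrative; the real logic lives in `tls_write_sa_fill()`):

```c
#include <stdint.h>

/* DTLS 1.2 packs a 16-bit epoch and a 48-bit sequence number into one
 * 64-bit counter. The SA stores counter - 1 so the first record
 * produced uses exactly the configured sequence number.
 * Mirrors the seq_num setup in tls_write_sa_fill(). */
static uint64_t
dtls12_sa_seq_num(uint16_t epoch, uint64_t seq_no)
{
	uint64_t seq = ((uint64_t)epoch << 48) | (seq_no & 0x0000ffffffffffffULL);

	return seq - 1;
}
```

With epoch 1 and sequence number 0, the stored value wraps down into the previous epoch's counter space, which is why the hardware increments it back to the configured value on the first record.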
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 02fd35eab7..33fd3aa398 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -11,6 +11,7 @@
#include "roc_cpt.h"
#include "cn10k_ipsec.h"
+#include "cn10k_tls.h"
struct cn10k_sec_session {
struct rte_security_session rte_sess;
@@ -28,6 +29,12 @@ struct cn10k_sec_session {
uint8_t ip_csum;
bool is_outbound;
} ipsec;
+ struct {
+ uint8_t enable_padding : 1;
+ uint8_t hdr_len : 4;
+ uint8_t rsvd : 3;
+ bool is_write;
+ } tls;
};
/** Queue pair */
struct cnxk_cpt_qp *qp;
@@ -39,6 +46,7 @@ struct cn10k_sec_session {
*/
union {
struct cn10k_ipsec_sa sa;
+ struct cn10k_tls_record tls_rec;
};
} __rte_aligned(ROC_ALIGN);
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
new file mode 100644
index 0000000000..e1ed65b06a
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -0,0 +1,758 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#include <cryptodev_pmd.h>
+
+#include "roc_cpt.h"
+#include "roc_se.h"
+
+#include "cn10k_cryptodev_sec.h"
+#include "cn10k_tls.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_security.h"
+
+static int
+tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = crypto_xform->cipher.algo;
+ uint16_t keylen = crypto_xform->cipher.key.length;
+
+ if (((c_algo == RTE_CRYPTO_CIPHER_NULL) && (keylen == 0)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_3DES_CBC) && (keylen == 24)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_AES_CBC) && ((keylen == 16) || (keylen == 32))))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_auth_algorithm a_algo = crypto_xform->auth.algo;
+ uint16_t keylen = crypto_xform->auth.key.length;
+
+ if (((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) && (keylen == 16)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) && (keylen == 20)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) && (keylen == 32)))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_aead_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ uint16_t keylen = crypto_xform->aead.key.length;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
+ return -EINVAL;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
+ return -EINVAL;
+
+ if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ if ((keylen == 16) || (keylen == 32))
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int
+cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ struct rte_crypto_sym_xform *auth_xform, *cipher_xform = NULL;
+ int ret = 0;
+
+ if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ return -EINVAL;
+
+ if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
+ return -EINVAL;
+
+ if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_xform_aead_verify(tls_xform, crypto_xform);
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
+ /* Egress */
+
+ /* First should be for auth in Egress */
+ if (crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return -EINVAL;
+
+ /* Next if present, should be for cipher in Egress */
+ if ((crypto_xform->next != NULL) &&
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER))
+ return -EINVAL;
+
+ auth_xform = crypto_xform;
+ cipher_xform = crypto_xform->next;
+ } else {
+ /* Ingress */
+
+ /* First can be for auth only when next is NULL in Ingress. */
+ if ((crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) &&
+ (crypto_xform->next != NULL))
+ return -EINVAL;
+ else if ((crypto_xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+ (crypto_xform->next == NULL) ||
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH))
+ return -EINVAL;
+
+ cipher_xform = crypto_xform;
+ auth_xform = crypto_xform->next;
+ }
+
+ if (cipher_xform) {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ } else {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ }
+
+ if (cipher_xform)
+ ret = tls_xform_cipher_verify(cipher_xform);
+
+ if (!ret)
+ return tls_xform_auth_verify(auth_xform);
+
+ return ret;
+}
+
+static int
+tls_write_rlens_get(struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ enum rte_crypto_cipher_algorithm c_algo = RTE_CRYPTO_CIPHER_NULL;
+ enum rte_crypto_auth_algorithm a_algo = RTE_CRYPTO_AUTH_NULL;
+ uint8_t roundup_byte, tls_hdr_len;
+ uint8_t mac_len, iv_len;
+
+ switch (tls_xfrm->ver) {
+ case RTE_SECURITY_VERSION_TLS_1_2:
+ case RTE_SECURITY_VERSION_TLS_1_3:
+ tls_hdr_len = 5;
+ break;
+ case RTE_SECURITY_VERSION_DTLS_1_2:
+ tls_hdr_len = 13;
+ break;
+ default:
+ tls_hdr_len = 0;
+ break;
+ }
+
+ /* Get Cipher and Auth algo */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_hdr_len + ROC_CPT_AES_GCM_IV_LEN + ROC_CPT_AES_GCM_MAC_LEN;
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ c_algo = crypto_xfrm->cipher.algo;
+ if (crypto_xfrm->next)
+ a_algo = crypto_xfrm->next->auth.algo;
+ } else {
+ a_algo = crypto_xfrm->auth.algo;
+ if (crypto_xfrm->next)
+ c_algo = crypto_xfrm->next->cipher.algo;
+ }
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ roundup_byte = 4;
+ iv_len = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ roundup_byte = ROC_CPT_DES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_DES_IV_LEN;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ roundup_byte = ROC_CPT_AES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_AES_CBC_IV_LEN;
+ break;
+ default:
+ roundup_byte = 0;
+ iv_len = 0;
+ break;
+ }
+
+ switch (a_algo) {
+ case RTE_CRYPTO_AUTH_NULL:
+ mac_len = 0;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ mac_len = 16;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ mac_len = 20;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ mac_len = 32;
+ break;
+ default:
+ mac_len = 0;
+ break;
+ }
+
+ return tls_hdr_len + iv_len + mac_len + roundup_byte;
+}
+
+static void
+tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static void
+tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static size_t
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t size;
+
+ /* Size varies with the anti-replay window */
+ size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+
+ if (sa->w0.s.ar_win)
+ size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
+
+ return size;
+}
+
+static int
+tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint64_t *tmp, *tmp_key;
+ uint32_t replay_win_sz;
+ uint8_t *cipher_key;
+ int i, length = 0;
+ size_t offset;
+
+ /* Initialize the SA */
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ cipher_key = read_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ memcpy(cipher_key, key, length);
+ }
+
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ tmp = (uint64_t *)read_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp[i] = rte_be_to_cpu_64(tmp[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ /* Only power-of-two window sizes are supported */
+ replay_win_sz = tls_xfrm->dtls_1_2.ar_win_sz;
+ if (replay_win_sz) {
+ if (!rte_is_power_of_2(replay_win_sz) ||
+ replay_win_sz > ROC_IE_OT_TLS_AR_WIN_SIZE_MAX)
+ return -ENOTSUP;
+
+ read_sa->w0.s.ar_win = rte_log2_u32(replay_win_sz) - 5;
+ }
+ }
+
+ read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ read_sa->w0.s.aop_valid = 1;
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+
+ /* Word offset for HW managed CTX field */
+ read_sa->w0.s.hw_ctx_off = offset / 8;
+ read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ }
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint8_t *cipher_key;
+ uint64_t *tmp_key;
+ int i, length = 0;
+ size_t offset;
+
+ cipher_key = write_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ if (key != NULL && length != 0) {
+ /* Copy encryption key */
+ memcpy(cipher_key, key, length);
+ }
+ }
+
+ if (auth_xfrm != NULL) {
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ }
+
+ tmp_key = (uint64_t *)write_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ write_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+ write_sa->w0.s.aop_valid = 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->seq_num -= 1;
+ }
+
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+
+#ifdef LA_IPSEC_DEBUG
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+#else
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
+#endif
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_read_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *read_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ read_sa = &tls->read_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill read session parameters");
+ goto sa_dptr_free;
+ }
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+ }
+
+ if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
+ sec_sess->tls.hdr_len = 13;
+ else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
+ sec_sess->tls.hdr_len = 5;
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* Enable MIB counters */
+ sa_dptr->w0.s.count_mib_bytes = 1;
+ sa_dptr->w0.s.count_mib_pkts = 1;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
+
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz and ctx_size fields */
+ memcpy(read_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, read_sa, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (ret) {
+ plt_err("Could not write read session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, read_sa, true);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+static int
+cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_write_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *write_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ write_sa = &tls->write_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_write_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill write session parameters");
+ goto sa_dptr_free;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->next->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
+ }
+
+ sec_sess->tls.is_write = true;
+ sec_sess->tls.enable_padding = tls_xfrm->options.extra_padding_enable;
+ sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
+
+ memset(write_sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz and ctx_size fields */
+ memcpy(write_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, write_sa, sizeof(struct roc_ie_ot_tls_write_sa));
+ if (ret) {
+ plt_err("Could not write TLS write session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, write_sa, false);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+int
+cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess)
+{
+ struct roc_cpt *roc_cpt;
+ int ret;
+
+ ret = cnxk_tls_xform_verify(tls_xfrm, crypto_xfrm);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+ return cn10k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+ else
+ return cn10k_tls_write_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+}
+
+int
+cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
+{
+ struct cn10k_tls_record *tls;
+ struct roc_cpt_lf *lf;
+ void *sa_dptr = NULL;
+ int ret;
+
+ lf = &qp->lf;
+
+ tls = &sess->tls_rec;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, &tls->read_sa, false);
+
+ ret = -1;
+
+ if (sess->tls.is_write) {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_write_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->write_sa,
+ sizeof(struct roc_ie_ot_tls_write_sa));
+ }
+ } else {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_read_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->read_sa,
+ sizeof(struct roc_ie_ot_tls_read_sa));
+ }
+ }
+
+ plt_free(sa_dptr);
+
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &tls->read_sa);
+ }
+
+ return 0;
+}
diff --git a/drivers/crypto/cnxk/cn10k_tls.h b/drivers/crypto/cnxk/cn10k_tls.h
new file mode 100644
index 0000000000..c477d51169
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_TLS_H__
+#define __CN10K_TLS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie_ot_tls.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+
+/* Forward declaration */
+struct cn10k_sec_session;
+
+struct cn10k_tls_record {
+ union {
+ /** Read SA */
+ struct roc_ie_ot_tls_read_sa read_sa;
+ /** Write SA */
+ struct roc_ie_ot_tls_write_sa write_sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+int cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+
+int cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+
+#endif /* __CN10K_TLS_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index d6fafd43d9..ee0c65e32a 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files(
'cn10k_cryptodev_ops.c',
'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
+ 'cn10k_tls.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
* [PATCH 16/24] crypto/cnxk: add TLS record datapath handling
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (14 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 17/24] crypto/cnxk: add TLS capability Anoob Joseph
` (8 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS record handling in datapath.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 57 +++-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 7 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 ++++++++++++++++++++++
3 files changed, 380 insertions(+), 6 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
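The dequeue post-processing added here branches on the microcode completion code: on success the mbuf lengths are set to the length reported by hardware, while on failure the op is marked in error and the completion code is preserved in `aux_flags` for the application. A minimal sketch of that control flow, using stand-in types (the struct and function names here are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for rte_mbuf and cpt_cn10k_res_s, for illustration only. */
struct mbuf {
	struct mbuf *next;
	uint16_t data_len;
	uint16_t pkt_len;
};

struct res_s {
	uint16_t rlen;       /* result length reported by CPT */
	uint8_t uc_compcode; /* microcode completion code, 0 on success */
};

enum op_status { OP_SUCCESS, OP_ERROR };

/* Mirrors the success/error split in cn10k_cpt_tls_post_process():
 * on success update the mbuf lengths to the reported length (data_len
 * only for a single-segment mbuf); on failure flag the op and save the
 * completion code for the application. */
static enum op_status
tls_post_process(struct mbuf *m, const struct res_s *res, uint8_t *aux_flags)
{
	if (res->uc_compcode == 0) {
		if (m->next == NULL)
			m->data_len = res->rlen;
		m->pkt_len = res->rlen;
		return OP_SUCCESS;
	}

	*aux_flags = res->uc_compcode;
	return OP_ERROR;
}
```

Note that `pkt_len` is updated unconditionally on success, while `data_len` is only touched for single-segment mbufs, matching the SG handling in the patch.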
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index f105a431f8..c87a8bae1a 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -20,11 +20,14 @@
#include "roc_sso_dp.h"
#include "cn10k_cryptodev.h"
-#include "cn10k_cryptodev_ops.h"
#include "cn10k_cryptodev_event_dp.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_eventdev.h"
#include "cn10k_ipsec.h"
#include "cn10k_ipsec_la_ops.h"
+#include "cn10k_tls.h"
+#include "cn10k_tls_ops.h"
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -101,6 +104,18 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+ if (sess->tls.is_write)
+ return process_tls_write(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
+ is_sg_ver2);
+ else
+ return process_tls_read(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
+}
+
static __rte_always_inline int __rte_hot
cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
@@ -108,6 +123,8 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cpt_sec_tls_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
return 0;
}
@@ -812,7 +829,7 @@ cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16
}
static inline void
-cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+cn10k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
{
struct rte_mbuf *mbuf = cop->sym->m_src;
const uint16_t m_len = res->rlen;
@@ -849,10 +866,38 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
}
static inline void
-cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
- struct rte_crypto_op *cop,
- struct cpt_inflight_req *infl_req,
- struct cpt_cn10k_res_s *res)
+cn10k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ const uint16_t m_len = res->rlen;
+
+ if (!res->uc_compcode) {
+ if (mbuf->next == NULL)
+ mbuf->data_len = m_len;
+ mbuf->pkt_len = m_len;
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ plt_err("crypto op failed with UC compcode: 0x%x", res->uc_compcode);
+ }
+}
+
+static inline void
+cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct cn10k_sec_session *sess;
+
+ sess = sym_op->session;
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ cn10k_cpt_ipsec_post_process(cop, res);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ cn10k_cpt_tls_post_process(cop, res);
+}
+
+static inline void
+cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
+ struct cpt_inflight_req *infl_req, struct cpt_cn10k_res_s *res)
{
const uint8_t uc_compcode = res->uc_compcode;
const uint8_t compcode = res->compcode;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
index 0fd0a5b03c..300a8e4f94 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -32,6 +32,10 @@ cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
}
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_tls_record_session_create(vf, qp, &conf->tls_record,
+ conf->crypto_xform, sess);
+
return -ENOTSUP;
}
@@ -54,6 +58,9 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_sec_tls_session_destroy(qp, cn10k_sec_sess);
+
return -EINVAL;
}
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
new file mode 100644
index 0000000000..a5d38bacbb
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -0,0 +1,322 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_TLS_OPS_H__
+#define __CN10K_TLS_OPS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie.h"
+
+#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
+
+static __rte_always_inline int
+process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+#ifdef LA_IPSEC_DEBUG
+ struct roc_ie_ot_tls_write_sa *write_sa;
+#endif
+ struct rte_mbuf *m_src = sym_op->m_src;
+ struct rte_mbuf *last_seg;
+ union cpt_inst_w4 w4;
+ void *m_data = NULL;
+ uint8_t *in_buffer;
+
+#ifdef LA_IPSEC_DEBUG
+ write_sa = &sess->tls_rec.write_sa;
+ if (write_sa->w2.s.iv_at_cptr == ROC_IE_OT_TLS_IV_SRC_FROM_SA) {
+
+ uint8_t *iv = PLT_PTR_ADD(write_sa->cipher_key, 32);
+
+ if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_GCM) {
+ uint32_t *tmp;
+
+ /* For GCM, the IV and salt format will be like below:
+ * iv[0-3]: lower bytes of IV in BE format.
+ * iv[4-7]: salt / nonce.
+ * iv[12-15]: upper bytes of IV in BE format.
+ */
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 4);
+ tmp = (uint32_t *)iv;
+ *tmp = rte_be_to_cpu_32(*tmp);
+
+ memcpy(iv + 12,
+ rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset + 4), 4);
+ tmp = (uint32_t *)(iv + 12);
+ *tmp = rte_be_to_cpu_32(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_CBC) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 16);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ tmp = (uint64_t *)(iv + 8);
+ *tmp = rte_be_to_cpu_64(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_3DES) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 8);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ }
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, write_sa);
+ rte_delay_ms(1);
+ }
+#else
+ RTE_SET_USED(lf);
+#endif
+ /* Single buffer direct mode */
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room");
+ return -ENOMEM;
+ }
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.param1 = m_src->data_len;
+ w4.s.dlen = m_src->data_len;
+
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+ w4.s.param1 = w4.s.dlen;
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ union cpt_inst_w4 w4;
+ uint8_t *in_buffer;
+ void *m_data;
+
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = m_src->data_len;
+ w4.s.param1 = m_src->data_len;
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param1 = w4.s.dlen;
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+#endif /* __CN10K_TLS_OPS_H__ */
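The SG (ver1) descriptor sizing used in process_tls_write() and process_tls_read() above follows a fixed pattern: the gather and scatter entry counts are each rounded up to whole components, and DLEN covers the list header plus both lists. The sizes below are illustrative assumptions (each component packing 4 buffer entries, a 40-byte component, an 8-byte header), not the driver's actual `roc_sglist_comp` and `ROC_SG_LIST_HDR_SIZE` values.

```c
#include <assert.h>
#include <stdint.h>

#define SG_ENTRIES_PER_COMP 4  /* entries packed per component (assumed) */
#define SG_COMP_SZ          40 /* assumed sizeof(struct roc_sglist_comp) */
#define SG_LIST_HDR_SZ      8  /* assumed ROC_SG_LIST_HDR_SIZE */

/* Bytes occupied by nb_entries, rounded up to whole components;
 * mirrors the ((i + 3) / 4) * sizeof(comp) computation in the patch.
 */
static inline uint32_t
sg_comp_bytes(int nb_entries)
{
	return ((nb_entries + SG_ENTRIES_PER_COMP - 1) / SG_ENTRIES_PER_COMP) * SG_COMP_SZ;
}

/* DLEN for the instruction: header + gather list + scatter list */
static inline uint32_t
sg_dlen(int nb_gather, int nb_scatter)
{
	return SG_LIST_HDR_SZ + sg_comp_bytes(nb_gather) + sg_comp_bytes(nb_scatter);
}
```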
--
2.25.1
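The AES-GCM IV layout described in the LA_IPSEC_DEBUG path of process_tls_write() can be illustrated standalone: the 8-byte per-op IV is split into two 4-byte halves, each byte-swapped, and placed at offsets 0 and 12 of the SA's 16-byte IV field, with bytes 4-11 holding the salt/nonce. This sketch assumes a little-endian host (where the driver's `rte_be_to_cpu_32()` is a byte swap); it is not the driver code itself.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint32_t
u32_bswap(uint32_t v)
{
	return ((v & 0xff000000u) >> 24) | ((v & 0x00ff0000u) >> 8) |
	       ((v & 0x0000ff00u) << 8) | ((v & 0x000000ffu) << 24);
}

/* Fill the SA IV field from the op-provided IV:
 * sa_iv[0-3]:   lower IV word (byte-swapped)
 * sa_iv[4-11]:  salt / nonce (left untouched here)
 * sa_iv[12-15]: upper IV word (byte-swapped)
 */
static void
tls_gcm_sa_iv_fill(uint8_t sa_iv[16], const uint8_t op_iv[8])
{
	uint32_t lo, hi;

	memcpy(&lo, op_iv, 4);
	memcpy(&hi, op_iv + 4, 4);
	lo = u32_bswap(lo);
	hi = u32_bswap(hi);
	memcpy(sa_iv, &lo, 4);
	memcpy(sa_iv + 12, &hi, 4);
}
```

The net effect is that each 4-byte half lands in the SA with its bytes reversed, which is what the memcpy + `rte_be_to_cpu_32()` sequence in the patch achieves on little-endian hosts.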
* [PATCH 17/24] crypto/cnxk: add TLS capability
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (15 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 16/24] crypto/cnxk: add TLS record datapath handling Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
` (7 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.2 and DTLS 1.2 record read and write capabilities.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 2 +
drivers/common/cnxk/hw/cpt.h | 3 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 12 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 210 ++++++++++++++++++
4 files changed, 223 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index fa30b46ead..0ebbae9f4e 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,6 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
+ and DTLS 1.2.
Removed Items
-------------
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index edab8a5d83..2620965606 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -80,7 +80,8 @@ union cpt_eng_caps {
uint64_t __io sg_ver2 : 1;
uint64_t __io sm2 : 1;
uint64_t __io pdcp_chain_zuc256 : 1;
- uint64_t __io reserved_38_63 : 26;
+ uint64_t __io tls : 1;
+ uint64_t __io reserved_39_63 : 25;
};
};
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index a5c4365631..8c8c58a76b 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,11 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS 2
+#define CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS 6
+#define CNXK_SEC_MAX_CAPS 17
/**
* Device private data
@@ -25,6 +27,10 @@ struct cnxk_cpt_vf {
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_3_crypto_caps[CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities
+ sec_dtls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 178f510a63..73100377d9 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -30,6 +30,16 @@
RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
+#define SEC_TLS12_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls12_caps_add(cnxk_caps, cur_pos, \
+ sec_tls12_caps_##name, \
+ RTE_DIM(sec_tls12_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1502,6 +1512,125 @@ static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls12_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 13,
+ .max = 13,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_des[] = {
+ { /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ { /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+};
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1591,6 +1720,46 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -1807,6 +1976,35 @@ cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
sec_cap->ipsec.options.esn = 1;
}
+static void
+sec_tls12_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls12_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls12_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+
+ sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -1815,6 +2013,11 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
+ if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
+ sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ }
+
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
@@ -1830,6 +2033,13 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (roc_model_is_cn9k())
cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ } else if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD) {
+ if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_TLS_1_3)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_3_crypto_caps;
+ else if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_dtls_1_2_crypto_caps;
+ else
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_2_crypto_caps;
}
}
}
--
2.25.1
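The cursor-based capability population introduced by sec_tls12_caps_add() above follows a simple append pattern: each algorithm sub-array is copied into a fixed-size table at the running position, with a capacity check (the driver's PLT_VERIFY) before every copy. The types and capacity below are illustrative stand-ins, not the driver's `rte_cryptodev_capabilities` definitions.

```c
#include <assert.h>
#include <string.h>

#define CAPS_MAX 6 /* stand-in for CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS */

typedef struct { int algo; } cap_t;

static void
caps_add(cap_t table[], int *cur_pos, const cap_t *caps, int nb_caps)
{
	/* Mirrors sec_tls12_caps_limit_check() / PLT_VERIFY() */
	assert(*cur_pos + nb_caps <= CAPS_MAX);

	memcpy(&table[*cur_pos], caps, nb_caps * sizeof(caps[0]));
	*cur_pos += nb_caps;
}
```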
* [PATCH 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (16 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 17/24] crypto/cnxk: add TLS capability Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode Anoob Joseph
` (6 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add PMD APIs to allow applications to directly submit CPT instructions
to hardware.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/rel_notes/release_24_03.rst | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 75 ++++++++---------
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 3 +
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 56 -------------
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 99 +++++++++++++++++++++++
drivers/crypto/cnxk/meson.build | 2 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +++++++++++
10 files changed, 252 insertions(+), 94 deletions(-)
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..69f1a54511 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -49,6 +49,7 @@ The public API headers are grouped by topics:
[iavf](@ref rte_pmd_iavf.h),
[bnxt](@ref rte_pmd_bnxt.h),
[cnxk](@ref rte_pmd_cnxk.h),
+ [cnxk_crypto](@ref rte_pmd_cnxk_crypto.h),
[cnxk_eventdev](@ref rte_pmd_cnxk_eventdev.h),
[cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
[dpaa](@ref rte_pmd_dpaa.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..6d11de580e 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -6,6 +6,7 @@ PROJECT_NUMBER = @VERSION@
USE_MDFILE_AS_MAINPAGE = @TOPDIR@/doc/api/doxy-api-index.md
INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/drivers/bus/vdev \
+ @TOPDIR@/drivers/crypto/cnxk \
@TOPDIR@/drivers/crypto/scheduler \
@TOPDIR@/drivers/dma/dpaa2 \
@TOPDIR@/drivers/event/dlb2 \
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 0ebbae9f4e..f5773bab5a 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -60,6 +60,7 @@ New Features
* Added support for Rx inject in crypto_cn10k.
* Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
and DTLS 1.2.
+ * Added PMD API to allow raw submission of instructions to CPT.
Removed Items
-------------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c87a8bae1a..c350371505 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -34,13 +34,12 @@
#include "cnxk_eventdev.h"
#include "cnxk_se.h"
-#define PKTS_PER_LOOP 32
-#define PKTS_PER_STEORL 16
+#include "rte_pmd_cnxk_crypto.h"
/* Holds information required to send crypto operations in one burst */
struct ops_burst {
- struct rte_crypto_op *op[PKTS_PER_LOOP];
- uint64_t w2[PKTS_PER_LOOP];
+ struct rte_crypto_op *op[CN10K_PKTS_PER_LOOP];
+ uint64_t w2[CN10K_PKTS_PER_LOOP];
struct cn10k_sso_hws *ws;
struct cnxk_cpt_qp *qp;
uint16_t nb_ops;
@@ -252,7 +251,7 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
infl_req = &pend_q->req_queue[head];
infl_req->op_flags = 0;
@@ -267,23 +266,21 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 |
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
(uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG |
- (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 |
- (uint64_t)lmt_id;
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
}
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
ops += i;
count += i;
@@ -487,7 +484,7 @@ cn10k_cpt_vec_submit(struct vec_request vec_tbl[], uint16_t vec_tbl_len, struct
inst = (struct cpt_inst_s *)lmt_base;
again:
- burst_size = RTE_MIN(PKTS_PER_STEORL, vec_tbl_len);
+ burst_size = RTE_MIN(CN10K_PKTS_PER_STEORL, vec_tbl_len);
for (i = 0; i < burst_size; i++)
cn10k_cpt_vec_inst_fill(&vec_tbl[i], &inst[i * 2], qp, vec_tbl[0].w7);
@@ -516,7 +513,7 @@ static inline int
ca_lmtst_vec_submit(struct ops_burst *burst, struct vec_request vec_tbl[], uint16_t *vec_tbl_len,
const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
uint16_t lmt_id, len = *vec_tbl_len;
struct cpt_inst_s *inst, *inst_base;
@@ -618,11 +615,12 @@ next_op:;
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -647,7 +645,7 @@ next_op:;
static inline uint16_t
ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
struct cpt_inst_s *inst, *inst_base;
struct cpt_inflight_req *infl_req;
@@ -718,11 +716,12 @@ ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -791,7 +790,7 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev
burst.op[burst.nb_ops] = op;
/* Max nb_ops per burst check */
- if (++burst.nb_ops == PKTS_PER_LOOP) {
+ if (++burst.nb_ops == CN10K_PKTS_PER_LOOP) {
if (is_vector)
submitted = ca_lmtst_vec_submit(&burst, vec_tbl, &vec_tbl_len,
is_sg_ver2);
@@ -1146,7 +1145,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
again:
inst = (struct cpt_inst_s *)lmt_base;
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_pkts); i++) {
m = pkts[i];
sec_sess = (struct cn10k_sec_session *)sess[i];
@@ -1192,11 +1191,12 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst += 2;
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1205,7 +1205,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
rte_io_wmb();
- if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_pkts - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_pkts -= i;
pkts += i;
count += i;
@@ -1332,7 +1332,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
struct cnxk_iov iov;
index = count + i;
@@ -1354,11 +1354,12 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1367,7 +1368,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
count += i;
goto again;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index 34becede3c..406c4abc7f 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -12,6 +12,9 @@
#include "cnxk_cryptodev.h"
+#define CN10K_PKTS_PER_LOOP 32
+#define CN10K_PKTS_PER_STEORL 16
+
extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 442cd8e5a9..ac9393eacf 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -122,62 +122,6 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
-static inline void
-cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy CPT command to LMTLINE */
- roc_lmt_mov64((void *)lmtline, inst);
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
-static __plt_always_inline void
-cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy 2 CPT inst_s to LMTLINE */
-#if defined(RTE_ARCH_ARM64)
- uint64_t *s = (uint64_t *)inst;
- uint64_t *d = (uint64_t *)lmtline;
-
- vst1q_u64(&d[0], vld1q_u64(&s[0]));
- vst1q_u64(&d[2], vld1q_u64(&s[2]));
- vst1q_u64(&d[4], vld1q_u64(&s[4]));
- vst1q_u64(&d[6], vld1q_u64(&s[6]));
- vst1q_u64(&d[8], vld1q_u64(&s[8]));
- vst1q_u64(&d[10], vld1q_u64(&s[10]));
- vst1q_u64(&d[12], vld1q_u64(&s[12]));
- vst1q_u64(&d[14], vld1q_u64(&s[14]));
-#else
- roc_lmt_mov_seg((void *)lmtline, inst, 8);
-#endif
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
static uint16_t
cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
{
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index c6ec96153e..3d667094f3 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -8,8 +8,70 @@
#include <rte_compat.h>
#include <cryptodev_pmd.h>
+#include <hw/cpt.h>
+
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
+
extern struct rte_cryptodev_ops cn9k_cpt_ops;
+static inline void
+cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy CPT command to LMTLINE */
+ roc_lmt_mov64((void *)lmtline, inst);
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
+static __plt_always_inline void
+cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy 2 CPT inst_s to LMTLINE */
+#if defined(RTE_ARCH_ARM64)
+ volatile const __uint128_t *src128 = (const __uint128_t *)inst;
+ volatile __uint128_t *dst128 = (__uint128_t *)lmtline;
+
+ dst128[0] = src128[0];
+ dst128[1] = src128[1];
+ dst128[2] = src128[2];
+ dst128[3] = src128[3];
+ dst128[4] = src128[4];
+ dst128[5] = src128[5];
+ dst128[6] = src128[6];
+ dst128[7] = src128[7];
+#else
+ roc_lmt_mov_seg((void *)lmtline, inst, 8);
+#endif
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
void cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
__rte_internal
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index fd44155955..7a37e3e89c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -12,6 +12,11 @@
#include "roc_errata.h"
#include "roc_idev.h"
#include "roc_ie_on.h"
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
@@ -19,6 +24,11 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_se.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn9k_cryptodev_ops.h"
+
+#include "rte_pmd_cnxk_crypto.h"
+
#define CNXK_CPT_MAX_ASYM_OP_NUM_PARAMS 5
#define CNXK_CPT_MAX_ASYM_OP_MOD_LEN 1024
#define CNXK_CPT_META_BUF_MAX_CACHE_SIZE 128
@@ -918,3 +928,92 @@ cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id)
}
return 0;
}
+
+void *
+rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id)
+{
+ const struct rte_crypto_fp_ops *fp_ops;
+ void *qptr;
+
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qptr = fp_ops->qp.data[qp_id];
+
+ return qptr;
+}
+
+static inline void
+cnxk_crypto_cn10k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cnxk_cpt_qp *qp = qptr;
+ uint16_t i, j, lmt_id;
+ void *lmt_dst;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+
+again:
+ i = RTE_MIN(nb_inst, CN10K_PKTS_PER_LOOP);
+ lmt_dst = PLT_PTR_CAST(lmt_base);
+
+ for (j = 0; j < i; j++) {
+ rte_memcpy(lmt_dst, inst, sizeof(struct cpt_inst_s));
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ lmt_dst = RTE_PTR_ADD(lmt_dst, 2 * sizeof(struct cpt_inst_s));
+ }
+
+ rte_io_wmb();
+
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_inst - i > 0) {
+ nb_inst -= i;
+ goto again;
+ }
+}
+
+static inline void
+cnxk_crypto_cn9k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ struct cnxk_cpt_qp *qp = qptr;
+
+ const uint64_t lmt_base = qp->lf.lmt_base;
+ const uint64_t io_addr = qp->lf.io_addr;
+
+ if (unlikely(nb_inst & 1)) {
+ cn9k_cpt_inst_submit(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ nb_inst -= 1;
+ }
+
+ while (nb_inst > 0) {
+ cn9k_cpt_inst_submit_dual(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, 2 * sizeof(struct cpt_inst_s));
+ nb_inst -= 2;
+ }
+}
+
+void
+rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ if (roc_model_is_cn10k())
+ return cnxk_crypto_cn10k_submit(qptr, inst, nb_inst);
+ else if (roc_model_is_cn9k())
+ return cnxk_crypto_cn9k_submit(qptr, inst, nb_inst);
+
+ plt_err("Invalid cnxk model");
+}
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index ee0c65e32a..aa840fb7bb 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -24,8 +24,8 @@ sources = files(
'cnxk_cryptodev_sec.c',
)
+headers = files('rte_pmd_cnxk_crypto.h')
deps += ['bus_pci', 'common_cnxk', 'security', 'eventdev']
-
includes += include_directories('../../../lib/net', '../../event/cnxk')
if get_option('buildtype').contains('debug')
diff --git a/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
new file mode 100644
index 0000000000..64978a008b
--- /dev/null
+++ b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+/**
+ * @file rte_pmd_cnxk_crypto.h
+ * Marvell CNXK Crypto PMD specific functions.
+ *
+ */
+
+#ifndef _PMD_CNXK_CRYPTO_H_
+#define _PMD_CNXK_CRYPTO_H_
+
+#include <stdint.h>
+
+/**
+ * Get queue pointer of a specific queue in a cryptodev.
+ *
+ * @param dev_id
+ * Device identifier of cryptodev device.
+ * @param qp_id
+ * Index of the queue pair.
+ * @return
+ * Pointer to queue pair structure that would be the input to submit APIs.
+ */
+void *rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id);
+
+/**
+ * Submit CPT instruction (cpt_inst_s) to hardware (CPT).
+ *
+ * The ``qptr`` is a pointer obtained from ``rte_pmd_cnxk_crypto_qptr_get``. The application must
+ * ensure the internal hardware queues are not overflowed, for example by keeping the number of
+ * inflight packets within the number of descriptors configured.
+ *
+ * This API may be called only after the cryptodev and queue pair are configured and started.
+ *
+ * @param qptr
+ * Pointer obtained with ``rte_pmd_cnxk_crypto_qptr_get``.
+ * @param inst
+ * Pointer to an array of instructions prepared by application.
+ * @param nb_inst
+ * Number of instructions.
+ */
+void rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst);
+
+#endif /* _PMD_CNXK_CRYPTO_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (17 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
` (5 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Replace the PDCP opcode with the PDCP chain opcode for cipher-only and auth-only cases.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/common/cnxk/roc_se.c | 331 +++++++++-------------------------
drivers/common/cnxk/roc_se.h | 18 +-
drivers/crypto/cnxk/cnxk_se.h | 96 +++++-----
3 files changed, 135 insertions(+), 310 deletions(-)
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 6ced4ef789..4e00268149 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -88,13 +88,20 @@ cpt_ciph_type_set(roc_se_cipher_type type, struct roc_se_ctx *ctx, uint16_t key_
fc_type = ROC_SE_FC_GEN;
break;
case ROC_SE_ZUC_EEA3:
- if (chained_op) {
- if (unlikely(key_len != 16))
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
return -1;
+ }
+ }
+ if (chained_op)
fc_type = ROC_SE_PDCP_CHAIN;
- } else {
+ else
fc_type = ROC_SE_PDCP;
- }
break;
case ROC_SE_SNOW3G_UEA2:
if (unlikely(key_len != 16))
@@ -197,33 +204,6 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
}
}
-static int
-cpt_pdcp_key_type_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t key_len)
-{
- roc_se_aes_type key_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (key_len != 16) {
- plt_err("Only key len 16 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (key_len) {
- case 16:
- key_type = ROC_SE_AES_128_BIT;
- break;
- case 32:
- key_type = ROC_SE_AES_256_BIT;
- break;
- default:
- plt_err("Invalid AES key len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.key_len = key_type;
- return 0;
-}
-
static int
cpt_pdcp_chain_key_type_get(uint16_t key_len)
{
@@ -247,36 +227,6 @@ cpt_pdcp_chain_key_type_get(uint16_t key_len)
return key_type;
}
-static int
-cpt_pdcp_mac_len_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t mac_len)
-{
- roc_se_pdcp_mac_len_type mac_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (mac_len != 4) {
- plt_err("Only mac len 4 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (mac_len) {
- case 4:
- mac_type = ROC_SE_PDCP_MAC_LEN_32_BIT;
- break;
- case 8:
- mac_type = ROC_SE_PDCP_MAC_LEN_64_BIT;
- break;
- case 16:
- mac_type = ROC_SE_PDCP_MAC_LEN_128_BIT;
- break;
- default:
- plt_err("Invalid ZUC MAC len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.mac_len = mac_type;
- return 0;
-}
-
static void
cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
{
@@ -300,32 +250,27 @@ cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
}
int
-roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
- const uint8_t *key, uint16_t key_len, uint16_t mac_len)
+roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint8_t *key,
+ uint16_t key_len, uint16_t mac_len)
{
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
- struct roc_se_zuc_snow3g_ctx *zs_ctx;
struct roc_se_kasumi_ctx *k_ctx;
+ struct roc_se_pdcp_ctx *pctx;
struct roc_se_context *fctx;
uint8_t opcode_minor;
- uint8_t pdcp_alg;
bool chained_op;
- int ret;
if (se_ctx == NULL)
return -1;
- zs_ctx = &se_ctx->se_ctx.zs_ctx;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
+ pctx = &se_ctx->se_ctx.pctx;
k_ctx = &se_ctx->se_ctx.k_ctx;
fctx = &se_ctx->se_ctx.fctx;
chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
if ((type >= ROC_SE_ZUC_EIA3) && (type <= ROC_SE_KASUMI_F9_ECB)) {
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
if (!key_len)
return -1;
@@ -335,98 +280,64 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
return -1;
}
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
-
/* For ZUC/SNOW3G/Kasumi */
switch (type) {
case ROC_SE_SNOW3G_UIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.auth_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.mac_len = mac_len;
+ pctx->w0.s.auth_key_len = key_len;
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.auth_key, keyx, key_len);
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_ZUC_EIA3:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- memcpy(ctx->st.auth_key, key, key_len);
- cpt_zuc_const_update(ctx->st.auth_zuc_const,
- key_len, mac_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- ret = cpt_pdcp_mac_len_set(zs_ctx, mac_len);
- if (ret)
- return ret;
- memcpy(ci_key, key, key_len);
- if (key_len == 32)
- roc_se_zuc_bytes_swap(ci_key, key_len);
- cpt_zuc_const_update(zuc_const, key_len,
- mac_len);
- se_ctx->fc_type = ROC_SE_PDCP;
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (se_ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
+ return -1;
+ }
}
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ if (key_len == 32)
+ roc_se_zuc_bytes_swap(pctx->st.auth_key, key_len);
+ cpt_zuc_const_update(pctx->st.auth_zuc_const, key_len, mac_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
+ se_ctx->fc_type = ROC_SE_PDCP;
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_AES_CMAC_EIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.auth_key_len = key_type;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- ctx->w0.s.mac_len = mac_len;
- memcpy(ctx->st.auth_key, key, key_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- memcpy(ci_key, key, key_len);
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_AES_CMAC;
se_ctx->eia2 = 1;
se_ctx->zsk_flags = 0x1;
@@ -454,11 +365,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
se_ctx->mac_len = mac_len;
se_ctx->hash_type = type;
- pdcp_alg = zs_ctx->zuc.otk_ctx.w0.s.alg_type;
if (chained_op)
opcode_minor = se_ctx->ciph_then_auth ? 2 : 3;
- else if (roc_model_is_cn9k())
- opcode_minor = ((1 << 7) | (pdcp_alg << 5) | 1);
else
opcode_minor = ((1 << 4) | 1);
@@ -513,29 +421,18 @@ int
roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const uint8_t *key,
uint16_t key_len)
{
- bool chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
struct roc_se_context *fctx = &se_ctx->se_ctx.fctx;
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
+ struct roc_se_pdcp_ctx *pctx;
uint8_t opcode_minor = 0;
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
int i, ret;
/* For NULL cipher, no processing required. */
if (type == ROC_SE_PASSTHROUGH)
return 0;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
-
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
+ pctx = &se_ctx->se_ctx.pctx;
if ((type == ROC_SE_AES_GCM) || (type == ROC_SE_AES_CCM))
se_ctx->template_w4.s.opcode_minor = BIT(5);
@@ -615,72 +512,38 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
fctx->enc.enc_cipher = ROC_SE_DES3_CBC;
goto success;
case ROC_SE_SNOW3G_UEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.ci_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
- }
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.ci_key_len = key_len;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.ci_key, keyx, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_ZUC_EEA3:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- memcpy(ctx->st.ci_key, key, key_len);
- memcpy(ctx->st.ci_zuc_const, zuc_key128, 32);
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- } else {
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- memcpy(ci_key, key, key_len);
- if (key_len == 32) {
- roc_se_zuc_bytes_swap(ci_key, key_len);
- memcpy(zuc_const, zuc_key256, 16);
- } else
- memcpy(zuc_const, zuc_key128, 32);
- }
-
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ memcpy(pctx->st.ci_key, key, key_len);
+ if (key_len == 32) {
+ roc_se_zuc_bytes_swap(pctx->st.ci_key, key_len);
+ memcpy(pctx->st.ci_zuc_const, zuc_key256, 16);
+ } else
+ memcpy(pctx->st.ci_zuc_const, zuc_key128, 32);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_AES_CTR_EEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.ci_key_len = key_type;
- ctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ctx->st.ci_key, key, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ci_key, key, key_len);
- }
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ memcpy(pctx->st.ci_key, key, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
se_ctx->zsk_flags = 0;
goto success;
@@ -720,20 +583,6 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
return 0;
}
-void
-roc_se_ctx_swap(struct roc_se_ctx *se_ctx)
-{
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
-
- if (roc_model_is_cn9k())
- return;
-
- if (se_ctx->fc_type == ROC_SE_PDCP_CHAIN)
- return;
-
- zs_ctx->zuc.otk_ctx.w0.u64 = htobe64(zs_ctx->zuc.otk_ctx.w0.u64);
-}
-
void
roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
{
@@ -745,15 +594,13 @@ roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
case ROC_SE_FC_GEN:
ctx_len = sizeof(struct roc_se_context);
break;
+ case ROC_SE_PDCP_CHAIN:
case ROC_SE_PDCP:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_ctx);
+ ctx_len = sizeof(struct roc_se_pdcp_ctx);
break;
case ROC_SE_KASUMI:
ctx_len = sizeof(struct roc_se_kasumi_ctx);
break;
- case ROC_SE_PDCP_CHAIN:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_chain_ctx);
- break;
case ROC_SE_SM:
ctx_len = sizeof(struct roc_se_sm_context);
break;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index abb8c6a149..d62c40b310 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -246,7 +246,7 @@ struct roc_se_onk_zuc_ctx {
uint8_t zuc_const[32];
};
-struct roc_se_onk_zuc_chain_ctx {
+struct roc_se_pdcp_ctx {
union {
uint64_t u64;
struct {
@@ -278,19 +278,6 @@ struct roc_se_onk_zuc_chain_ctx {
} st;
};
-struct roc_se_zuc_snow3g_chain_ctx {
- union {
- struct roc_se_onk_zuc_chain_ctx onk_ctx;
- } zuc;
-};
-
-struct roc_se_zuc_snow3g_ctx {
- union {
- struct roc_se_onk_zuc_ctx onk_ctx;
- struct roc_se_otk_zuc_ctx otk_ctx;
- } zuc;
-};
-
struct roc_se_kasumi_ctx {
uint8_t reg_A[8];
uint8_t ci_key[16];
@@ -356,8 +343,7 @@ struct roc_se_ctx {
} w0;
union {
struct roc_se_context fctx;
- struct roc_se_zuc_snow3g_ctx zs_ctx;
- struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx;
+ struct roc_se_pdcp_ctx pctx;
struct roc_se_kasumi_ctx k_ctx;
struct roc_se_sm_context sm_ctx;
};
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 1aec7dea9f..8193e96a92 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -298,8 +298,13 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -375,7 +380,7 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -492,8 +497,13 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -567,7 +577,7 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -1617,28 +1627,34 @@ static __rte_always_inline int
cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
struct roc_se_fc_params *params, struct cpt_inst_s *inst, const bool is_sg_ver2)
{
+ /*
+ * pdcp_iv_offset is the offset of the auth IV from the cipher IV.
+ * It is 16 with older microcode that lacks ZUC 256 support,
+ * and 24 with newer microcode that supports ZUC 256.
+ * Hence, the space reserved for the cipher and auth IVs is
+ * 32B with older microcode and 48B with newer microcode.
+ */
+ const int iv_len = params->pdcp_iv_offset * 2;
+ struct roc_se_ctx *se_ctx = params->ctx;
uint32_t encr_data_len, auth_data_len;
+ const int flags = se_ctx->zsk_flags;
uint32_t encr_offset, auth_offset;
union cpt_inst_w4 cpt_inst_w4;
int32_t inputlen, outputlen;
- struct roc_se_ctx *se_ctx;
uint64_t *offset_vaddr;
uint8_t pdcp_alg_type;
uint32_t mac_len = 0;
const uint8_t *iv_s;
uint8_t pack_iv = 0;
uint64_t offset_ctrl;
- int flags, iv_len;
int ret;
- se_ctx = params->ctx;
- flags = se_ctx->zsk_flags;
mac_len = se_ctx->mac_len;
cpt_inst_w4.u64 = se_ctx->template_w4.u64;
- cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP;
if (flags == 0x1) {
+ cpt_inst_w4.s.opcode_minor = 1;
iv_s = params->auth_iv_buf;
/*
@@ -1650,47 +1666,32 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
pdcp_alg_type = se_ctx->pdcp_auth_alg;
if (pdcp_alg_type != ROC_SE_PDCP_ALG_TYPE_AES_CMAC) {
- iv_len = params->auth_iv_len;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->auth_iv_len == 25)
pack_iv = 1;
- }
auth_offset = auth_offset / 8;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen =
- auth_offset + (RTE_ALIGN(auth_data_len, 8) / 8);
- } else {
- iv_len = 16;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen = auth_offset + auth_data_len;
-
- /* length should be in bits */
- auth_data_len *= 8;
+ auth_data_len = RTE_ALIGN(auth_data_len, 8) / 8;
}
- outputlen = mac_len;
+ /* consider iv len */
+ auth_offset += iv_len;
+
+ inputlen = auth_offset + auth_data_len;
+ outputlen = iv_len + mac_len;
offset_ctrl = rte_cpu_to_be_64((uint64_t)auth_offset);
+ cpt_inst_w4.s.param1 = auth_data_len;
encr_data_len = 0;
encr_offset = 0;
} else {
+ cpt_inst_w4.s.opcode_minor = 0;
iv_s = params->iv_buf;
- iv_len = params->cipher_iv_len;
pdcp_alg_type = se_ctx->pdcp_ci_alg;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->cipher_iv_len == 25)
pack_iv = 1;
- }
/*
* Microcode expects offsets in bytes
@@ -1700,6 +1701,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
encr_offset = ROC_SE_ENCR_OFFSET(d_offs);
encr_offset = encr_offset / 8;
+
/* consider iv len */
encr_offset += iv_len;
@@ -1707,10 +1709,11 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
outputlen = inputlen;
/* iv offset is 0 */
- offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16);
+ offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset);
auth_data_len = 0;
auth_offset = 0;
+ cpt_inst_w4.s.param1 = (RTE_ALIGN(encr_data_len, 8) / 8);
}
if (unlikely((encr_offset >> 16) || (auth_offset >> 8))) {
@@ -1720,12 +1723,6 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
return -1;
}
- /*
- * Lengths are expected in bits.
- */
- cpt_inst_w4.s.param1 = encr_data_len;
- cpt_inst_w4.s.param2 = auth_data_len;
-
/*
* In cn9k, cn10k since we have a limitation of
* IV & Offset control word not part of instruction
@@ -1738,6 +1735,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
/* Use Direct mode */
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN;
offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len);
/* DPTR */
@@ -1753,6 +1751,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
*offset_vaddr = offset_ctrl;
inst->w4.u64 = cpt_inst_w4.u64;
} else {
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN | ROC_DMA_MODE_SG;
inst->w4.u64 = cpt_inst_w4.u64;
if (is_sg_ver2)
ret = sg2_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, pack_iv,
@@ -2243,8 +2242,6 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
c_form->key.length)))
return -1;
- if ((enc_type >= ROC_SE_ZUC_EEA3) && (enc_type <= ROC_SE_AES_CTR_EEA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
return 0;
}
@@ -2403,15 +2400,10 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
sess->auth_iv_offset = a_form->iv.offset;
sess->auth_iv_length = a_form->iv.length;
}
- if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type,
- a_form->key.data, a_form->key.length,
- a_form->digest_length)))
+ if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type, a_form->key.data,
+ a_form->key.length, a_form->digest_length)))
return -1;
- if ((auth_type >= ROC_SE_ZUC_EIA3) &&
- (auth_type <= ROC_SE_AES_CMAC_EIA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
-
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 20/24] crypto/cnxk: validate the combinations supported in TLS
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (18 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 21/24] crypto/cnxk: use a single function for opad ipad Anoob Joseph
` (4 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Validate the cipher and auth combinations to allow only those
supported by the hardware.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_tls.c | 35 ++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index e1ed65b06a..fa3ce3e758 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -17,6 +17,36 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_security.h"
+static int
+tls_xform_cipher_auth_verify(struct rte_crypto_sym_xform *cipher_xform,
+ struct rte_crypto_sym_xform *auth_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = cipher_xform->cipher.algo;
+ enum rte_crypto_auth_algorithm a_algo = auth_xform->auth.algo;
+ int ret = -ENOTSUP;
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ if ((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) || (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ if (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ if ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
static int
tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
{
@@ -138,7 +168,10 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
ret = tls_xform_cipher_verify(cipher_xform);
if (!ret)
- return tls_xform_auth_verify(auth_xform);
+ ret = tls_xform_auth_verify(auth_xform);
+
+ if (cipher_xform && !ret)
+ return tls_xform_cipher_auth_verify(cipher_xform, auth_xform);
return ret;
}
--
2.25.1
* [PATCH 21/24] crypto/cnxk: use a single function for opad ipad
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (19 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
` (3 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Use a single function for opad and ipad generation for IPsec, TLS and
flexi crypto.
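The opad/ipad precomputation that the unified function performs is the standard HMAC key schedule: the key is XORed with the 0x5c/0x36 pads and each padded block is hashed once up front, so the per-packet path only continues from those precomputed states. A hedged illustration of the underlying identity, with Python's hashlib standing in for the ROC hash routines (this recomputes full hashes rather than saving hash midstates as the hardware context does):

```python
import hashlib
import hmac

def hmac_via_pads(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA256 built explicitly from the opad/ipad construction."""
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")
    opad = bytes(b ^ 0x5C for b in key)   # outer pad, as in the patch
    ipad = bytes(b ^ 0x36 for b in key)   # inner pad, as in the patch
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Matches the reference HMAC implementation.
assert hmac_via_pads(b"k" * 16, b"payload") == \
    hmac.new(b"k" * 16, b"payload", hashlib.sha256).digest()
```

The per-protocol difference handled by the new `roc_se_op_type` argument is only where in the context buffer the two precomputed states land (ipad offset 24 or 64 for IPsec, 64 for TLS, opad at 64 for flexi crypto).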
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 65 ++++++-----------------------
drivers/common/cnxk/cnxk_security.h | 5 ---
drivers/common/cnxk/roc_se.c | 48 ++++++++++++++-------
drivers/common/cnxk/roc_se.h | 9 ++++
drivers/common/cnxk/version.map | 2 +-
drivers/crypto/cnxk/cn10k_tls.c | 8 +++-
6 files changed, 61 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index bdb04fe142..64c901a57a 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,55 +8,9 @@
#include "roc_api.h"
-void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls)
-{
- const uint8_t *key = auth_xform->auth.key.data;
- uint32_t length = auth_xform->auth.key.length;
- uint8_t opad[128] = {[0 ... 127] = 0x5c};
- uint8_t ipad[128] = {[0 ... 127] = 0x36};
- uint32_t i;
-
- /* HMAC OPAD and IPAD */
- for (i = 0; i < 128 && i < length; i++) {
- opad[i] = opad[i] ^ key[i];
- ipad[i] = ipad[i] ^ key[i];
- }
-
- /* Precompute hash of HMAC OPAD and IPAD to avoid
- * per packet computation
- */
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)&hmac_opad_ipad[64], 256);
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 384);
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 512);
- break;
- default:
- break;
- }
-}
-
static int
-ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
- uint8_t *cipher_key, uint8_t *salt_key,
- uint8_t *hmac_opad_ipad,
+ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2, uint8_t *cipher_key,
+ uint8_t *salt_key, uint8_t *hmac_opad_ipad,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
@@ -192,7 +146,9 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(w2->s.auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, &hmac_opad_ipad[0],
+ ROC_SE_IPSEC);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +697,8 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(ctl->auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, hmac_opad_ipad, ROC_SE_IPSEC);
}
switch (length) {
@@ -1374,7 +1331,9 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ out_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
@@ -1441,7 +1400,9 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ in_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 86ec657cb0..b323b8b757 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -68,9 +68,4 @@ int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec
int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
struct rte_crypto_sym_xform *crypto_xform,
struct roc_ie_on_outb_sa *out_sa);
-
-__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls);
-
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 4e00268149..5a3ed0b647 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -157,14 +157,29 @@ cpt_ciph_aes_key_type_set(struct roc_se_context *fctx, uint16_t key_len)
fctx->enc.aes_key = aes_key_type;
}
-static void
-cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
- struct roc_se_hmac_context *hmac)
+void
+roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
+ uint8_t *opad_ipad, roc_se_op_type op_type)
{
uint8_t opad[128] = {[0 ... 127] = 0x5c};
uint8_t ipad[128] = {[0 ... 127] = 0x36};
+ uint8_t ipad_offset, opad_offset;
uint32_t i;
+ if (op_type == ROC_SE_IPSEC) {
+ if ((auth_type == ROC_SE_MD5_TYPE) || (auth_type == ROC_SE_SHA1_TYPE))
+ ipad_offset = 24;
+ else
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else if (op_type == ROC_SE_TLS) {
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else {
+ ipad_offset = 0;
+ opad_offset = 64;
+ }
+
/* HMAC OPAD and IPAD */
for (i = 0; i < 128 && i < length; i++) {
opad[i] = opad[i] ^ key[i];
@@ -176,28 +191,28 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
*/
switch (auth_type) {
case ROC_SE_MD5_TYPE:
- roc_hash_md5_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_md5_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_md5_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA1_TYPE:
- roc_hash_sha1_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_sha1_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_sha1_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA2_SHA224:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 224);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 224);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 224);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 224);
break;
case ROC_SE_SHA2_SHA256:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 256);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 256);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 256);
break;
case ROC_SE_SHA2_SHA384:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 384);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 384);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 384);
break;
case ROC_SE_SHA2_SHA512:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 512);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 512);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 512);
break;
default:
break;
@@ -401,7 +416,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint
if (chained_op) {
memset(fctx->hmac.ipad, 0, sizeof(fctx->hmac.ipad));
memset(fctx->hmac.opad, 0, sizeof(fctx->hmac.opad));
- cpt_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac);
+ roc_se_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac.ipad[0],
+ ROC_SE_FC);
fctx->enc.auth_input_type = 0;
} else {
se_ctx->hmac = 1;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d62c40b310..ddcf6bdb44 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -191,6 +191,12 @@ typedef enum {
ROC_SE_PDCP_MAC_LEN_128_BIT = 0x3
} roc_se_pdcp_mac_len_type;
+typedef enum {
+ ROC_SE_IPSEC = 0x0,
+ ROC_SE_TLS = 0x1,
+ ROC_SE_FC = 0x2,
+} roc_se_op_type;
+
struct roc_se_enc_context {
uint64_t iv_source : 1;
uint64_t aes_key : 2;
@@ -401,4 +407,7 @@ int __roc_api roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type
void __roc_api roc_se_ctx_swap(struct roc_se_ctx *se_ctx);
void __roc_api roc_se_ctx_init(struct roc_se_ctx *se_ctx);
+void __roc_api roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key,
+ uint16_t length, uint8_t *opad_ipad,
+ roc_se_op_type op_type);
#endif /* __ROC_SE_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 15fd5710d2..b8b0478848 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,7 +1,6 @@
INTERNAL {
global:
- cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
@@ -472,6 +471,7 @@ INTERNAL {
roc_plt_init;
roc_plt_init_cb_register;
roc_plt_lmt_validate;
+ roc_se_hmac_opad_ipad_gen;
roc_sso_dev_fini;
roc_sso_dev_init;
roc_sso_dump;
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index fa3ce3e758..5baea181e8 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -376,7 +376,9 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+
tmp = (uint64_t *)read_sa->opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -503,7 +505,9 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ ROC_SE_TLS);
}
tmp_key = (uint64_t *)write_sa->opad_ipad;
--
2.25.1
* [PATCH 22/24] crypto/cnxk: add support for TLS 1.3
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (20 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 21/24] crypto/cnxk: use a single function for opad ipad Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
` (2 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS-1.3.
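One detail visible in the tls_write_sa_fill() changes below is how the starting sequence number is seeded for each version: for DTLS 1.2 the 16-bit epoch is packed into the upper bits of a 64-bit counter alongside the 48-bit sequence number, and the SA stores the value one below the first expected record. A small sketch of that packing:

```python
MASK_48 = 0x0000FFFFFFFFFFFF  # 48-bit DTLS sequence number mask

def dtls12_initial_seq(epoch: int, seq_no: int) -> int:
    """Pack epoch (upper 16 bits) with the 48-bit sequence number,
    then subtract 1 so hardware increments to seq_no on first use."""
    return (((epoch & 0xFFFF) << 48) | (seq_no & MASK_48)) - 1
```

TLS 1.2 and TLS 1.3 use the plain `seq_no - 1` with no epoch field, stored in the version-specific `tls_12`/`tls_13` halves of the SA union.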
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_ie_ot_tls.h | 50 +++++--
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 3 +-
drivers/crypto/cnxk/cn10k_tls.c | 159 +++++++++++++---------
3 files changed, 136 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
index 61955ef4d1..91ddb25f7a 100644
--- a/drivers/common/cnxk/roc_ie_ot_tls.h
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -17,8 +17,10 @@
(PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
/* CN10K TLS opcodes */
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC 0x18UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC 0x19UL
#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
@@ -42,6 +44,7 @@ enum roc_ie_ot_tls_cipher_type {
enum roc_ie_ot_tls_ver {
ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+ ROC_IE_OT_TLS_VERSION_TLS_13 = 3,
};
enum roc_ie_ot_tls_aes_key_len {
@@ -131,11 +134,23 @@ struct roc_ie_ot_tls_read_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd6;
+
+ /* Word11 - Word25 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 - Word32 */
- struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ /* Word26 - Word95 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_12;
+ };
};
struct roc_ie_ot_tls_write_sa {
@@ -187,13 +202,24 @@ struct roc_ie_ot_tls_write_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd7;
+
+ uint64_t seq_num;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 */
- uint64_t w26_rsvd7;
+ /* Word26 */
+ uint64_t w26_rsvd7;
- /* Word27 */
- uint64_t seq_num;
+ /* Word27 */
+ uint64_t seq_num;
+ } tls_12;
+ };
};
#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 33fd3aa398..1e117051cc 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -31,8 +31,7 @@ struct cn10k_sec_session {
} ipsec;
struct {
uint8_t enable_padding : 1;
- uint8_t hdr_len : 4;
- uint8_t rvsd : 3;
+ uint8_t rvsd : 7;
bool is_write;
} tls;
};
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 5baea181e8..ce253e3eba 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -105,7 +105,8 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
int ret = 0;
if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
- (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_3))
return -EINVAL;
if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
@@ -115,6 +116,12 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
return tls_xform_aead_verify(tls_xform, crypto_xform);
+ /* TLS-1.3 only support AEAD.
+ * Control should not reach here for TLS-1.3
+ */
+ if (tls_xform->ver == RTE_SECURITY_VERSION_TLS_1_3)
+ return -EINVAL;
+
if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
/* Egress */
@@ -259,7 +266,7 @@ tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -274,7 +281,7 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -283,13 +290,18 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
}
static size_t
-tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa, enum rte_security_tls_version tls_ver)
{
size_t size;
/* Variable based on Anti-replay Window */
- size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
- offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ } else {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ }
if (sa->w0.s.ar_win)
size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
@@ -302,6 +314,7 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint64_t *tmp, *tmp_key;
@@ -313,13 +326,22 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
/* Initialize the SA */
memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->tls_12.ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ read_sa->tls_13.ctx.ar_valid_mask = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = read_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -330,10 +352,12 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -377,9 +401,10 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+ auth_xfrm->auth.key.length, read_sa->tls_12.opad_ipad,
+ ROC_SE_TLS);
- tmp = (uint64_t *)read_sa->opad_ipad;
+ tmp = (uint64_t *)read_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -403,24 +428,20 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
read_sa->w0.s.aop_valid = 1;
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx);
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa, tls_ver), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
/* Word offset for HW managed CTX field */
read_sa->w0.s.hw_ctx_off = offset / 8;
read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
- /* Entire context size in 128B units */
- read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
- ROC_CTX_UNIT_128B) -
- 1;
-
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- }
-
rte_wmb();
return 0;
@@ -431,6 +452,7 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint8_t *cipher_key;
@@ -438,13 +460,25 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
int i, length = 0;
size_t offset;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->tls_12.seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->tls_12.seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->tls_12.seq_num -= 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ write_sa->tls_13.seq_num = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = write_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -455,10 +489,12 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -506,11 +542,11 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ auth_xfrm->auth.key.length, write_sa->tls_12.opad_ipad,
ROC_SE_TLS);
}
- tmp_key = (uint64_t *)write_sa->opad_ipad;
+ tmp_key = (uint64_t *)write_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
@@ -520,40 +556,37 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
-
- /* Word offset for HW managed CTX field */
- write_sa->w0.s.hw_ctx_off = offset / 8;
- write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
-
/* Entire context size in 128B units */
write_sa->w0.s.ctx_size =
(PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
ROC_CTX_UNIT_128B) -
1;
- write_sa->w0.s.aop_valid = 1;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
- (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
- write_sa->seq_num -= 1;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_13.w10_rsvd7);
+ write_sa->w0.s.ctx_size -= 1;
}
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ write_sa->w0.s.aop_valid = 1;
+
write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+ if (write_sa->w2.s.version_select != ROC_IE_OT_TLS_VERSION_TLS_13) {
#ifdef LA_IPSEC_DEBUG
- if (tls_xfrm->options.iv_gen_disable == 1)
- write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
#else
- if (tls_xfrm->options.iv_gen_disable) {
- plt_err("Application provided IV is not supported");
- return -ENOTSUP;
- }
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
#endif
+ }
rte_wmb();
@@ -599,20 +632,17 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->iv_length = crypto_xfrm->auth.iv.length;
}
- if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
- sec_sess->tls.hdr_len = 13;
- else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
- sec_sess->tls.hdr_len = 5;
-
sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
- /* Enable mib counters */
- sa_dptr->w0.s.count_mib_bytes = 1;
- sa_dptr->w0.s.count_mib_pkts = 1;
-
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
@@ -689,8 +719,13 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
-
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
--
2.25.1
* [PATCH 23/24] crypto/cnxk: add TLS 1.3 capability
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (21 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2023-12-21 12:35 ` [PATCH 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.3 record read and write capabilities.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 4 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 92 +++++++++++++++++++
2 files changed, 94 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index f5773bab5a..89110e0650 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,8 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
- * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
- and DTLS 1.2.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2,
+ DTLS 1.2 and TLS 1.3.
* Added PMD API to allow raw submission of instructions to CPT.
Removed Items
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 73100377d9..db50de5d58 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -40,6 +40,16 @@
RTE_DIM(sec_tls12_caps_##name)); \
} while (0)
+#define SEC_TLS13_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls13_caps_add(cnxk_caps, cur_pos, \
+ sec_tls13_caps_##name, \
+ RTE_DIM(sec_tls13_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1631,6 +1641,40 @@ static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls13_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 5,
+ .max = 5,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1760,6 +1804,26 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.3 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.3 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -2005,6 +2069,33 @@ sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
+static void
+sec_tls13_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls13_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls13_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls13_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS13_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+
+ sec_tls13_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -2016,6 +2107,7 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls13_crypto_caps_populate(vf->sec_tls_1_3_crypto_caps, vf->cpt.hw_caps);
}
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH 24/24] crypto/cnxk: add CPT SG mode debug
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (22 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
@ 2023-12-21 12:35 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2023-12-21 12:35 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Add CPT SG mode debug dump.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 135 +++++++++++++++++++++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 ++
2 files changed, 141 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c350371505..6cfcbafdcc 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -2,9 +2,10 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
+#include <rte_cryptodev.h>
#include <rte_event_crypto_adapter.h>
+#include <rte_hexdump.h>
#include <rte_ip.h>
#include <ethdev_driver.h>
@@ -103,6 +104,104 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+#ifdef CPT_INST_DEBUG_ENABLE
+static inline void
+cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sg2list_comp *sg_ptr = NULL;
+ uint16_t list_cnt = 0;
+ char suffix[64];
+ int i, j;
+
+ sg_ptr = (void *)in_buffer;
+ for (i = 0; i < components; i++) {
+ for (j = 0; j < sg_ptr->u.s.valid_segs; j++) {
+ list_ptr[i * 3 + j].size = sg_ptr->u.s.len[j];
+ list_ptr[i * 3 + j].vaddr = (void *)sg_ptr->ptr[j];
+ list_ptr[i * 3 + j].vaddr = list_ptr[i * 3 + j].vaddr;
+ list_cnt++;
+ }
+ sg_ptr++;
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+
+static inline void
+cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sglist_comp *sg_ptr = NULL;
+ uint16_t list_cnt, components;
+ char suffix[64];
+ int i;
+
+ sg_ptr = (void *)(in_buffer + 8);
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[2]));
+ if (!glist) {
+ components = list_cnt / 4;
+ if (list_cnt % 4)
+ components++;
+ sg_ptr += components;
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[3]));
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+ components = list_cnt / 4;
+ for (i = 0; i < components; i++) {
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 3].size = rte_be_to_cpu_16(sg_ptr->u.s.len[3]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 3].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[3]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ list_ptr[i * 4 + 3].vaddr = list_ptr[i * 4 + 3].vaddr;
+ sg_ptr++;
+ }
+
+ components = list_cnt % 4;
+ switch (components) {
+ case 3:
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ /* FALLTHROUGH */
+ case 2:
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ /* FALLTHROUGH */
+ case 1:
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ break;
+ default:
+ break;
+ }
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+#endif
+
static __rte_always_inline int __rte_hot
cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
@@ -205,6 +304,31 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
inst[0].w7.u64 = w7;
+#ifdef CPT_INST_DEBUG_ENABLE
+ infl_req->dptr = (uint8_t *)inst[0].dptr;
+ infl_req->rptr = (uint8_t *)inst[0].rptr;
+ infl_req->is_sg_ver2 = is_sg_ver2;
+ infl_req->scatter_sz = inst[0].w6.s.scatter_sz;
+ infl_req->opcode_major = inst[0].w4.s.opcode_major;
+
+ rte_hexdump(stdout, "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
+ printf("major opcode:%d\n", inst[0].w4.s.opcode_major);
+ printf("minor opcode:%d\n", inst[0].w4.s.opcode_minor);
+ printf("param1:%d\n", inst[0].w4.s.param1);
+ printf("param2:%d\n", inst[0].w4.s.param2);
+ printf("dlen:%d\n", inst[0].w4.s.dlen);
+
+ if (is_sg_ver2) {
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+ } else {
+ if (infl_req->opcode_major >> 7) {
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 1);
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 0);
+ }
+ }
+#endif
+
return 1;
}
@@ -935,6 +1059,15 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
}
if (likely(compcode == CPT_COMP_GOOD)) {
+#ifdef CPT_INST_DEBUG_ENABLE
+ if (infl_req->is_sg_ver2)
+ cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+ else {
+ if (infl_req->opcode_major >> 7)
+ cpt_request_data_sg_mode_dump(infl_req->dptr, 0);
+ }
+#endif
+
if (unlikely(uc_compcode)) {
if (uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c6bb8023ea..e7bba25cb8 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -51,6 +51,13 @@ struct cpt_inflight_req {
};
void *mdata;
uint8_t op_flags;
+#ifdef CPT_INST_DEBUG_ENABLE
+ uint8_t scatter_sz;
+ uint8_t opcode_major;
+ uint8_t is_sg_ver2;
+ uint8_t *dptr;
+ uint8_t *rptr;
+#endif
void *qp;
} __rte_aligned(ROC_ALIGN);
--
2.25.1
* [PATCH v2 00/24] Fixes and improvements in crypto cnxk
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (23 preceding siblings ...)
2023-12-21 12:35 ` [PATCH 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 01/24] common/cnxk: fix memory leak Anoob Joseph
` (25 more replies)
24 siblings, 26 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add following features
- TLS record processing offload (TLS 1.2-1.3, DTLS 1.2)
- Rx inject to allow lookaside packets to be injected to ethdev Rx
- Use PDCP_CHAIN opcode instead of PDCP opcode for cipher-only and auth
only cases
- PMD API to submit instructions directly to hardware
Changes in v2
- Addressed checkpatch issue
- Addressed build error with stdatomic
Aakash Sasidharan (1):
crypto/cnxk: enable digest gen for zero len input
Akhil Goyal (1):
common/cnxk: fix memory leak
Anoob Joseph (6):
crypto/cnxk: use common macro
crypto/cnxk: return microcode completion code
common/cnxk: update opad-ipad gen to handle TLS
common/cnxk: add TLS record contexts
crypto/cnxk: separate IPsec from security common code
crypto/cnxk: add PMD APIs for raw submission to CPT
Gowrishankar Muthukrishnan (1):
crypto/cnxk: fix ECDH pubkey verify in cn9k
Rahul Bhansali (2):
common/cnxk: add Rx inject configs
crypto/cnxk: Rx inject config update
Tejasree Kondoj (3):
crypto/cnxk: fallback to SG if headroom is not available
crypto/cnxk: replace PDCP with PDCP chain opcode
crypto/cnxk: add CPT SG mode debug
Vidya Sagar Velumuri (10):
crypto/cnxk: enable Rx inject in security lookaside
crypto/cnxk: enable Rx inject for 103
crypto/cnxk: rename security caps as IPsec security caps
crypto/cnxk: add TLS record session ops
crypto/cnxk: add TLS record datapath handling
crypto/cnxk: add TLS capability
crypto/cnxk: validate the combinations supported in TLS
crypto/cnxk: use a single function for opad ipad
crypto/cnxk: add support for TLS 1.3
crypto/cnxk: add TLS 1.3 capability
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/cryptodevs/cnxk.rst | 12 +
doc/guides/rel_notes/release_24_03.rst | 6 +
drivers/common/cnxk/cnxk_security.c | 65 +-
drivers/common/cnxk/cnxk_security.h | 15 +-
drivers/common/cnxk/hw/cpt.h | 12 +-
drivers/common/cnxk/roc_cpt.c | 14 +-
drivers/common/cnxk/roc_cpt.h | 7 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_idev.c | 44 +
drivers/common/cnxk/roc_idev.h | 5 +
drivers/common/cnxk/roc_idev_priv.h | 6 +
drivers/common/cnxk/roc_ie_ot.c | 14 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 225 +++++
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix.c | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/common/cnxk/roc_se.c | 379 +++-----
drivers/common/cnxk/roc_se.h | 38 +-
drivers/common/cnxk/version.map | 5 +
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 401 ++++++++-
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 11 +
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 134 +++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 68 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 134 +--
drivers/crypto/cnxk/cn10k_ipsec.h | 38 +-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 19 +-
drivers/crypto/cnxk/cn10k_tls.c | 830 ++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 +++++++
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 68 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 16 +-
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 24 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 375 +++++++-
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 128 ++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 +
drivers/crypto/cnxk/cnxk_se.h | 98 +--
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
drivers/crypto/cnxk/meson.build | 4 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +
drivers/crypto/cnxk/version.map | 3 +
47 files changed, 3016 insertions(+), 706 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
--
2.25.1
* [PATCH v2 01/24] common/cnxk: fix memory leak
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 02/24] crypto/cnxk: use common macro Anoob Joseph
` (24 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Akhil Goyal <gakhil@marvell.com>
dev_init() acquires resources which need to be cleaned up
in case a failure is observed afterwards.
Fixes: c045d2e5cbbc ("common/cnxk: add CPT configuration")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/common/cnxk/roc_cpt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 981e85a204..4e23d8c135 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -756,7 +756,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
rc = dev_init(dev, pci_dev);
if (rc) {
plt_err("Failed to init roc device");
- goto fail;
+ return rc;
}
cpt->pci_dev = pci_dev;
@@ -788,6 +788,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
return 0;
fail:
+ dev_fini(dev, pci_dev);
return rc;
}
--
2.25.1
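The error-path rule this patch restores, release exactly what was acquired, can be sketched as a minimal model. All names below are stand-ins for illustration, not the real roc_cpt structures:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a device whose init acquires resources. */
struct fake_dev { bool initialized; };

static int fake_dev_init(struct fake_dev *d, bool fail)
{
	if (fail)
		return -1;          /* nothing acquired: caller must return directly */
	d->initialized = true;      /* resources acquired */
	return 0;
}

static void fake_dev_fini(struct fake_dev *d)
{
	d->initialized = false;
}

/* Mirrors the fixed roc_cpt_dev_init() control flow: a dev_init()
 * failure returns immediately, while later failures go through the
 * fail label so dev_fini() releases what dev_init() acquired. */
static int fake_roc_cpt_dev_init(struct fake_dev *d, bool init_fails, bool later_fails)
{
	int rc = fake_dev_init(d, init_fails);

	if (rc)
		return rc;          /* no 'goto fail': fini must not run */

	if (later_fails) {
		rc = -2;
		goto fail;
	}
	return 0;
fail:
	fake_dev_fini(d);           /* cleanup now matches what was acquired */
	return rc;
}
```

The sketch only demonstrates the ordering constraint; the real function detaches LFs and frees mailbox state in addition to calling dev_fini().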
* [PATCH v2 02/24] crypto/cnxk: use common macro
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 01/24] common/cnxk: fix memory leak Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
` (23 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Having different macros for the same purpose may cause issues if one is
updated without updating the other. Use the same macro by including the
common header.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index d0ad881f2f..f5374131bf 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -8,12 +8,12 @@
#include <rte_cryptodev.h>
#include <rte_security.h>
+#include "roc_ae.h"
#include "roc_cpt.h"
#define CNXK_CPT_MAX_CAPS 55
#define CNXK_SEC_CRYPTO_MAX_CAPS 16
#define CNXK_SEC_MAX_CAPS 9
-#define CNXK_AE_EC_ID_MAX 9
/**
* Device private data
*/
@@ -23,8 +23,8 @@ struct cnxk_cpt_vf {
struct rte_cryptodev_capabilities
sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
- uint64_t cnxk_fpm_iova[CNXK_AE_EC_ID_MAX];
- struct roc_ae_ec_group *ec_grp[CNXK_AE_EC_ID_MAX];
+ uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
+ struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
};
--
2.25.1
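The hazard the patch removes can be shown with a toy example: when a limit is duplicated, arrays sized with the stale private copy silently reject ids the common code allows. The values below are made up for illustration, not the real EC group counts:

```c
#include <assert.h>

/* Illustrative only: one shared limit defined in two places. */
enum { ROC_AE_EC_ID_PMAX_MODEL = 12 };  /* "common header", later bumped */
enum { CNXK_AE_EC_ID_MAX_MODEL = 9 };   /* stale private copy */

/* A table sized with the stale macro cannot hold every valid EC id. */
static int fits_private_table(int ec_id)
{
	return ec_id >= 0 && ec_id < CNXK_AE_EC_ID_MAX_MODEL;
}
```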
* [PATCH v2 03/24] crypto/cnxk: fallback to SG if headroom is not available
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 01/24] common/cnxk: fix memory leak Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 02/24] crypto/cnxk: use common macro Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
` (22 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Fall back to SG mode for cn9k lookaside IPsec
if headroom is not available.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 85aacb803f..3d0db72775 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -82,19 +82,13 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
extend_tail = rlen - dlen;
pkt_len += extend_tail;
- if (likely(m_src->next == NULL)) {
+ if (likely((m_src->next == NULL) && (hdr_len <= data_off))) {
if (unlikely(extend_tail > rte_pktmbuf_tailroom(m_src))) {
plt_dp_err("Not enough tail room (required: %d, available: %d)",
extend_tail, rte_pktmbuf_tailroom(m_src));
return -ENOMEM;
}
- if (unlikely(hdr_len > data_off)) {
- plt_dp_err("Not enough head room (required: %d, available: %d)", hdr_len,
- rte_pktmbuf_headroom(m_src));
- return -ENOMEM;
- }
-
m_src->data_len = pkt_len;
hdr = PLT_PTR_ADD(m_src->buf_addr, data_off - hdr_len);
--
2.25.1
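The fast-path condition the patch tightens can be modelled in isolation. The struct fields below are simplified stand-ins for rte_mbuf, not the real layout; the point is that failing the combined check now selects SG mode instead of returning -ENOMEM:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for the mbuf fields involved in the decision. */
struct fake_mbuf {
	void *next;     /* non-NULL means a chained (multi-seg) mbuf */
	int data_off;   /* available headroom in bytes */
};

/* After the patch: the direct (non-SG) path is taken only when the
 * mbuf is contiguous AND the protocol header fits in the headroom.
 * Everything else falls back to scatter-gather. */
static bool use_direct_mode(const struct fake_mbuf *m, int hdr_len)
{
	return m->next == NULL && hdr_len <= m->data_off;
}
```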
* [PATCH v2 04/24] crypto/cnxk: return microcode completion code
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (2 preceding siblings ...)
2024-01-02 4:53 ` [PATCH v2 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
` (21 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Return the microcode completion code in case of errors. This allows
applications to check failure reasons at a finer granularity.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 997110e3d3..bef7b75810 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -823,6 +823,7 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
break;
default:
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
return;
}
@@ -884,6 +885,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
plt_dp_info("Request failed with microcode error");
plt_dp_info("MC completion code 0x%x",
res->uc_compcode);
+ cop->aux_flags = uc_compcode;
goto temp_sess_free;
}
--
2.25.1
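With aux_flags populated, an application can branch on the microcode completion code instead of seeing only a generic error status. A hedged sketch, using stand-in structs and made-up code values rather than the real rte_crypto_op layout or ROC error codes:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the rte_crypto_op fields involved (not the real structs). */
enum { OP_STATUS_SUCCESS = 0, OP_STATUS_ERROR = 1 };

struct op_result {
	int status;         /* models cop->status */
	uint8_t aux_flags;  /* models cop->aux_flags: microcode completion code */
};

/* Map a failed op to a more specific reason; the 0x40 value is purely
 * illustrative, not a documented CPT microcode code. */
static const char *describe(const struct op_result *r)
{
	if (r->status == OP_STATUS_SUCCESS)
		return "ok";
	switch (r->aux_flags) {
	case 0x40:
		return "hw length error";
	default:
		return "unclassified microcode error";
	}
}
```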
* [PATCH v2 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (3 preceding siblings ...)
2024-01-02 4:53 ` [PATCH v2 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
` (20 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal
Cc: Gowrishankar Muthukrishnan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Fix ECDH pubkey verify in cn9k.
Fixes: baae0994fa96 ("crypto/cnxk: support ECDH")
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 34d40b07d4..442cd8e5a9 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -578,7 +578,17 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
if (unlikely(res->uc_compcode)) {
if (res->uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
+ else if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION &&
+ cop->asym->ecdh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY) {
+ if (res->uc_compcode == ROC_AE_ERR_ECC_POINT_NOT_ON_CURVE) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ } else if (res->uc_compcode == ROC_AE_ERR_ECC_PAI) {
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ return;
+ }
+ } else
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
plt_dp_info("Request failed with microcode error");
--
2.25.1
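The classification this patch adds for ECDH public key verify can be sketched on its own: "point not on curve" is reported as a verification failure, while "point at infinity" still completes the verify operation successfully. The enum values below are stand-ins; the real ROC_AE_ERR_* codes live in the cnxk common code:

```c
#include <assert.h>

/* Stand-in microcode completion codes (illustrative values only). */
enum {
	UC_OK = 0,
	UC_ECC_POINT_NOT_ON_CURVE = 1,
	UC_ECC_PAI = 2,  /* point at infinity */
};

enum { ST_SUCCESS = 0, ST_ERROR = 1 };

/* Mirrors the dequeue-time mapping the patch introduces for ops of
 * type ECDH public key verify. */
static int ecdh_verify_status(int uc_compcode)
{
	if (uc_compcode == UC_ECC_POINT_NOT_ON_CURVE)
		return ST_ERROR;    /* verification failed */
	if (uc_compcode == UC_ECC_PAI)
		return ST_SUCCESS;  /* op completed; key judged */
	return ST_ERROR;            /* any other microcode error */
}
```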
* [PATCH v2 06/24] crypto/cnxk: enable digest gen for zero len input
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (4 preceding siblings ...)
2024-01-02 4:53 ` [PATCH v2 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
@ 2024-01-02 4:53 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
` (19 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:53 UTC (permalink / raw)
To: Akhil Goyal
Cc: Aakash Sasidharan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Aakash Sasidharan <asasidharan@marvell.com>
With zero-length input, digest generation produces an incorrect
value. Fix this by completely avoiding the gather component
when the input packet has zero data length.
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
---
drivers/crypto/cnxk/cnxk_se.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..1aec7dea9f 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2479,7 +2479,7 @@ prepare_iov_from_pkt(struct rte_mbuf *pkt, struct roc_se_iov_ptr *iovec, uint32_
void *seg_data = NULL;
int32_t seg_size = 0;
- if (!pkt) {
+ if (!pkt || pkt->data_len == 0) {
iovec->buf_cnt = 0;
return 0;
}
--
2.25.1
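The guard added in prepare_iov_from_pkt() can be modelled as follows. The structs are simplified stand-ins, not the real mbuf or roc_se_iov_ptr types; the real function additionally walks the mbuf segment chain:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the packet and iovec types involved. */
struct fake_pkt { unsigned data_len; };
struct fake_iovec { int buf_cnt; };

/* A zero-length (or absent) packet now contributes no gather entries,
 * instead of producing a bogus zero-length component. */
static int prepare_iov(const struct fake_pkt *pkt, struct fake_iovec *iov)
{
	if (pkt == NULL || pkt->data_len == 0) {
		iov->buf_cnt = 0;  /* no gather component for empty input */
		return 0;
	}
	iov->buf_cnt = 1;          /* simplified: real code adds one entry per segment */
	return 0;
}
```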
* [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (5 preceding siblings ...)
2024-01-02 4:53 ` [PATCH v2 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-16 8:07 ` Akhil Goyal
2024-01-02 4:54 ` [PATCH v2 08/24] common/cnxk: add Rx inject configs Anoob Joseph
` (18 subsequent siblings)
25 siblings, 1 reply; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add Rx inject fastpath API.
Add devargs to specify an LF to be used for Rx inject.
When the RX inject feature flag is enabled:
1. Reserve a CPT LF to use for RX Inject mode.
2. Enable RXC and disable full packet mode for that LF.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/cryptodevs/cnxk.rst | 12 ++
doc/guides/rel_notes/release_24_03.rst | 3 +
drivers/common/cnxk/hw/cpt.h | 9 ++
drivers/common/cnxk/roc_cpt.c | 11 +-
drivers/common/cnxk/roc_cpt.h | 3 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_ie_ot.c | 14 +--
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 124 +++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 8 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 4 +
drivers/crypto/cnxk/cn10k_ipsec.h | 2 +
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 3 +
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 27 +++-
drivers/crypto/cnxk/version.map | 3 +
19 files changed, 250 insertions(+), 15 deletions(-)
diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index fbe67475be..8dc745dccd 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -187,6 +187,18 @@ Runtime Config Options
With the above configuration, the number of maximum queue pairs supported
by the device is limited to 4.
+- ``LF ID for RX injection in case of fallback mechanism`` (default ``60``)
+
+ LF ID for RX injection in the security fallback mechanism.
+ Can be configured during runtime by using ``rx_inj_lf`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:20:00.1,rx_inj_lf=20
+
+ With the above configuration, LF 20 will be used by the device for RX injection
+ in the security fallback mechanism scenario.
+
Debugging Options
-----------------
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index e9c9717706..fa30b46ead 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -55,6 +55,9 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Marvell cnxk crypto driver.**
+
+ * Added support for Rx inject in crypto_cn10k.
Removed Items
-------------
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index cf9046bbfb..edab8a5d83 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -237,6 +237,15 @@ struct cpt_inst_s {
uint64_t doneint : 1;
uint64_t nixtx_addr : 60;
} s;
+ struct {
+ uint64_t nixtxl : 3;
+ uint64_t doneint : 1;
+ uint64_t chan : 12;
+ uint64_t l2_len : 8;
+ uint64_t et_offset : 8;
+ uint64_t match_id : 16;
+ uint64_t sso_pf_func : 16;
+ } hw_s;
uint64_t u64;
} w0;
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 4e23d8c135..38e46d65c1 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -463,7 +463,7 @@ cpt_available_lfs_get(struct dev *dev, uint16_t *nb_lf)
int
cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen)
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inj_lf)
{
struct cpt_lf_alloc_req_msg *req;
struct mbox *mbox = mbox_get(dev->mbox);
@@ -489,6 +489,10 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev
req->blkaddr = blkaddr;
req->ctx_ilen_valid = ctx_ilen_valid;
req->ctx_ilen = ctx_ilen;
+ if (rxc_ena) {
+ req->rxc_ena = 1;
+ req->rxc_ena_lf_id = rx_inj_lf;
+ }
rc = mbox_process(mbox);
exit:
@@ -586,7 +590,7 @@ cpt_iq_init(struct roc_cpt_lf *lf)
}
int
-roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
+roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena, uint16_t rx_inj_lf)
{
struct cpt *cpt = roc_cpt_to_cpt_priv(roc_cpt);
uint8_t blkaddr[ROC_CPT_MAX_BLKS];
@@ -630,7 +634,8 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
ctx_ilen = (PLT_ALIGN(ROC_OT_IPSEC_SA_SZ_MAX, ROC_ALIGN) / 128) - 1;
}
- rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen);
+ rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen,
+ rxc_ena, rx_inj_lf);
if (rc)
goto lfs_detach;
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 787bccb27d..001e71c55e 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -171,7 +171,8 @@ int __roc_api roc_cpt_dev_init(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_dev_fini(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_eng_grp_add(struct roc_cpt *roc_cpt,
enum cpt_eng_type eng_type);
-int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf);
+int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena,
+ uint16_t rx_inj_lf);
void __roc_api roc_cpt_dev_clear(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_lf_init(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf);
void __roc_api roc_cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 4ed87c857b..fa4986d671 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -22,7 +22,7 @@ int cpt_lfs_attach(struct dev *dev, uint8_t blkaddr, bool modify,
uint16_t nb_lf);
int cpt_lfs_detach(struct dev *dev);
int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blk, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen);
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inj_lf);
int cpt_lfs_free(struct dev *dev);
int cpt_lf_init(struct roc_cpt_lf *lf);
void cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad38f1..465b2bc1fb 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -12,13 +12,13 @@ roc_ot_ipsec_inb_sa_init(struct roc_ot_ipsec_inb_sa *sa, bool is_inline)
memset(sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
- if (is_inline) {
- sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
- sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
- sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
- sa->w0.s.et_ovrwr = 1;
- sa->w2.s.l3hdr_on_err = 1;
- }
+ sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
+ sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
+ sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
+ sa->w0.s.et_ovrwr = 1;
+ sa->w2.s.l3hdr_on_err = 1;
+
+ PLT_SET_USED(is_inline);
offset = offsetof(struct roc_ot_ipsec_inb_sa, ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 05434aec5a..0ad8b738c6 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -2022,6 +2022,8 @@ struct cpt_lf_alloc_req_msg {
uint8_t __io blkaddr;
uint8_t __io ctx_ilen_valid : 1;
uint8_t __io ctx_ilen : 7;
+ uint8_t __io rxc_ena : 1;
+ uint8_t __io rxc_ena_lf_id : 7;
};
#define CPT_INLINE_INBOUND 0
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 750fd08355..07a90133ca 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -986,7 +986,7 @@ roc_nix_inl_outb_init(struct roc_nix *roc_nix)
1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr,
- !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen);
+ !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
goto lf_detach;
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index dc1306c093..f6991de051 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -194,7 +194,7 @@ nix_inl_cpt_setup(struct nix_inl_dev *inl_dev, bool inl_dev_sso)
}
rc = cpt_lfs_alloc(dev, eng_grpmask, RVU_BLOCK_ADDR_CPT0, inl_dev_sso, ctx_ilen_valid,
- ctx_ilen);
+ ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
return rc;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index bef7b75810..e656f47693 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -7,6 +7,8 @@
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
+#include <ethdev_driver.h>
+
#include "roc_cpt.h"
#if defined(__aarch64__)
#include "roc_io.h"
@@ -1057,6 +1059,104 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
return i;
}
+uint16_t __rte_hot
+cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ uint16_t l2_len, pf_func, lmt_id, count = 0;
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cn10k_sec_session *sec_sess;
+ struct rte_cryptodev *cdev = dev;
+ union cpt_res_s *hw_res = NULL;
+ struct cpt_inst_s *inst;
+ struct cnxk_cpt_vf *vf;
+ struct rte_mbuf *m;
+ uint64_t dptr;
+ int i;
+
+ const union cpt_res_s res = {
+ .cn10k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ vf = cdev->data->dev_private;
+
+ lmt_base = vf->rx_inj_lmtline.lmt_base;
+ io_addr = vf->rx_inj_lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ pf_func = vf->rx_inj_pf_func;
+
+again:
+ inst = (struct cpt_inst_s *)lmt_base;
+ for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+
+ m = pkts[i];
+ sec_sess = (struct cn10k_sec_session *)sess[i];
+
+ if (unlikely(rte_pktmbuf_headroom(m) < 32)) {
+ plt_dp_err("No space for CPT res_s");
+ break;
+ }
+
+ if (unlikely(!rte_pktmbuf_is_contiguous(m))) {
+ plt_dp_err("Multi seg is not supported");
+ break;
+ }
+
+ l2_len = m->l2_len;
+
+ *rte_security_dynfield(m) = (uint64_t)sec_sess->userdata;
+
+ hw_res = rte_pktmbuf_mtod(m, void *);
+ hw_res = RTE_PTR_SUB(hw_res, 32);
+ hw_res = RTE_PTR_ALIGN_CEIL(hw_res, 16);
+
+ /* Prepare CPT instruction */
+ inst->w0.u64 = 0;
+ inst->w2.u64 = 0;
+ inst->w2.s.rvu_pf_func = pf_func;
+ inst->w3.u64 = (((uint64_t)m + sizeof(struct rte_mbuf)) >> 3) << 3 | 1;
+
+ inst->w4.u64 = sec_sess->inst.w4 | (rte_pktmbuf_pkt_len(m));
+ dptr = (uint64_t)rte_pktmbuf_iova(m);
+ inst->dptr = dptr;
+ inst->rptr = dptr;
+
+ inst->w0.hw_s.l2_len = l2_len;
+ inst->w0.hw_s.et_offset = l2_len - 2;
+
+ inst->res_addr = (uint64_t)hw_res;
+ rte_atomic_store_explicit((unsigned long __rte_atomic *)&hw_res->u64[0], res.u64[0],
+ rte_memory_order_relaxed);
+
+ inst->w7.u64 = sec_sess->inst.w7;
+
+ inst += 2;
+ }
+
+ if (i > PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ nb_pkts -= i;
+ pkts += i;
+ count += i;
+ goto again;
+ }
+
+ return count + i;
+}
+
void
cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf)
{
@@ -1535,6 +1635,30 @@ cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
return 0;
}
+int
+cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
+{
+ struct rte_cryptodev *crypto_dev = device;
+ struct rte_eth_dev *eth_dev;
+ int ret;
+
+ if (!rte_eth_dev_is_valid_port(port_id))
+ return -EINVAL;
+
+ if (!(crypto_dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT))
+ return -ENOTSUP;
+
+ eth_dev = &rte_eth_devices[port_id];
+
+ ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+ if (ret)
+ return -ENOTSUP;
+
+ RTE_SET_USED(enable);
+
+ return 0;
+}
+
struct rte_cryptodev_ops cn10k_cpt_ops = {
/* Device control ops */
.dev_configure = cnxk_cpt_dev_config,
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index befbfcdfad..34becede3c 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -16,6 +16,14 @@ extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
+__rte_internal
+uint16_t __rte_hot cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess,
+ uint16_t nb_pkts);
+
+__rte_internal
+int cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable);
+
__rte_internal
uint16_t __rte_hot cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[],
uint16_t nb_events);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ffd3f50eed..2d098fdd24 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -10,6 +10,7 @@
#include <rte_security_driver.h>
#include <rte_udp.h>
+#include "cn10k_cryptodev_ops.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -297,6 +298,7 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
return -ENOTSUP;
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
return cn10k_ipsec_session_create(device, &conf->ipsec,
conf->crypto_xform, sess);
}
@@ -458,4 +460,6 @@ cn10k_sec_ops_override(void)
cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 8a93d74062..03ac994001 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -38,6 +38,8 @@ struct cn10k_sec_session {
bool is_outbound;
/** Queue pair */
struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
/**
* End of SW mutable area
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 4819a14184..b1684e56a7 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,6 +24,9 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
+ if (roc_model_is_cn10ka_b0())
+ ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
+
return ff;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index f5374131bf..fedae53736 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -18,6 +18,8 @@
* Device private data
*/
struct cnxk_cpt_vf {
+ struct roc_cpt_lmtline rx_inj_lmtline;
+ uint16_t rx_inj_pf_func;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
@@ -26,6 +28,7 @@ struct cnxk_cpt_vf {
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
+ uint16_t rx_inj_lf;
};
uint64_t cnxk_cpt_default_ff_get(void);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
index c3e9bdb2d1..f5a76d83ed 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
@@ -9,6 +9,23 @@
#define CNXK_MAX_QPS_LIMIT "max_qps_limit"
#define CNXK_MAX_QPS_LIMIT_MIN 1
#define CNXK_MAX_QPS_LIMIT_MAX (ROC_CPT_MAX_LFS - 1)
+#define CNXK_RX_INJ_LF "rx_inj_lf"
+
+static int
+parse_rx_inj_lf(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val < CNXK_MAX_QPS_LIMIT_MIN || val > CNXK_MAX_QPS_LIMIT_MAX)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
static int
parse_max_qps_limit(const char *key, const char *value, void *extra_args)
@@ -31,8 +48,12 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
{
uint16_t max_qps_limit = CNXK_MAX_QPS_LIMIT_MAX;
struct rte_kvargs *kvlist;
+ uint16_t rx_inj_lf;
int rc;
+ /* Default to the max value so that the feature is disabled unless requested. */
+ rx_inj_lf = CNXK_MAX_QPS_LIMIT_MAX;
+
if (devargs == NULL)
goto null_devargs;
@@ -48,10 +69,20 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
rte_kvargs_free(kvlist);
goto exit;
}
+
+ rc = rte_kvargs_process(kvlist, CNXK_RX_INJ_LF, parse_rx_inj_lf, &rx_inj_lf);
+ if (rc < 0) {
+ plt_err("rx_inj_lf should be in the range <%d-%d>", CNXK_MAX_QPS_LIMIT_MIN,
+ max_qps_limit - 1);
+ rte_kvargs_free(kvlist);
+ goto exit;
+ }
+
rte_kvargs_free(kvlist);
null_devargs:
vf->max_qps_limit = max_qps_limit;
+ vf->rx_inj_lf = rx_inj_lf;
return 0;
exit:
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 82938c77c8..c0733ddbfb 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -5,6 +5,7 @@
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_errno.h>
+#include <rte_security_driver.h>
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
@@ -95,6 +96,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
uint16_t nb_lf_avail, nb_lf;
+ bool rxc_ena = false;
int ret;
/* If this is a reconfigure attempt, clear the device and configure again */
@@ -111,7 +113,13 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (nb_lf > nb_lf_avail)
return -ENOTSUP;
- ret = roc_cpt_dev_configure(roc_cpt, nb_lf);
+ if (dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) {
+ if (rte_security_dynfield_register() < 0)
+ return -ENOTSUP;
+ rxc_ena = true;
+ }
+
+ ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inj_lf);
if (ret) {
plt_err("Could not configure device");
return ret;
@@ -208,6 +216,10 @@ cnxk_cpt_dev_info_get(struct rte_cryptodev *dev,
info->sym.max_nb_sessions = 0;
info->min_mbuf_headroom_req = CNXK_CPT_MIN_HEADROOM_REQ;
info->min_mbuf_tailroom_req = CNXK_CPT_MIN_TAILROOM_REQ;
+
+ /* Disable Rx inject if the LF ID is beyond the available queue pairs. */
+ if (vf->rx_inj_lf > info->max_nb_queue_pairs)
+ info->feature_flags &= ~RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
}
static void
@@ -452,6 +464,19 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
qp->sess_mp = conf->mp_session;
dev->data->queue_pairs[qp_id] = qp;
+ if (qp_id == vf->rx_inj_lf) {
+ ret = roc_cpt_lmtline_init(roc_cpt, &vf->rx_inj_lmtline, vf->rx_inj_lf);
+ if (ret) {
+ plt_err("Could not init lmtline for Rx inject");
+ goto exit;
+ }
+
+ vf->rx_inj_pf_func = qp->lf.pf_func;
+
+ /* Block the queue for other submissions */
+ qp->pend_q.pq_mask = 0;
+ }
+
return 0;
exit:
diff --git a/drivers/crypto/cnxk/version.map b/drivers/crypto/cnxk/version.map
index d13209feec..5789a6bfc9 100644
--- a/drivers/crypto/cnxk/version.map
+++ b/drivers/crypto/cnxk/version.map
@@ -8,5 +8,8 @@ INTERNAL {
cn10k_cpt_crypto_adapter_dequeue;
cn10k_cpt_crypto_adapter_vector_dequeue;
+ cn10k_cryptodev_sec_inb_rx_inject;
+ cn10k_cryptodev_sec_rx_inject_configure;
+
local: *;
};
--
2.25.1
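The Rx inject enqueue loop above submits prepared CPT instructions in one or two STEORL operations, depending on whether the burst exceeds PKTS_PER_STEORL. The split arithmetic can be sketched as a standalone helper; the constant values here are illustrative assumptions, not taken from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical values standing in for the driver's batching limits;
 * the macro names match the patch, the numbers are assumptions. */
#define PKTS_PER_LOOP   32
#define PKTS_PER_STEORL 16

/* Split a burst of 'i' prepared instructions (i <= PKTS_PER_LOOP) into
 * at most two STEORL submissions, mirroring the branch in the loop above:
 * the first submission covers PKTS_PER_STEORL instructions, the second
 * covers the remainder. Returns the number of submissions and fills the
 * per-submission counts. */
static inline int
steorl_split(int i, int counts[2])
{
	if (i > PKTS_PER_STEORL) {
		counts[0] = PKTS_PER_STEORL;     /* first LMT ID group */
		counts[1] = i - PKTS_PER_STEORL; /* group at lmt_id + PKTS_PER_STEORL */
		return 2;
	}
	counts[0] = i;
	counts[1] = 0;
	return 1;
}
```

In the actual submission, each count is encoded into lmt_arg as `(count - 1) << 12`, which is why the driver writes `PKTS_PER_STEORL - 1` and `i - PKTS_PER_STEORL - 1` respectively.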
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 08/24] common/cnxk: add Rx inject configs
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (6 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
` (17 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
Add Rx inject config for feature enable/disable, and store
Rx chan value per port.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/common/cnxk/roc_idev.c | 44 +++++++++++++++++++++++++++++
drivers/common/cnxk/roc_idev.h | 5 ++++
drivers/common/cnxk/roc_idev_priv.h | 6 ++++
drivers/common/cnxk/roc_nix.c | 2 ++
drivers/common/cnxk/version.map | 4 +++
5 files changed, 61 insertions(+)
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index e6c6b34d78..48df3518b0 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -310,3 +310,47 @@ roc_idev_nix_inl_meta_aura_get(void)
return idev->inl_cfg.meta_aura;
return 0;
}
+
+uint8_t
+roc_idev_nix_rx_inject_get(uint16_t port)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ return idev->inl_rx_inj_cfg.rx_inject_en[port];
+
+ return 0;
+}
+
+void
+roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.rx_inject_en[port], enable,
+ __ATOMIC_RELEASE);
+}
+
+uint16_t *
+roc_idev_nix_rx_chan_base_get(void)
+{
+ struct idev_cfg *idev = idev_get_cfg();
+
+ if (idev != NULL)
+ return (uint16_t *)&idev->inl_rx_inj_cfg.chan;
+
+ return NULL;
+}
+
+void
+roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.chan[port], chan, __ATOMIC_RELEASE);
+}
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
index aea7f5279d..00664eaed6 100644
--- a/drivers/common/cnxk/roc_idev.h
+++ b/drivers/common/cnxk/roc_idev.h
@@ -22,4 +22,9 @@ struct roc_nix_list *__roc_api roc_idev_nix_list_get(void);
struct roc_mcs *__roc_api roc_idev_mcs_get(uint8_t mcs_idx);
void __roc_api roc_idev_mcs_set(struct roc_mcs *mcs);
void __roc_api roc_idev_mcs_free(struct roc_mcs *mcs);
+
+uint8_t __roc_api roc_idev_nix_rx_inject_get(uint16_t port);
+void __roc_api roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable);
+uint16_t *__roc_api roc_idev_nix_rx_chan_base_get(void);
+void __roc_api roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan);
#endif /* _ROC_IDEV_H_ */
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 80f8465e1c..8dc1cb25bf 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -19,6 +19,11 @@ struct idev_nix_inl_cfg {
uint32_t refs;
};
+struct idev_nix_inl_rx_inj_cfg {
+ uint16_t chan[PLT_MAX_ETHPORTS];
+ uint8_t rx_inject_en[PLT_MAX_ETHPORTS];
+};
+
struct idev_cfg {
uint16_t sso_pf_func;
uint16_t npa_pf_func;
@@ -35,6 +40,7 @@ struct idev_cfg {
struct nix_inl_dev *nix_inl_dev;
struct idev_nix_inl_cfg inl_cfg;
struct roc_nix_list roc_nix_list;
+ struct idev_nix_inl_rx_inj_cfg inl_rx_inj_cfg;
plt_spinlock_t nix_inl_dev_lock;
plt_spinlock_t npa_dev_lock;
};
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index f64933a1d9..97c0ae3e25 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -223,6 +223,8 @@ roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq,
nix->nb_rx_queues = nb_rxq;
nix->nb_tx_queues = nb_txq;
+ roc_idev_nix_rx_chan_set(roc_nix->port_id, rsp->rx_chan_base);
+
nix->rqs = plt_zmalloc(sizeof(struct roc_nix_rq *) * nb_rxq, 0);
if (!nix->rqs) {
rc = -ENOMEM;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index aa884a8fe2..f84382c401 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -105,6 +105,10 @@ INTERNAL {
roc_idev_num_lmtlines_get;
roc_idev_nix_inl_meta_aura_get;
roc_idev_nix_list_get;
+ roc_idev_nix_rx_chan_base_get;
+ roc_idev_nix_rx_chan_set;
+ roc_idev_nix_rx_inject_get;
+ roc_idev_nix_rx_inject_set;
roc_ml_reg_read64;
roc_ml_reg_write64;
roc_ml_reg_read32;
--
2.25.1
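The idev accessors added above keep a per-port enable flag that the control path writes with release semantics. A minimal standalone analogue of this pattern (using C11 atomics instead of the `__atomic` builtins, and an assumed `MAX_ETHPORTS` in place of `PLT_MAX_ETHPORTS`) looks like this:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define MAX_ETHPORTS 32 /* stand-in for PLT_MAX_ETHPORTS; value is an assumption */

/* Per-port Rx inject enable flags, analogous to
 * idev_nix_inl_rx_inj_cfg.rx_inject_en[]. The setter uses a release store
 * so datapath readers observe a consistent value; out-of-range ports are
 * silently ignored, matching the bounds checks in roc_idev_nix_rx_inject_set(). */
static _Atomic uint8_t rx_inject_en[MAX_ETHPORTS];

static void
rx_inject_set(uint16_t port, uint8_t enable)
{
	if (port < MAX_ETHPORTS)
		atomic_store_explicit(&rx_inject_en[port], enable,
				      memory_order_release);
}

static uint8_t
rx_inject_get(uint16_t port)
{
	if (port < MAX_ETHPORTS)
		return atomic_load_explicit(&rx_inject_en[port],
					    memory_order_acquire);
	return 0; /* unknown port: feature treated as disabled */
}
```

This is a sketch of the access pattern only; the real accessors additionally guard against a missing idev config structure.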
* [PATCH v2 09/24] crypto/cnxk: Rx inject config update
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (7 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 08/24] common/cnxk: add Rx inject configs Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
` (16 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
- Update chan in CPT inst from port's Rx chan
- Set Rx inject config in Idev struct
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 4 +++-
drivers/crypto/cnxk/cn10k_ipsec.c | 3 +++
drivers/crypto/cnxk/cnxk_cryptodev.h | 1 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index e656f47693..03ecf583af 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -15,6 +15,7 @@
#else
#include "roc_io_generic.h"
#endif
+#include "roc_idev.h"
#include "roc_sso.h"
#include "roc_sso_dp.h"
@@ -1122,6 +1123,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst->dptr = dptr;
inst->rptr = dptr;
+ inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port);
inst->w0.hw_s.l2_len = l2_len;
inst->w0.hw_s.et_offset = l2_len - 2;
@@ -1654,7 +1656,7 @@ cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool ena
if (ret)
return -ENOTSUP;
- RTE_SET_USED(enable);
+ roc_idev_nix_rx_inject_set(port_id, enable);
return 0;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 2d098fdd24..d08a1067ca 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -192,6 +192,9 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->is_outbound = false;
sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ /* Save index/SPI in the cookie; specifically required for Rx inject */
+ sa_dptr->w1.s.cookie = 0xFFFFFFFF;
+
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_OT_INPLACE_BIT;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index fedae53736..2ae81d2f90 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -20,6 +20,7 @@
struct cnxk_cpt_vf {
struct roc_cpt_lmtline rx_inj_lmtline;
uint16_t rx_inj_pf_func;
+ uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index c0733ddbfb..fd44155955 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -10,6 +10,7 @@
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
#include "roc_errata.h"
+#include "roc_idev.h"
#include "roc_ie_on.h"
#include "cnxk_ae.h"
@@ -117,6 +118,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (rte_security_dynfield_register() < 0)
return -ENOTSUP;
rxc_ena = true;
+ vf->rx_chan_base = roc_idev_nix_rx_chan_base_get();
}
ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inj_lf);
--
2.25.1
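The channel update above records each port's `rx_chan_base` at NIX LF alloc time and indexes that table by `m->port` in the Rx inject datapath. The lookup reduces to a plain per-port array read; a hedged standalone sketch (with an assumed `MAX_ETHPORTS` in place of `PLT_MAX_ETHPORTS`):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ETHPORTS 32 /* stand-in for PLT_MAX_ETHPORTS; value is an assumption */

/* Analogue of the idev Rx channel table filled by roc_idev_nix_rx_chan_set()
 * from roc_nix_lf_alloc(). */
static uint16_t rx_chan_base[MAX_ETHPORTS];

static void
rx_chan_set(uint16_t port, uint16_t chan)
{
	if (port < MAX_ETHPORTS)
		rx_chan_base[port] = chan;
}

/* What `inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port)` boils down to:
 * a single indexed load from the table whose base the VF cached at
 * configure time. No bounds check is done here, mirroring the datapath. */
static uint16_t
inst_chan_for_port(const uint16_t *base, uint16_t port)
{
	return base[port];
}
```

Caching the table base pointer in `struct cnxk_cpt_vf` at configure time keeps the per-packet cost to one load, instead of re-resolving the idev config on every inject.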
* [PATCH v2 10/24] crypto/cnxk: enable Rx inject for 103
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (8 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
` (15 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Enable Rx inject feature for 103XX
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index b1684e56a7..1eede2e59c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,7 +24,7 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
- if (roc_model_is_cn10ka_b0())
+ if (roc_model_is_cn10ka_b0() || roc_model_is_cn10kb())
ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
return ff;
--
2.25.1
* [PATCH v2 11/24] crypto/cnxk: rename security caps as IPsec security caps
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (9 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
` (14 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Security capabilities would vary between IPsec and other new offloads.
Rename the existing security caps to indicate that they are IPsec-specific.
Rename and change the scope of common functions, in order to avoid code
duplication. These functions can be used by both IPsec and TLS.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 13 ++--
drivers/common/cnxk/cnxk_security.h | 17 +++--
drivers/common/cnxk/version.map | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 18 ++++-
drivers/crypto/cnxk/cn10k_ipsec.c | 46 +++++++-----
drivers/crypto/cnxk/cn10k_ipsec.h | 9 ++-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 18 ++---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 10 +--
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 73 ++++++++++---------
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
11 files changed, 123 insertions(+), 94 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index a8c3ba90cd..81991c4697 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,9 +8,8 @@
#include "roc_api.h"
-static void
-ipsec_hmac_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform,
- uint8_t *hmac_opad_ipad)
+void
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -192,7 +191,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +740,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
switch (length) {
@@ -1374,7 +1373,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
@@ -1441,7 +1440,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 2277ce9144..fabf694df4 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -61,14 +61,15 @@ bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
/* [CN9K] */
-int __roc_api
-cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_inb_sa *in_sa);
+int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_inb_sa *in_sa);
-int __roc_api
-cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_outb_sa *out_sa);
+int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_outb_sa *out_sa);
+
+__rte_internal
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index f84382c401..15fd5710d2 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,6 +1,7 @@
INTERNAL {
global:
+ cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 03ecf583af..084c8d3a24 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -80,8 +80,9 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
}
static __rte_always_inline int __rte_hot
-cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
- struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
{
struct rte_crypto_sym_op *sym_op = op->sym;
int ret;
@@ -91,7 +92,7 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return -ENOTSUP;
}
- if (sess->is_outbound)
+ if (sess->ipsec.is_outbound)
ret = process_outb_sa(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
is_sg_ver2);
else
@@ -100,6 +101,17 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
+ struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+
+ return 0;
+}
+
static inline int
cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[],
struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index d08a1067ca..a9c673ea83 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -20,7 +20,7 @@
#include "roc_api.h"
static uint64_t
-ipsec_cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
{
union cpt_inst_w7 w7;
@@ -64,7 +64,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, out_sa);
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, out_sa);
#ifdef LA_IPSEC_DEBUG
/* Use IV from application in debug mode */
@@ -89,7 +89,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
}
#endif
- sec_sess->is_outbound = true;
+ sec_sess->ipsec.is_outbound = true;
/* Get Rlen calculation data */
ret = cnxk_ipsec_outb_rlens_get(&rlens, ipsec_xfrm, crypto_xfrm);
@@ -150,6 +150,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, out_sa, false);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -189,8 +190,8 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->is_outbound = false;
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ sec_sess->ipsec.is_outbound = false;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, in_sa);
/* Save index/SPI in cookie, specific required for Rx Inject */
sa_dptr->w1.s.cookie = 0xFFFFFFFF;
@@ -209,7 +210,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
*/
if (ipsec_xfrm->options.ip_csum_enable) {
param1.s.ip_csum_disable = ROC_IE_OT_SA_INNER_PKT_IP_CSUM_ENABLE;
- sec_sess->ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ sec_sess->ipsec.ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
}
/* Disable L4 checksum verification by default */
@@ -250,6 +251,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, in_sa, true);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -298,16 +300,15 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
return -EINVAL;
- if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
- return -ENOTSUP;
-
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform, sess);
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
+ }
+ return -ENOTSUP;
}
static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
{
struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
@@ -318,9 +319,6 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
void *sa_dptr = NULL;
int ret;
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
sess = (struct cn10k_sec_session *)sec_sess;
qp = crypto_dev->data->queue_pairs[0];
@@ -336,7 +334,7 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
ret = -1;
- if (sess->is_outbound) {
+ if (sess->ipsec.is_outbound) {
sa_dptr = plt_zmalloc(sizeof(struct roc_ot_ipsec_outb_sa), 8);
if (sa_dptr != NULL) {
roc_ot_ipsec_outb_sa_init(sa_dptr);
@@ -376,6 +374,18 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
return 0;
}
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
+
+ return -EINVAL;
+}
+
static unsigned int
cn10k_sec_session_get_size(void *device __rte_unused)
{
@@ -405,7 +415,7 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
sa = &priv->sa;
- if (priv->is_outbound) {
+ if (priv->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 03ac994001..2b7a3e6acf 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -29,13 +29,18 @@ struct cn10k_sec_session {
/** PMD private space */
+ enum rte_security_session_protocol proto;
/** Pre-populated CPT inst words */
struct cnxk_cpt_inst_tmpl inst;
uint16_t max_extended_len;
uint16_t iv_offset;
uint8_t iv_length;
- uint8_t ip_csum;
- bool is_outbound;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
/** Queue pair */
struct cnxk_cpt_qp *qp;
/** Userdata to be set for Rx inject */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 8e208eb2ca..af2c85022e 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -121,7 +121,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -132,7 +132,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -170,7 +170,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -181,7 +181,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
@@ -211,7 +211,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
dptr = rte_pktmbuf_mtod(m_src, uint64_t);
inst->dptr = dptr;
- m_src->ol_flags |= (uint64_t)sess->ip_csum;
+ m_src->ol_flags |= (uint64_t)sess->ipsec.ip_csum;
} else if (is_sg_ver2 == false) {
struct roc_sglist_comp *scatter_comp, *gather_comp;
uint32_t g_size_bytes, s_size_bytes;
@@ -234,7 +234,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Input Gather List */
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -242,7 +242,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -270,7 +270,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -278,7 +278,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 3d0db72775..3e9f1e7efb 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -132,7 +132,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
i = fill_sg_comp(gather_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -146,7 +146,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -228,7 +228,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
*/
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -239,7 +239,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 2ae81d2f90..a5c4365631 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,10 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_MAX_CAPS 9
+
/**
* Device private data
*/
@@ -23,8 +24,7 @@ struct cnxk_cpt_vf {
uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
- struct rte_cryptodev_capabilities
- sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 2676b52832..178f510a63 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -20,13 +20,14 @@
RTE_DIM(caps_##name)); \
} while (0)
-#define SEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+#define SEC_IPSEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
do { \
if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
(hw_caps[CPT_ENG_TYPE_IE].name) || \
(hw_caps[CPT_ENG_TYPE_AE].name)) \
- sec_caps_add(cnxk_caps, cur_pos, sec_caps_##name, \
- RTE_DIM(sec_caps_##name)); \
+ sec_ipsec_caps_add(cnxk_caps, cur_pos, \
+ sec_ipsec_caps_##name, \
+ RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
static const struct rte_cryptodev_capabilities caps_mul[] = {
@@ -1184,7 +1185,7 @@ static const struct rte_cryptodev_capabilities caps_end[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_aes[] = {
{ /* AES GCM */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1332,7 +1333,7 @@ static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_des[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_des[] = {
{ /* DES */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1375,7 +1376,7 @@ static const struct rte_cryptodev_capabilities sec_caps_des[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_sha1_sha2[] = {
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1478,7 +1479,7 @@ static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_null[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
{ /* NULL (CIPHER) */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1691,29 +1692,28 @@ cnxk_crypto_capabilities_get(struct cnxk_cpt_vf *vf)
}
static void
-sec_caps_limit_check(int *cur_pos, int nb_caps)
+sec_ipsec_caps_limit_check(int *cur_pos, int nb_caps)
{
- PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_CRYPTO_MAX_CAPS);
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS);
}
static void
-sec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
- const struct rte_cryptodev_capabilities *caps, int nb_caps)
+sec_ipsec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
{
- sec_caps_limit_check(cur_pos, nb_caps);
+ sec_ipsec_caps_limit_check(cur_pos, nb_caps);
memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
*cur_pos += nb_caps;
}
static void
-cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
- int *cur_pos)
+cn10k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos)
{
const struct rte_cryptodev_capabilities *cap;
unsigned int i;
- sec_caps_limit_check(cur_pos, 1);
+ sec_ipsec_caps_limit_check(cur_pos, 1);
/* NULL auth */
for (i = 0; i < RTE_DIM(caps_null); i++) {
@@ -1727,7 +1727,7 @@ cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
}
static void
-cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
+cn9k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
{
struct rte_cryptodev_capabilities *caps;
@@ -1747,27 +1747,26 @@ cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
}
static void
-sec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
- union cpt_eng_caps *hw_caps)
+sec_ipsec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
{
int cur_pos = 0;
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
if (roc_model_is_cn10k())
- cn10k_sec_crypto_caps_update(cnxk_caps, &cur_pos);
+ cn10k_sec_ipsec_crypto_caps_update(cnxk_caps, &cur_pos);
else
- cn9k_sec_crypto_caps_update(cnxk_caps);
+ cn9k_sec_ipsec_crypto_caps_update(cnxk_caps);
- sec_caps_add(cnxk_caps, &cur_pos, sec_caps_null,
- RTE_DIM(sec_caps_null));
- sec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, sec_ipsec_caps_null, RTE_DIM(sec_ipsec_caps_null));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
static void
-cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
+cnxk_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
sec_cap->ipsec.options.udp_encap = 1;
sec_cap->ipsec.options.copy_df = 1;
@@ -1775,7 +1774,7 @@ cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn10k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1797,7 +1796,7 @@ cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn9k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1814,22 +1813,24 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
unsigned long i;
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
- sec_crypto_caps_populate(vf->sec_crypto_caps, vf->cpt.hw_caps);
+ sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
for (i = 0; i < RTE_DIM(sec_caps_templ) - 1; i++) {
- vf->sec_caps[i].crypto_capabilities = vf->sec_crypto_caps;
- cnxk_sec_caps_update(&vf->sec_caps[i]);
+ if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ vf->sec_caps[i].crypto_capabilities = vf->sec_ipsec_crypto_caps;
- if (roc_model_is_cn10k())
- cn10k_sec_caps_update(&vf->sec_caps[i]);
+ cnxk_sec_ipsec_caps_update(&vf->sec_caps[i]);
- if (roc_model_is_cn9k())
- cn9k_sec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn10k())
+ cn10k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn9k())
+ cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ }
}
}
diff --git a/drivers/crypto/cnxk/cnxk_sg.h b/drivers/crypto/cnxk/cnxk_sg.h
index 65244199bd..aa074581d7 100644
--- a/drivers/crypto/cnxk/cnxk_sg.h
+++ b/drivers/crypto/cnxk/cnxk_sg.h
@@ -129,7 +129,7 @@ fill_sg_comp_from_iov(struct roc_sglist_comp *list, uint32_t i, struct roc_se_io
}
static __rte_always_inline uint32_t
-fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
@@ -150,7 +150,7 @@ fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte
}
static __rte_always_inline uint32_t
-fill_ipsec_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 12/24] common/cnxk: update opad-ipad gen to handle TLS
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (10 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 13/24] common/cnxk: add TLS record contexts Anoob Joseph
` (13 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
For TLS opcodes, the ipad is at offset 64, unlike the packed layout used
for IPsec where it immediately follows the truncated opad at offset 24.
Extend the function to handle TLS contexts as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 15 ++++++++-------
drivers/common/cnxk/cnxk_security.h | 3 ++-
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 81991c4697..bdb04fe142 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -9,7 +9,8 @@
#include "roc_api.h"
void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -29,11 +30,11 @@ cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_op
switch (auth_xform->auth.algo) {
case RTE_CRYPTO_AUTH_MD5_HMAC:
roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
@@ -191,7 +192,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -740,7 +741,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
switch (length) {
@@ -1373,7 +1374,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
@@ -1440,7 +1441,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index fabf694df4..86ec657cb0 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -70,6 +70,7 @@ int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipse
struct roc_ie_on_outb_sa *out_sa);
__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls);
#endif /* _CNXK_SECURITY_H__ */
--
2.25.1
* [PATCH v2 13/24] common/cnxk: add TLS record contexts
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (11 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
` (12 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add TLS record read and write contexts.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_cpt.h | 4 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 199 ++++++++++++++++++++++++++++
drivers/common/cnxk/roc_se.h | 11 ++
3 files changed, 211 insertions(+), 3 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 001e71c55e..5a2b5caeb0 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -55,6 +55,7 @@
#define ROC_CPT_AES_CBC_IV_LEN 16
#define ROC_CPT_SHA1_HMAC_LEN 12
#define ROC_CPT_SHA2_HMAC_LEN 16
+#define ROC_CPT_DES_IV_LEN 8
#define ROC_CPT_DES3_KEY_LEN 24
#define ROC_CPT_AES128_KEY_LEN 16
@@ -71,9 +72,6 @@
#define ROC_CPT_DES_BLOCK_LENGTH 8
#define ROC_CPT_AES_BLOCK_LENGTH 16
-#define ROC_CPT_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define ROC_CPT_AES_CBC_ROUNDUP_BYTE_LEN 16
-
/* Salt length for AES-CTR/GCM/CCM and AES-GMAC */
#define ROC_CPT_SALT_LEN 4
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
new file mode 100644
index 0000000000..61955ef4d1
--- /dev/null
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __ROC_IE_OT_TLS_H__
+#define __ROC_IE_OT_TLS_H__
+
+#include "roc_platform.h"
+
+#define ROC_IE_OT_TLS_CTX_ILEN 1
+#define ROC_IE_OT_TLS_CTX_HDR_SIZE 1
+#define ROC_IE_OT_TLS_AR_WIN_SIZE_MAX 4096
+#define ROC_IE_OT_TLS_LOG_MIN_AR_WIN_SIZE_M1 5
+
+/* u64 array size to fit anti replay window bits */
+#define ROC_IE_OT_TLS_AR_WINBITS_SZ \
+ (PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
+
+/* CN10K TLS opcodes */
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+
+#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
+#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
+#define ROC_IE_OT_TLS_CTX_MAX_IV_LEN 16
+
+enum roc_ie_ot_tls_mac_type {
+ ROC_IE_OT_TLS_MAC_MD5 = 1,
+ ROC_IE_OT_TLS_MAC_SHA1 = 2,
+ ROC_IE_OT_TLS_MAC_SHA2_256 = 4,
+ ROC_IE_OT_TLS_MAC_SHA2_384 = 5,
+ ROC_IE_OT_TLS_MAC_SHA2_512 = 6,
+};
+
+enum roc_ie_ot_tls_cipher_type {
+ ROC_IE_OT_TLS_CIPHER_3DES = 1,
+ ROC_IE_OT_TLS_CIPHER_AES_CBC = 3,
+ ROC_IE_OT_TLS_CIPHER_AES_GCM = 7,
+ ROC_IE_OT_TLS_CIPHER_AES_CCM = 10,
+};
+
+enum roc_ie_ot_tls_ver {
+ ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
+ ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+};
+
+enum roc_ie_ot_tls_aes_key_len {
+ ROC_IE_OT_TLS_AES_KEY_LEN_128 = 1,
+ ROC_IE_OT_TLS_AES_KEY_LEN_256 = 3,
+};
+
+enum {
+ ROC_IE_OT_TLS_IV_SRC_DEFAULT = 0,
+ ROC_IE_OT_TLS_IV_SRC_FROM_SA = 1,
+};
+
+struct roc_ie_ot_tls_read_ctx_update_reg {
+ uint64_t ar_base;
+ uint64_t ar_valid_mask;
+ uint64_t hard_life;
+ uint64_t soft_life;
+ uint64_t mib_octs;
+ uint64_t mib_pkts;
+ uint64_t ar_winbits[ROC_IE_OT_TLS_AR_WINBITS_SZ];
+};
+
+union roc_ie_ot_tls_param2 {
+ uint16_t u16;
+ struct {
+ uint8_t msg_type;
+ uint8_t rsvd;
+ } s;
+};
+
+struct roc_ie_ot_tls_read_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t ar_win : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t ctx_id : 16;
+
+ uint64_t orig_pkt_fabs : 1;
+ uint64_t orig_pkt_free : 1;
+ uint64_t pkind : 6;
+
+ uint64_t rsvd0 : 1;
+ uint64_t et_ovrwr : 1;
+ uint64_t pkt_output : 2;
+ uint64_t pkt_format : 1;
+ uint64_t defrag_opt : 2;
+ uint64_t x2p_dst : 1;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd1 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd2 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd3;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t rsvd4 : 50;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd5;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 - Word32 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+};
+
+struct roc_ie_ot_tls_write_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t rsvd0 : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t rsvd1 : 32;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd2 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd3 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd4;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t iv_at_cptr : 1;
+ uint64_t rsvd5 : 49;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd6;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 */
+ uint64_t w26_rsvd7;
+
+ /* Word27 */
+ uint64_t seq_num;
+};
+#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d8cbd58c9a..abb8c6a149 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -5,6 +5,8 @@
#ifndef __ROC_SE_H__
#define __ROC_SE_H__
+#include "roc_constants.h"
+
/* SE opcodes */
#define ROC_SE_MAJOR_OP_FC 0x33
#define ROC_SE_FC_MINOR_OP_ENCRYPT 0x0
@@ -162,6 +164,15 @@ typedef enum {
ROC_SE_ERR_GC_ICV_MISCOMPARE = 0x4c,
ROC_SE_ERR_GC_DATA_UNALIGNED = 0x4d,
+ ROC_SE_ERR_SSL_RECORD_LEN_INVALID = 0x82,
+ ROC_SE_ERR_SSL_CTX_LEN_INVALID = 0x83,
+ ROC_SE_ERR_SSL_CIPHER_UNSUPPORTED = 0x84,
+ ROC_SE_ERR_SSL_MAC_UNSUPPORTED = 0x85,
+ ROC_SE_ERR_SSL_VERSION_UNSUPPORTED = 0x86,
+ ROC_SE_ERR_SSL_MAC_MISMATCH = 0x89,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ_OUT_OF_WINDOW = 0xC1,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ = 0xC9,
+
/* API Layer */
ROC_SE_ERR_REQ_PENDING = 0xfe,
ROC_SE_ERR_REQ_TIMEOUT = 0xff,
--
2.25.1
* [PATCH v2 14/24] crypto/cnxk: separate IPsec from security common code
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (12 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 13/24] common/cnxk: add TLS record contexts Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
` (11 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
The current structs and functions assume IPsec is the only security
offload. Separate the IPsec-specific code from the common security code
so that TLS support can be added.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 127 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 61 +++++++++++
drivers/crypto/cnxk/cn10k_ipsec.c | 127 +++-------------------
drivers/crypto/cnxk/cn10k_ipsec.h | 45 +++-----
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 1 +
drivers/crypto/cnxk/meson.build | 1 +
7 files changed, 218 insertions(+), 146 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 2fd4df3c5d..5ed918e18e 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -12,7 +12,7 @@
#include "cn10k_cryptodev.h"
#include "cn10k_cryptodev_ops.h"
-#include "cn10k_ipsec.h"
+#include "cn10k_cryptodev_sec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_capabilities.h"
#include "cnxk_cryptodev_sec.h"
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
new file mode 100644
index 0000000000..0fd0a5b03c
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <rte_security.h>
+
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev_ops.h"
+
+static int
+cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
+ struct rte_security_session *sess)
+{
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_vf *vf;
+ struct cnxk_cpt_qp *qp;
+
+ if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL) {
+ plt_err("Setup cryptodev queue pair before creating security session");
+ return -EPERM;
+ }
+
+ vf = crypto_dev->data->dev_private;
+
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
+ }
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+
+ return -EINVAL;
+}
+
+static unsigned int
+cn10k_sec_session_get_size(void *dev __rte_unused)
+{
+ return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
+}
+
+static int
+cn10k_sec_session_stats_get(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_stats *stats)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_stats_get(qp, cn10k_sec_sess, stats);
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_session_conf *conf)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+ struct cnxk_cpt_vf *vf;
+
+ if (sec_sess == NULL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL)
+ return -EINVAL;
+
+ vf = crypto_dev->data->dev_private;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_session_update(vf, qp, cn10k_sec_sess, conf);
+
+ return -ENOTSUP;
+}
+
+/* Update platform specific security ops */
+void
+cn10k_sec_ops_override(void)
+{
+ /* Update platform specific ops */
+ cnxk_sec_ops.session_create = cn10k_sec_session_create;
+ cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
+ cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
+ cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
+ cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
+}
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
new file mode 100644
index 0000000000..02fd35eab7
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_CRYPTODEV_SEC_H__
+#define __CN10K_CRYPTODEV_SEC_H__
+
+#include <rte_security.h>
+
+#include "roc_constants.h"
+#include "roc_cpt.h"
+
+#include "cn10k_ipsec.h"
+
+struct cn10k_sec_session {
+ struct rte_security_session rte_sess;
+
+ /** PMD private space */
+
+ enum rte_security_session_protocol proto;
+ /** Pre-populated CPT inst words */
+ struct cnxk_cpt_inst_tmpl inst;
+ uint16_t max_extended_len;
+ uint16_t iv_offset;
+ uint8_t iv_length;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
+ /** Queue pair */
+ struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
+
+ /**
+ * End of SW mutable area
+ */
+ union {
+ struct cn10k_ipsec_sa sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+static inline uint64_t
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *cptr)
+{
+ union cpt_inst_w7 w7;
+
+ w7.u64 = 0;
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
+ w7.s.ctx_val = 1;
+ w7.s.cptr = (uint64_t)cptr;
+ rte_mb();
+
+ return w7.u64;
+}
+
+void cn10k_sec_ops_override(void);
+
+#endif /* __CN10K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index a9c673ea83..74d6cd70d1 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -11,6 +11,7 @@
#include <rte_udp.h>
#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -19,20 +20,6 @@
#include "roc_api.h"
-static uint64_t
-cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
-{
- union cpt_inst_w7 w7;
-
- w7.u64 = 0;
- w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
- w7.s.ctx_val = 1;
- w7.s.cptr = (uint64_t)sa;
- rte_mb();
-
- return w7.u64;
-}
-
static int
cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -260,29 +247,19 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
return ret;
}
-static int
-cn10k_ipsec_session_create(void *dev,
+int
+cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm,
struct rte_security_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_vf *vf;
- struct cnxk_cpt_qp *qp;
int ret;
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL) {
- plt_err("Setup cpt queue pair before creating security session");
- return -EPERM;
- }
-
ret = cnxk_ipsec_xform_verify(ipsec_xfrm, crypto_xfrm);
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
@@ -293,38 +270,15 @@ cn10k_ipsec_session_create(void *dev,
(struct cn10k_sec_session *)sess);
}
-static int
-cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
- struct rte_security_session *sess)
-{
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -EINVAL;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
- }
- return -ENOTSUP;
-}
-
-static int
-cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
- struct cn10k_sec_session *sess;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
struct roc_cpt_lf *lf;
void *sa_dptr = NULL;
int ret;
- sess = (struct cn10k_sec_session *)sec_sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (unlikely(qp == NULL))
- return -ENOTSUP;
-
lf = &qp->lf;
sa = &sess->sa;
@@ -374,48 +328,18 @@ cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess
return 0;
}
-static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats)
{
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
- if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
- return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
-
- return -EINVAL;
-}
-
-static unsigned int
-cn10k_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
-}
-
-static int
-cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
- struct rte_security_stats *stats)
-{
- struct rte_cryptodev *crypto_dev = device;
struct roc_ot_ipsec_outb_sa *out_sa;
struct roc_ot_ipsec_inb_sa *in_sa;
- struct cn10k_sec_session *priv;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
-
- if (unlikely(sess == NULL))
- return -EINVAL;
-
- priv = (struct cn10k_sec_session *)sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
- sa = &priv->sa;
+ sa = &sess->sa;
- if (priv->ipsec.is_outbound) {
+ if (sess->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
@@ -432,23 +356,13 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
return 0;
}
-static int
-cn10k_sec_session_update(void *device, struct rte_security_session *sess,
- struct rte_security_session_conf *conf)
+int
+cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess, struct rte_security_session_conf *conf)
{
- struct rte_cryptodev *crypto_dev = device;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_qp *qp;
- struct cnxk_cpt_vf *vf;
int ret;
- if (sess == NULL)
- return -EINVAL;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
-
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return -ENOTSUP;
@@ -456,23 +370,8 @@ cn10k_sec_session_update(void *device, struct rte_security_session *sess,
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
return cn10k_ipsec_outb_sa_create(roc_cpt, &qp->lf, &conf->ipsec, conf->crypto_xform,
(struct cn10k_sec_session *)sess);
}
-
-/* Update platform specific security ops */
-void
-cn10k_sec_ops_override(void)
-{
- /* Update platform specific ops */
- cnxk_sec_ops.session_create = cn10k_sec_session_create;
- cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
- cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
- cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
- cnxk_sec_ops.session_update = cn10k_sec_session_update;
- cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
- cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
-}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 2b7a3e6acf..0d1e14a065 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -11,9 +11,12 @@
#include "roc_constants.h"
#include "roc_ie_ot.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
#include "cnxk_ipsec.h"
-typedef void *CN10K_SA_CONTEXT_MARKER[0];
+/* Forward declaration */
+struct cn10k_sec_session;
struct cn10k_ipsec_sa {
union {
@@ -24,34 +27,14 @@ struct cn10k_ipsec_sa {
};
} __rte_aligned(ROC_ALIGN);
-struct cn10k_sec_session {
- struct rte_security_session rte_sess;
-
- /** PMD private space */
-
- enum rte_security_session_protocol proto;
- /** Pre-populated CPT inst words */
- struct cnxk_cpt_inst_tmpl inst;
- uint16_t max_extended_len;
- uint16_t iv_offset;
- uint8_t iv_length;
- union {
- struct {
- uint8_t ip_csum;
- bool is_outbound;
- } ipsec;
- };
- /** Queue pair */
- struct cnxk_cpt_qp *qp;
- /** Userdata to be set for Rx inject */
- void *userdata;
-
- /**
- * End of SW mutable area
- */
- struct cn10k_ipsec_sa sa;
-} __rte_aligned(ROC_ALIGN);
-
-void cn10k_sec_ops_override(void);
-
+int cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+int cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+int cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats);
+int cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess,
+ struct rte_security_session_conf *conf);
#endif /* __CN10K_IPSEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index af2c85022e..a30b8e413d 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -11,6 +11,7 @@
#include "roc_ie.h"
#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index 3d9a0dbbf0..d6fafd43d9 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -14,6 +14,7 @@ sources = files(
'cn9k_ipsec.c',
'cn10k_cryptodev.c',
'cn10k_cryptodev_ops.c',
+ 'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
--
2.25.1
* [PATCH v2 15/24] crypto/cnxk: add TLS record session ops
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS record session ops for creating and destroying security
sessions. Add support for both read and write sessions.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 8 +
drivers/crypto/cnxk/cn10k_tls.c | 758 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 802 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 02fd35eab7..33fd3aa398 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -11,6 +11,7 @@
#include "roc_cpt.h"
#include "cn10k_ipsec.h"
+#include "cn10k_tls.h"
struct cn10k_sec_session {
struct rte_security_session rte_sess;
@@ -28,6 +29,12 @@ struct cn10k_sec_session {
uint8_t ip_csum;
bool is_outbound;
} ipsec;
+ struct {
+ uint8_t enable_padding : 1;
+ uint8_t hdr_len : 4;
+ uint8_t rvsd : 3;
+ bool is_write;
+ } tls;
};
/** Queue pair */
struct cnxk_cpt_qp *qp;
@@ -39,6 +46,7 @@ struct cn10k_sec_session {
*/
union {
struct cn10k_ipsec_sa sa;
+ struct cn10k_tls_record tls_rec;
};
} __rte_aligned(ROC_ALIGN);
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
new file mode 100644
index 0000000000..7dd61aa159
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -0,0 +1,758 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#include <cryptodev_pmd.h>
+
+#include "roc_cpt.h"
+#include "roc_se.h"
+
+#include "cn10k_cryptodev_sec.h"
+#include "cn10k_tls.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_security.h"
+
+static int
+tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = crypto_xform->cipher.algo;
+ uint16_t keylen = crypto_xform->cipher.key.length;
+
+ if (((c_algo == RTE_CRYPTO_CIPHER_NULL) && (keylen == 0)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_3DES_CBC) && (keylen == 24)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_AES_CBC) && ((keylen == 16) || (keylen == 32))))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_auth_algorithm a_algo = crypto_xform->auth.algo;
+ uint16_t keylen = crypto_xform->auth.key.length;
+
+ if (((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) && (keylen == 16)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) && (keylen == 20)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) && (keylen == 32)))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_aead_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ uint16_t keylen = crypto_xform->aead.key.length;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
+ return -EINVAL;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
+ return -EINVAL;
+
+ if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ if ((keylen == 16) || (keylen == 32))
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int
+cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ struct rte_crypto_sym_xform *auth_xform, *cipher_xform = NULL;
+ int ret = 0;
+
+ if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ return -EINVAL;
+
+ if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
+ return -EINVAL;
+
+ if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_xform_aead_verify(tls_xform, crypto_xform);
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
+ /* Egress */
+
+ /* First should be for auth in Egress */
+ if (crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return -EINVAL;
+
+ /* Next if present, should be for cipher in Egress */
+ if ((crypto_xform->next != NULL) &&
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER))
+ return -EINVAL;
+
+ auth_xform = crypto_xform;
+ cipher_xform = crypto_xform->next;
+ } else {
+ /* Ingress */
+
+ /* First can be for auth only when next is NULL in Ingress. */
+ if ((crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) &&
+ (crypto_xform->next != NULL))
+ return -EINVAL;
+ else if ((crypto_xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH))
+ return -EINVAL;
+
+ cipher_xform = crypto_xform;
+ auth_xform = crypto_xform->next;
+ }
+
+ if (cipher_xform) {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ } else {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ }
+
+ if (cipher_xform)
+ ret = tls_xform_cipher_verify(cipher_xform);
+
+ if (!ret)
+ return tls_xform_auth_verify(auth_xform);
+
+ return ret;
+}
+
+static int
+tls_write_rlens_get(struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ enum rte_crypto_cipher_algorithm c_algo = RTE_CRYPTO_CIPHER_NULL;
+ enum rte_crypto_auth_algorithm a_algo = RTE_CRYPTO_AUTH_NULL;
+ uint8_t roundup_byte, tls_hdr_len;
+ uint8_t mac_len, iv_len;
+
+ switch (tls_xfrm->ver) {
+ case RTE_SECURITY_VERSION_TLS_1_2:
+ case RTE_SECURITY_VERSION_TLS_1_3:
+ tls_hdr_len = 5;
+ break;
+ case RTE_SECURITY_VERSION_DTLS_1_2:
+ tls_hdr_len = 13;
+ break;
+ default:
+ tls_hdr_len = 0;
+ break;
+ }
+
+ /* Get Cipher and Auth algo */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_hdr_len + ROC_CPT_AES_GCM_IV_LEN + ROC_CPT_AES_GCM_MAC_LEN;
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ c_algo = crypto_xfrm->cipher.algo;
+ if (crypto_xfrm->next)
+ a_algo = crypto_xfrm->next->auth.algo;
+ } else {
+ a_algo = crypto_xfrm->auth.algo;
+ if (crypto_xfrm->next)
+ c_algo = crypto_xfrm->next->cipher.algo;
+ }
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ roundup_byte = 4;
+ iv_len = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ roundup_byte = ROC_CPT_DES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_DES_IV_LEN;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ roundup_byte = ROC_CPT_AES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_AES_CBC_IV_LEN;
+ break;
+ default:
+ roundup_byte = 0;
+ iv_len = 0;
+ break;
+ }
+
+ switch (a_algo) {
+ case RTE_CRYPTO_AUTH_NULL:
+ mac_len = 0;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ mac_len = 16;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ mac_len = 20;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ mac_len = 32;
+ break;
+ default:
+ mac_len = 0;
+ break;
+ }
+
+ return tls_hdr_len + iv_len + mac_len + roundup_byte;
+}
+
+static void
+tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static void
+tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static size_t
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t size;
+
+ /* Size varies with the anti-replay window */
+ size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+
+ if (sa->w0.s.ar_win)
+ size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
+
+ return size;
+}
+
+static int
+tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint64_t *tmp, *tmp_key;
+ uint32_t replay_win_sz;
+ uint8_t *cipher_key;
+ int i, length = 0;
+ size_t offset;
+
+ /* Initialize the SA */
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ cipher_key = read_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ memcpy(cipher_key, key, length);
+ }
+
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ tmp = (uint64_t *)read_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp[i] = rte_be_to_cpu_64(tmp[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ /* Only power-of-two window sizes are supported */
+ replay_win_sz = tls_xfrm->dtls_1_2.ar_win_sz;
+ if (replay_win_sz) {
+ if (!rte_is_power_of_2(replay_win_sz) ||
+ replay_win_sz > ROC_IE_OT_TLS_AR_WIN_SIZE_MAX)
+ return -ENOTSUP;
+
+ read_sa->w0.s.ar_win = rte_log2_u32(replay_win_sz) - 5;
+ }
+ }
+
+ read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ read_sa->w0.s.aop_valid = 1;
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+
+ /* Word offset for HW managed CTX field */
+ read_sa->w0.s.hw_ctx_off = offset / 8;
+ read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ }
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint8_t *cipher_key;
+ uint64_t *tmp_key;
+ int i, length = 0;
+ size_t offset;
+
+ cipher_key = write_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ if (key != NULL && length != 0) {
+ /* Copy encryption key */
+ memcpy(cipher_key, key, length);
+ }
+ }
+
+ if (auth_xfrm != NULL) {
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ }
+
+ tmp_key = (uint64_t *)write_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ write_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+ write_sa->w0.s.aop_valid = 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->seq_num -= 1;
+ }
+
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+
+#ifdef LA_IPSEC_DEBUG
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+#else
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
+#endif
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_read_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *read_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ read_sa = &tls->read_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Could not allocate memory for SA DPTR");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill read session parameters");
+ goto sa_dptr_free;
+ }
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+ }
+
+ if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
+ sec_sess->tls.hdr_len = 13;
+ else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
+ sec_sess->tls.hdr_len = 5;
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* Enable MIB counters */
+ sa_dptr->w0.s.count_mib_bytes = 1;
+ sa_dptr->w0.s.count_mib_pkts = 1;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
+
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz and ctx_size fields */
+ memcpy(read_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, read_sa, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (ret) {
+ plt_err("Could not write read session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, read_sa, true);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+static int
+cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_write_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *write_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ write_sa = &tls->write_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Could not allocate memory for SA DPTR");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_write_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill write session parameters");
+ goto sa_dptr_free;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->next->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
+ }
+
+ sec_sess->tls.is_write = true;
+ sec_sess->tls.enable_padding = tls_xfrm->options.extra_padding_enable;
+ sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
+
+ memset(write_sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz and ctx_size fields */
+ memcpy(write_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, write_sa, sizeof(struct roc_ie_ot_tls_write_sa));
+ if (ret) {
+ plt_err("Could not write TLS write session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, write_sa, false);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+int
+cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess)
+{
+ struct roc_cpt *roc_cpt;
+ int ret;
+
+ ret = cnxk_tls_xform_verify(tls_xfrm, crypto_xfrm);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+ return cn10k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+ else
+ return cn10k_tls_write_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+}
+
+int
+cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
+{
+ struct cn10k_tls_record *tls;
+ struct roc_cpt_lf *lf;
+ void *sa_dptr = NULL;
+ int ret;
+
+ lf = &qp->lf;
+
+ tls = &sess->tls_rec;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, &tls->read_sa, false);
+
+ ret = -1;
+
+ if (sess->tls.is_write) {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_write_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->write_sa,
+ sizeof(struct roc_ie_ot_tls_write_sa));
+ }
+ } else {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_read_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->read_sa,
+ sizeof(struct roc_ie_ot_tls_read_sa));
+ }
+ }
+
+ plt_free(sa_dptr);
+
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &tls->read_sa);
+ }
+
+ return 0;
+}
diff --git a/drivers/crypto/cnxk/cn10k_tls.h b/drivers/crypto/cnxk/cn10k_tls.h
new file mode 100644
index 0000000000..c477d51169
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_TLS_H__
+#define __CN10K_TLS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie_ot_tls.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+
+/* Forward declaration */
+struct cn10k_sec_session;
+
+struct cn10k_tls_record {
+ union {
+ /** Read SA */
+ struct roc_ie_ot_tls_read_sa read_sa;
+ /** Write SA */
+ struct roc_ie_ot_tls_write_sa write_sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+int cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+
+int cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+
+#endif /* __CN10K_TLS_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index d6fafd43d9..ee0c65e32a 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files(
'cn10k_cryptodev_ops.c',
'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
+ 'cn10k_tls.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 16/24] crypto/cnxk: add TLS record datapath handling
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (14 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 17/24] crypto/cnxk: add TLS capability Anoob Joseph
` (9 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS record handling in the datapath.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 57 +++-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 7 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 ++++++++++++++++++++++
3 files changed, 380 insertions(+), 6 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 084c8d3a24..843a111b0e 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -20,11 +20,14 @@
#include "roc_sso_dp.h"
#include "cn10k_cryptodev.h"
-#include "cn10k_cryptodev_ops.h"
#include "cn10k_cryptodev_event_dp.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_eventdev.h"
#include "cn10k_ipsec.h"
#include "cn10k_ipsec_la_ops.h"
+#include "cn10k_tls.h"
+#include "cn10k_tls_ops.h"
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -101,6 +104,18 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+ if (sess->tls.is_write)
+ return process_tls_write(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
+ is_sg_ver2);
+ else
+ return process_tls_read(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
+}
+
static __rte_always_inline int __rte_hot
cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
@@ -108,6 +123,8 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cpt_sec_tls_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
return 0;
}
@@ -812,7 +829,7 @@ cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16
}
static inline void
-cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+cn10k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
{
struct rte_mbuf *mbuf = cop->sym->m_src;
const uint16_t m_len = res->rlen;
@@ -849,10 +866,38 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
}
static inline void
-cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
- struct rte_crypto_op *cop,
- struct cpt_inflight_req *infl_req,
- struct cpt_cn10k_res_s *res)
+cn10k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ const uint16_t m_len = res->rlen;
+
+ if (!res->uc_compcode) {
+ if (mbuf->next == NULL)
+ mbuf->data_len = m_len;
+ mbuf->pkt_len = m_len;
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ plt_err("crypto op failed with UC compcode: 0x%x", res->uc_compcode);
+ }
+}
+
+static inline void
+cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct cn10k_sec_session *sess;
+
+ sess = sym_op->session;
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ cn10k_cpt_ipsec_post_process(cop, res);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ cn10k_cpt_tls_post_process(cop, res);
+}
+
+static inline void
+cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
+ struct cpt_inflight_req *infl_req, struct cpt_cn10k_res_s *res)
{
const uint8_t uc_compcode = res->uc_compcode;
const uint8_t compcode = res->compcode;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
index 0fd0a5b03c..300a8e4f94 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -32,6 +32,10 @@ cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
}
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_tls_record_session_create(vf, qp, &conf->tls_record,
+ conf->crypto_xform, sess);
+
return -ENOTSUP;
}
@@ -54,6 +58,9 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_sec_tls_session_destroy(qp, cn10k_sec_sess);
+
return -EINVAL;
}
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
new file mode 100644
index 0000000000..a5d38bacbb
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -0,0 +1,322 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_TLS_OPS_H__
+#define __CN10K_TLS_OPS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie.h"
+
+#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
+
+static __rte_always_inline int
+process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+#ifdef LA_IPSEC_DEBUG
+ struct roc_ie_ot_tls_write_sa *write_sa;
+#endif
+ struct rte_mbuf *m_src = sym_op->m_src;
+ struct rte_mbuf *last_seg;
+ union cpt_inst_w4 w4;
+ void *m_data = NULL;
+ uint8_t *in_buffer;
+
+#ifdef LA_IPSEC_DEBUG
+ write_sa = &sess->tls_rec.write_sa;
+ if (write_sa->w2.s.iv_at_cptr == ROC_IE_OT_TLS_IV_SRC_FROM_SA) {
+
+ uint8_t *iv = PLT_PTR_ADD(write_sa->cipher_key, 32);
+
+ if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_GCM) {
+ uint32_t *tmp;
+
+ /* For GCM, the IV and salt are laid out as follows:
+ * iv[0-3]: lower bytes of IV in BE format.
+ * iv[4-7]: salt / nonce.
+ * iv[12-15]: upper bytes of IV in BE format.
+ */
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 4);
+ tmp = (uint32_t *)iv;
+ *tmp = rte_be_to_cpu_32(*tmp);
+
+ memcpy(iv + 12,
+ rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset + 4), 4);
+ tmp = (uint32_t *)(iv + 12);
+ *tmp = rte_be_to_cpu_32(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_CBC) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 16);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ tmp = (uint64_t *)(iv + 8);
+ *tmp = rte_be_to_cpu_64(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_3DES) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 8);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ }
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, write_sa);
+ rte_delay_ms(1);
+ }
+#else
+ RTE_SET_USED(lf);
+#endif
+ /* Single buffer direct mode */
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room");
+ return -ENOMEM;
+ }
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.param1 = m_src->data_len;
+ w4.s.dlen = m_src->data_len;
+
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+ w4.s.param1 = w4.s.dlen;
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ union cpt_inst_w4 w4;
+ uint8_t *in_buffer;
+ void *m_data;
+
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = m_src->data_len;
+ w4.s.param1 = m_src->data_len;
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param1 = w4.s.dlen;
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+#endif /* __CN10K_TLS_OPS_H__ */
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 17/24] crypto/cnxk: add TLS capability
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (15 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 16/24] crypto/cnxk: add TLS record datapath handling Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
` (8 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.2 and DTLS 1.2 record read and write capabilities.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 2 +
drivers/common/cnxk/hw/cpt.h | 3 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 12 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 210 ++++++++++++++++++
4 files changed, 223 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index fa30b46ead..0ebbae9f4e 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,6 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
+ and DTLS 1.2.
Removed Items
-------------
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index edab8a5d83..2620965606 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -80,7 +80,8 @@ union cpt_eng_caps {
uint64_t __io sg_ver2 : 1;
uint64_t __io sm2 : 1;
uint64_t __io pdcp_chain_zuc256 : 1;
- uint64_t __io reserved_38_63 : 26;
+ uint64_t __io tls : 1;
+ uint64_t __io reserved_39_63 : 25;
};
};
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index a5c4365631..8c8c58a76b 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,11 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS 2
+#define CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS 6
+#define CNXK_SEC_MAX_CAPS 17
/**
* Device private data
@@ -25,6 +27,10 @@ struct cnxk_cpt_vf {
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_3_crypto_caps[CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities
+ sec_dtls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 178f510a63..73100377d9 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -30,6 +30,16 @@
RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
+#define SEC_TLS12_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls12_caps_add(cnxk_caps, cur_pos, \
+ sec_tls12_caps_##name, \
+ RTE_DIM(sec_tls12_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1502,6 +1512,125 @@ static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls12_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 13,
+ .max = 13,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_des[] = {
+ { /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ { /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+};
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1591,6 +1720,46 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -1807,6 +1976,35 @@ cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
sec_cap->ipsec.options.esn = 1;
}
+static void
+sec_tls12_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls12_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls12_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+
+ sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -1815,6 +2013,11 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
+ if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
+ sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ }
+
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
@@ -1830,6 +2033,13 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (roc_model_is_cn9k())
cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ } else if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD) {
+ if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_TLS_1_3)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_3_crypto_caps;
+ else if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_dtls_1_2_crypto_caps;
+ else
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_2_crypto_caps;
}
}
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (16 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 17/24] crypto/cnxk: add TLS capability Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode Anoob Joseph
` (7 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add PMD APIs to allow applications to directly submit CPT instructions
to hardware.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/rel_notes/release_24_03.rst | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 75 ++++++++---------
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 3 +
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 56 -------------
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 99 +++++++++++++++++++++++
drivers/crypto/cnxk/meson.build | 2 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +++++++++++
10 files changed, 252 insertions(+), 94 deletions(-)
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..69f1a54511 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -49,6 +49,7 @@ The public API headers are grouped by topics:
[iavf](@ref rte_pmd_iavf.h),
[bnxt](@ref rte_pmd_bnxt.h),
[cnxk](@ref rte_pmd_cnxk.h),
+ [cnxk_crypto](@ref rte_pmd_cnxk_crypto.h),
[cnxk_eventdev](@ref rte_pmd_cnxk_eventdev.h),
[cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
[dpaa](@ref rte_pmd_dpaa.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..6d11de580e 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -6,6 +6,7 @@ PROJECT_NUMBER = @VERSION@
USE_MDFILE_AS_MAINPAGE = @TOPDIR@/doc/api/doxy-api-index.md
INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/drivers/bus/vdev \
+ @TOPDIR@/drivers/crypto/cnxk \
@TOPDIR@/drivers/crypto/scheduler \
@TOPDIR@/drivers/dma/dpaa2 \
@TOPDIR@/drivers/event/dlb2 \
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 0ebbae9f4e..f5773bab5a 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -60,6 +60,7 @@ New Features
* Added support for Rx inject in crypto_cn10k.
* Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
and DTLS 1.2.
+ * Added PMD API to allow raw submission of instructions to CPT.
Removed Items
-------------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 843a111b0e..9f4be20ff5 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -34,13 +34,12 @@
#include "cnxk_eventdev.h"
#include "cnxk_se.h"
-#define PKTS_PER_LOOP 32
-#define PKTS_PER_STEORL 16
+#include "rte_pmd_cnxk_crypto.h"
/* Holds information required to send crypto operations in one burst */
struct ops_burst {
- struct rte_crypto_op *op[PKTS_PER_LOOP];
- uint64_t w2[PKTS_PER_LOOP];
+ struct rte_crypto_op *op[CN10K_PKTS_PER_LOOP];
+ uint64_t w2[CN10K_PKTS_PER_LOOP];
struct cn10k_sso_hws *ws;
struct cnxk_cpt_qp *qp;
uint16_t nb_ops;
@@ -252,7 +251,7 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
infl_req = &pend_q->req_queue[head];
infl_req->op_flags = 0;
@@ -267,23 +266,21 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 |
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
(uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG |
- (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 |
- (uint64_t)lmt_id;
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
}
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
ops += i;
count += i;
@@ -487,7 +484,7 @@ cn10k_cpt_vec_submit(struct vec_request vec_tbl[], uint16_t vec_tbl_len, struct
inst = (struct cpt_inst_s *)lmt_base;
again:
- burst_size = RTE_MIN(PKTS_PER_STEORL, vec_tbl_len);
+ burst_size = RTE_MIN(CN10K_PKTS_PER_STEORL, vec_tbl_len);
for (i = 0; i < burst_size; i++)
cn10k_cpt_vec_inst_fill(&vec_tbl[i], &inst[i * 2], qp, vec_tbl[0].w7);
@@ -516,7 +513,7 @@ static inline int
ca_lmtst_vec_submit(struct ops_burst *burst, struct vec_request vec_tbl[], uint16_t *vec_tbl_len,
const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
uint16_t lmt_id, len = *vec_tbl_len;
struct cpt_inst_s *inst, *inst_base;
@@ -618,11 +615,12 @@ next_op:;
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -647,7 +645,7 @@ next_op:;
static inline uint16_t
ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
struct cpt_inst_s *inst, *inst_base;
struct cpt_inflight_req *infl_req;
@@ -718,11 +716,12 @@ ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -791,7 +790,7 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev
burst.op[burst.nb_ops] = op;
/* Max nb_ops per burst check */
- if (++burst.nb_ops == PKTS_PER_LOOP) {
+ if (++burst.nb_ops == CN10K_PKTS_PER_LOOP) {
if (is_vector)
submitted = ca_lmtst_vec_submit(&burst, vec_tbl, &vec_tbl_len,
is_sg_ver2);
@@ -1146,7 +1145,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
again:
inst = (struct cpt_inst_s *)lmt_base;
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_pkts); i++) {
m = pkts[i];
sec_sess = (struct cn10k_sec_session *)sess[i];
@@ -1193,11 +1192,12 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst += 2;
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1206,7 +1206,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
rte_io_wmb();
- if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_pkts - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_pkts -= i;
pkts += i;
count += i;
@@ -1333,7 +1333,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
struct cnxk_iov iov;
index = count + i;
@@ -1355,11 +1355,12 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1368,7 +1369,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
count += i;
goto again;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index 34becede3c..406c4abc7f 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -12,6 +12,9 @@
#include "cnxk_cryptodev.h"
+#define CN10K_PKTS_PER_LOOP 32
+#define CN10K_PKTS_PER_STEORL 16
+
extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 442cd8e5a9..ac9393eacf 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -122,62 +122,6 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
-static inline void
-cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy CPT command to LMTLINE */
- roc_lmt_mov64((void *)lmtline, inst);
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
-static __plt_always_inline void
-cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy 2 CPT inst_s to LMTLINE */
-#if defined(RTE_ARCH_ARM64)
- uint64_t *s = (uint64_t *)inst;
- uint64_t *d = (uint64_t *)lmtline;
-
- vst1q_u64(&d[0], vld1q_u64(&s[0]));
- vst1q_u64(&d[2], vld1q_u64(&s[2]));
- vst1q_u64(&d[4], vld1q_u64(&s[4]));
- vst1q_u64(&d[6], vld1q_u64(&s[6]));
- vst1q_u64(&d[8], vld1q_u64(&s[8]));
- vst1q_u64(&d[10], vld1q_u64(&s[10]));
- vst1q_u64(&d[12], vld1q_u64(&s[12]));
- vst1q_u64(&d[14], vld1q_u64(&s[14]));
-#else
- roc_lmt_mov_seg((void *)lmtline, inst, 8);
-#endif
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
static uint16_t
cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
{
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index c6ec96153e..3d667094f3 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -8,8 +8,70 @@
#include <rte_compat.h>
#include <cryptodev_pmd.h>
+#include <hw/cpt.h>
+
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
+
extern struct rte_cryptodev_ops cn9k_cpt_ops;
+static inline void
+cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy CPT command to LMTLINE */
+ roc_lmt_mov64((void *)lmtline, inst);
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
+static __plt_always_inline void
+cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy 2 CPT inst_s to LMTLINE */
+#if defined(RTE_ARCH_ARM64)
+ volatile const __uint128_t *src128 = (const __uint128_t *)inst;
+ volatile __uint128_t *dst128 = (__uint128_t *)lmtline;
+
+ dst128[0] = src128[0];
+ dst128[1] = src128[1];
+ dst128[2] = src128[2];
+ dst128[3] = src128[3];
+ dst128[4] = src128[4];
+ dst128[5] = src128[5];
+ dst128[6] = src128[6];
+ dst128[7] = src128[7];
+#else
+ roc_lmt_mov_seg((void *)lmtline, inst, 8);
+#endif
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
void cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
__rte_internal
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index fd44155955..7a37e3e89c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -12,6 +12,11 @@
#include "roc_errata.h"
#include "roc_idev.h"
#include "roc_ie_on.h"
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
@@ -19,6 +24,11 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_se.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn9k_cryptodev_ops.h"
+
+#include "rte_pmd_cnxk_crypto.h"
+
#define CNXK_CPT_MAX_ASYM_OP_NUM_PARAMS 5
#define CNXK_CPT_MAX_ASYM_OP_MOD_LEN 1024
#define CNXK_CPT_META_BUF_MAX_CACHE_SIZE 128
@@ -918,3 +928,92 @@ cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id)
}
return 0;
}
+
+void *
+rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id)
+{
+ const struct rte_crypto_fp_ops *fp_ops;
+ void *qptr;
+
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qptr = fp_ops->qp.data[qp_id];
+
+ return qptr;
+}
+
+static inline void
+cnxk_crypto_cn10k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cnxk_cpt_qp *qp = qptr;
+ uint16_t i, j, lmt_id;
+ void *lmt_dst;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+
+again:
+ i = RTE_MIN(nb_inst, CN10K_PKTS_PER_LOOP);
+ lmt_dst = PLT_PTR_CAST(lmt_base);
+
+ for (j = 0; j < i; j++) {
+ rte_memcpy(lmt_dst, inst, sizeof(struct cpt_inst_s));
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ lmt_dst = RTE_PTR_ADD(lmt_dst, 2 * sizeof(struct cpt_inst_s));
+ }
+
+ rte_io_wmb();
+
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_inst - i > 0) {
+ nb_inst -= i;
+ goto again;
+ }
+}
+
+static inline void
+cnxk_crypto_cn9k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ struct cnxk_cpt_qp *qp = qptr;
+
+ const uint64_t lmt_base = qp->lf.lmt_base;
+ const uint64_t io_addr = qp->lf.io_addr;
+
+ if (unlikely(nb_inst & 1)) {
+ cn9k_cpt_inst_submit(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ nb_inst -= 1;
+ }
+
+ while (nb_inst > 0) {
+ cn9k_cpt_inst_submit_dual(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, 2 * sizeof(struct cpt_inst_s));
+ nb_inst -= 2;
+ }
+}
+
+void
+rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ if (roc_model_is_cn10k())
+ return cnxk_crypto_cn10k_submit(qptr, inst, nb_inst);
+ else if (roc_model_is_cn9k())
+ return cnxk_crypto_cn9k_submit(qptr, inst, nb_inst);
+
+ plt_err("Invalid cnxk model");
+}
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index ee0c65e32a..aa840fb7bb 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -24,8 +24,8 @@ sources = files(
'cnxk_cryptodev_sec.c',
)
+headers = files('rte_pmd_cnxk_crypto.h')
deps += ['bus_pci', 'common_cnxk', 'security', 'eventdev']
-
includes += include_directories('../../../lib/net', '../../event/cnxk')
if get_option('buildtype').contains('debug')
diff --git a/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
new file mode 100644
index 0000000000..64978a008b
--- /dev/null
+++ b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+/**
+ * @file rte_pmd_cnxk_crypto.h
+ * Marvell CNXK Crypto PMD specific functions.
+ *
+ */
+
+#ifndef _PMD_CNXK_CRYPTO_H_
+#define _PMD_CNXK_CRYPTO_H_
+
+#include <stdint.h>
+
+/**
+ * Get queue pointer of a specific queue in a cryptodev.
+ *
+ * @param dev_id
+ * Device identifier of cryptodev device.
+ * @param qp_id
+ * Index of the queue pair.
+ * @return
+ * Pointer to queue pair structure that would be the input to submit APIs.
+ */
+void *rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id);
+
+/**
+ * Submit CPT instruction (cpt_inst_s) to hardware (CPT).
+ *
+ * The ``qptr`` is a pointer obtained from ``rte_pmd_cnxk_crypto_qptr_get``. The application
+ * should make sure it does not overflow the internal hardware queues, for example by keeping
+ * the number of inflight packets within the number of descriptors configured.
+ *
+ * This API may be called only after the cryptodev and queue pair are configured and started.
+ *
+ * @param qptr
+ * Pointer obtained with ``rte_pmd_cnxk_crypto_qptr_get``.
+ * @param inst
+ * Pointer to an array of instructions prepared by application.
+ * @param nb_inst
+ * Number of instructions.
+ */
+void rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst);
+
+#endif /* _PMD_CNXK_CRYPTO_H_ */
--
2.25.1
* [PATCH v2 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (17 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
` (6 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Replace the PDCP opcode with the PDCP chain opcode.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/common/cnxk/roc_se.c | 331 +++++++++-------------------------
drivers/common/cnxk/roc_se.h | 18 +-
drivers/crypto/cnxk/cnxk_se.h | 96 +++++-----
3 files changed, 135 insertions(+), 310 deletions(-)
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 6ced4ef789..4e00268149 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -88,13 +88,20 @@ cpt_ciph_type_set(roc_se_cipher_type type, struct roc_se_ctx *ctx, uint16_t key_
fc_type = ROC_SE_FC_GEN;
break;
case ROC_SE_ZUC_EEA3:
- if (chained_op) {
- if (unlikely(key_len != 16))
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
return -1;
+ }
+ }
+ if (chained_op)
fc_type = ROC_SE_PDCP_CHAIN;
- } else {
+ else
fc_type = ROC_SE_PDCP;
- }
break;
case ROC_SE_SNOW3G_UEA2:
if (unlikely(key_len != 16))
@@ -197,33 +204,6 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
}
}
-static int
-cpt_pdcp_key_type_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t key_len)
-{
- roc_se_aes_type key_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (key_len != 16) {
- plt_err("Only key len 16 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (key_len) {
- case 16:
- key_type = ROC_SE_AES_128_BIT;
- break;
- case 32:
- key_type = ROC_SE_AES_256_BIT;
- break;
- default:
- plt_err("Invalid AES key len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.key_len = key_type;
- return 0;
-}
-
static int
cpt_pdcp_chain_key_type_get(uint16_t key_len)
{
@@ -247,36 +227,6 @@ cpt_pdcp_chain_key_type_get(uint16_t key_len)
return key_type;
}
-static int
-cpt_pdcp_mac_len_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t mac_len)
-{
- roc_se_pdcp_mac_len_type mac_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (mac_len != 4) {
- plt_err("Only mac len 4 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (mac_len) {
- case 4:
- mac_type = ROC_SE_PDCP_MAC_LEN_32_BIT;
- break;
- case 8:
- mac_type = ROC_SE_PDCP_MAC_LEN_64_BIT;
- break;
- case 16:
- mac_type = ROC_SE_PDCP_MAC_LEN_128_BIT;
- break;
- default:
- plt_err("Invalid ZUC MAC len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.mac_len = mac_type;
- return 0;
-}
-
static void
cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
{
@@ -300,32 +250,27 @@ cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
}
int
-roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
- const uint8_t *key, uint16_t key_len, uint16_t mac_len)
+roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint8_t *key,
+ uint16_t key_len, uint16_t mac_len)
{
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
- struct roc_se_zuc_snow3g_ctx *zs_ctx;
struct roc_se_kasumi_ctx *k_ctx;
+ struct roc_se_pdcp_ctx *pctx;
struct roc_se_context *fctx;
uint8_t opcode_minor;
- uint8_t pdcp_alg;
bool chained_op;
- int ret;
if (se_ctx == NULL)
return -1;
- zs_ctx = &se_ctx->se_ctx.zs_ctx;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
+ pctx = &se_ctx->se_ctx.pctx;
k_ctx = &se_ctx->se_ctx.k_ctx;
fctx = &se_ctx->se_ctx.fctx;
chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
if ((type >= ROC_SE_ZUC_EIA3) && (type <= ROC_SE_KASUMI_F9_ECB)) {
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
if (!key_len)
return -1;
@@ -335,98 +280,64 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
return -1;
}
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
-
/* For ZUC/SNOW3G/Kasumi */
switch (type) {
case ROC_SE_SNOW3G_UIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.auth_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.mac_len = mac_len;
+ pctx->w0.s.auth_key_len = key_len;
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.auth_key, keyx, key_len);
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_ZUC_EIA3:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- memcpy(ctx->st.auth_key, key, key_len);
- cpt_zuc_const_update(ctx->st.auth_zuc_const,
- key_len, mac_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- ret = cpt_pdcp_mac_len_set(zs_ctx, mac_len);
- if (ret)
- return ret;
- memcpy(ci_key, key, key_len);
- if (key_len == 32)
- roc_se_zuc_bytes_swap(ci_key, key_len);
- cpt_zuc_const_update(zuc_const, key_len,
- mac_len);
- se_ctx->fc_type = ROC_SE_PDCP;
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (se_ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
+ return -1;
+ }
}
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ if (key_len == 32)
+ roc_se_zuc_bytes_swap(pctx->st.auth_key, key_len);
+ cpt_zuc_const_update(pctx->st.auth_zuc_const, key_len, mac_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
+ se_ctx->fc_type = ROC_SE_PDCP;
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_AES_CMAC_EIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.auth_key_len = key_type;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- ctx->w0.s.mac_len = mac_len;
- memcpy(ctx->st.auth_key, key, key_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- memcpy(ci_key, key, key_len);
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_AES_CMAC;
se_ctx->eia2 = 1;
se_ctx->zsk_flags = 0x1;
@@ -454,11 +365,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
se_ctx->mac_len = mac_len;
se_ctx->hash_type = type;
- pdcp_alg = zs_ctx->zuc.otk_ctx.w0.s.alg_type;
if (chained_op)
opcode_minor = se_ctx->ciph_then_auth ? 2 : 3;
- else if (roc_model_is_cn9k())
- opcode_minor = ((1 << 7) | (pdcp_alg << 5) | 1);
else
opcode_minor = ((1 << 4) | 1);
@@ -513,29 +421,18 @@ int
roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const uint8_t *key,
uint16_t key_len)
{
- bool chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
struct roc_se_context *fctx = &se_ctx->se_ctx.fctx;
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
+ struct roc_se_pdcp_ctx *pctx;
uint8_t opcode_minor = 0;
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
int i, ret;
/* For NULL cipher, no processing required. */
if (type == ROC_SE_PASSTHROUGH)
return 0;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
-
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
+ pctx = &se_ctx->se_ctx.pctx;
if ((type == ROC_SE_AES_GCM) || (type == ROC_SE_AES_CCM))
se_ctx->template_w4.s.opcode_minor = BIT(5);
@@ -615,72 +512,38 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
fctx->enc.enc_cipher = ROC_SE_DES3_CBC;
goto success;
case ROC_SE_SNOW3G_UEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.ci_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
- }
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.ci_key_len = key_len;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.ci_key, keyx, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_ZUC_EEA3:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- memcpy(ctx->st.ci_key, key, key_len);
- memcpy(ctx->st.ci_zuc_const, zuc_key128, 32);
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- } else {
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- memcpy(ci_key, key, key_len);
- if (key_len == 32) {
- roc_se_zuc_bytes_swap(ci_key, key_len);
- memcpy(zuc_const, zuc_key256, 16);
- } else
- memcpy(zuc_const, zuc_key128, 32);
- }
-
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ memcpy(pctx->st.ci_key, key, key_len);
+ if (key_len == 32) {
+ roc_se_zuc_bytes_swap(pctx->st.ci_key, key_len);
+ memcpy(pctx->st.ci_zuc_const, zuc_key256, 16);
+ } else
+ memcpy(pctx->st.ci_zuc_const, zuc_key128, 32);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_AES_CTR_EEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.ci_key_len = key_type;
- ctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ctx->st.ci_key, key, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ci_key, key, key_len);
- }
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ memcpy(pctx->st.ci_key, key, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
se_ctx->zsk_flags = 0;
goto success;
@@ -720,20 +583,6 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
return 0;
}
-void
-roc_se_ctx_swap(struct roc_se_ctx *se_ctx)
-{
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
-
- if (roc_model_is_cn9k())
- return;
-
- if (se_ctx->fc_type == ROC_SE_PDCP_CHAIN)
- return;
-
- zs_ctx->zuc.otk_ctx.w0.u64 = htobe64(zs_ctx->zuc.otk_ctx.w0.u64);
-}
-
void
roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
{
@@ -745,15 +594,13 @@ roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
case ROC_SE_FC_GEN:
ctx_len = sizeof(struct roc_se_context);
break;
+ case ROC_SE_PDCP_CHAIN:
case ROC_SE_PDCP:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_ctx);
+ ctx_len = sizeof(struct roc_se_pdcp_ctx);
break;
case ROC_SE_KASUMI:
ctx_len = sizeof(struct roc_se_kasumi_ctx);
break;
- case ROC_SE_PDCP_CHAIN:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_chain_ctx);
- break;
case ROC_SE_SM:
ctx_len = sizeof(struct roc_se_sm_context);
break;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index abb8c6a149..d62c40b310 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -246,7 +246,7 @@ struct roc_se_onk_zuc_ctx {
uint8_t zuc_const[32];
};
-struct roc_se_onk_zuc_chain_ctx {
+struct roc_se_pdcp_ctx {
union {
uint64_t u64;
struct {
@@ -278,19 +278,6 @@ struct roc_se_onk_zuc_chain_ctx {
} st;
};
-struct roc_se_zuc_snow3g_chain_ctx {
- union {
- struct roc_se_onk_zuc_chain_ctx onk_ctx;
- } zuc;
-};
-
-struct roc_se_zuc_snow3g_ctx {
- union {
- struct roc_se_onk_zuc_ctx onk_ctx;
- struct roc_se_otk_zuc_ctx otk_ctx;
- } zuc;
-};
-
struct roc_se_kasumi_ctx {
uint8_t reg_A[8];
uint8_t ci_key[16];
@@ -356,8 +343,7 @@ struct roc_se_ctx {
} w0;
union {
struct roc_se_context fctx;
- struct roc_se_zuc_snow3g_ctx zs_ctx;
- struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx;
+ struct roc_se_pdcp_ctx pctx;
struct roc_se_kasumi_ctx k_ctx;
struct roc_se_sm_context sm_ctx;
};
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 1aec7dea9f..8193e96a92 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -298,8 +298,13 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -375,7 +380,7 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -492,8 +497,13 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -567,7 +577,7 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -1617,28 +1627,34 @@ static __rte_always_inline int
cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
struct roc_se_fc_params *params, struct cpt_inst_s *inst, const bool is_sg_ver2)
{
+ /*
+ * pdcp_iv_offset is the auth IV offset relative to the cipher IV offset.
+ * It is 16 with old microcode (without ZUC 256 support) and 24 with new
+ * microcode (with ZUC 256). The reserved iv_len thus covers both cipher
+ * and auth IVs: 32B with old microcode and 48B with new microcode.
+ */
+ const int iv_len = params->pdcp_iv_offset * 2;
+ struct roc_se_ctx *se_ctx = params->ctx;
uint32_t encr_data_len, auth_data_len;
+ const int flags = se_ctx->zsk_flags;
uint32_t encr_offset, auth_offset;
union cpt_inst_w4 cpt_inst_w4;
int32_t inputlen, outputlen;
- struct roc_se_ctx *se_ctx;
uint64_t *offset_vaddr;
uint8_t pdcp_alg_type;
uint32_t mac_len = 0;
const uint8_t *iv_s;
uint8_t pack_iv = 0;
uint64_t offset_ctrl;
- int flags, iv_len;
int ret;
- se_ctx = params->ctx;
- flags = se_ctx->zsk_flags;
mac_len = se_ctx->mac_len;
cpt_inst_w4.u64 = se_ctx->template_w4.u64;
- cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP;
if (flags == 0x1) {
+ cpt_inst_w4.s.opcode_minor = 1;
iv_s = params->auth_iv_buf;
/*
@@ -1650,47 +1666,32 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
pdcp_alg_type = se_ctx->pdcp_auth_alg;
if (pdcp_alg_type != ROC_SE_PDCP_ALG_TYPE_AES_CMAC) {
- iv_len = params->auth_iv_len;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->auth_iv_len == 25)
pack_iv = 1;
- }
auth_offset = auth_offset / 8;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen =
- auth_offset + (RTE_ALIGN(auth_data_len, 8) / 8);
- } else {
- iv_len = 16;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen = auth_offset + auth_data_len;
-
- /* length should be in bits */
- auth_data_len *= 8;
+ auth_data_len = RTE_ALIGN(auth_data_len, 8) / 8;
}
- outputlen = mac_len;
+ /* consider iv len */
+ auth_offset += iv_len;
+
+ inputlen = auth_offset + auth_data_len;
+ outputlen = iv_len + mac_len;
offset_ctrl = rte_cpu_to_be_64((uint64_t)auth_offset);
+ cpt_inst_w4.s.param1 = auth_data_len;
encr_data_len = 0;
encr_offset = 0;
} else {
+ cpt_inst_w4.s.opcode_minor = 0;
iv_s = params->iv_buf;
- iv_len = params->cipher_iv_len;
pdcp_alg_type = se_ctx->pdcp_ci_alg;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->cipher_iv_len == 25)
pack_iv = 1;
- }
/*
* Microcode expects offsets in bytes
@@ -1700,6 +1701,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
encr_offset = ROC_SE_ENCR_OFFSET(d_offs);
encr_offset = encr_offset / 8;
+
/* consider iv len */
encr_offset += iv_len;
@@ -1707,10 +1709,11 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
outputlen = inputlen;
/* iv offset is 0 */
- offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16);
+ offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset);
auth_data_len = 0;
auth_offset = 0;
+ cpt_inst_w4.s.param1 = (RTE_ALIGN(encr_data_len, 8) / 8);
}
if (unlikely((encr_offset >> 16) || (auth_offset >> 8))) {
@@ -1720,12 +1723,6 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
return -1;
}
- /*
- * Lengths are expected in bits.
- */
- cpt_inst_w4.s.param1 = encr_data_len;
- cpt_inst_w4.s.param2 = auth_data_len;
-
/*
* In cn9k, cn10k since we have a limitation of
* IV & Offset control word not part of instruction
@@ -1738,6 +1735,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
/* Use Direct mode */
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN;
offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len);
/* DPTR */
@@ -1753,6 +1751,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
*offset_vaddr = offset_ctrl;
inst->w4.u64 = cpt_inst_w4.u64;
} else {
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN | ROC_DMA_MODE_SG;
inst->w4.u64 = cpt_inst_w4.u64;
if (is_sg_ver2)
ret = sg2_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, pack_iv,
@@ -2243,8 +2242,6 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
c_form->key.length)))
return -1;
- if ((enc_type >= ROC_SE_ZUC_EEA3) && (enc_type <= ROC_SE_AES_CTR_EEA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
return 0;
}
@@ -2403,15 +2400,10 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
sess->auth_iv_offset = a_form->iv.offset;
sess->auth_iv_length = a_form->iv.length;
}
- if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type,
- a_form->key.data, a_form->key.length,
- a_form->digest_length)))
+ if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type, a_form->key.data,
+ a_form->key.length, a_form->digest_length)))
return -1;
- if ((auth_type >= ROC_SE_ZUC_EIA3) &&
- (auth_type <= ROC_SE_AES_CMAC_EIA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
-
return 0;
}
--
2.25.1
* [PATCH v2 20/24] crypto/cnxk: validate the combinations supported in TLS
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Validate cipher and auth algorithm combinations, allowing only those
supported by the hardware.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_tls.c | 35 ++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 7dd61aa159..8f50d889d2 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -17,6 +17,36 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_security.h"
+static int
+tls_xform_cipher_auth_verify(struct rte_crypto_sym_xform *cipher_xform,
+ struct rte_crypto_sym_xform *auth_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = cipher_xform->cipher.algo;
+ enum rte_crypto_auth_algorithm a_algo = auth_xform->auth.algo;
+ int ret = -ENOTSUP;
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ if ((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) || (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ if (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ if ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
static int
tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
{
@@ -138,7 +168,10 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
ret = tls_xform_cipher_verify(cipher_xform);
if (!ret)
- return tls_xform_auth_verify(auth_xform);
+ ret = tls_xform_auth_verify(auth_xform);
+
+ if (cipher_xform && !ret)
+ return tls_xform_cipher_auth_verify(cipher_xform, auth_xform);
return ret;
}
--
2.25.1
* [PATCH v2 21/24] crypto/cnxk: use a single function for opad ipad
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Use a single function for opad and ipad generation for IPsec, TLS and
flexi crypto.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 65 ++++++-----------------------
drivers/common/cnxk/cnxk_security.h | 5 ---
drivers/common/cnxk/roc_se.c | 48 ++++++++++++++-------
drivers/common/cnxk/roc_se.h | 9 ++++
drivers/common/cnxk/version.map | 2 +-
drivers/crypto/cnxk/cn10k_tls.c | 8 +++-
6 files changed, 61 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index bdb04fe142..64c901a57a 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,55 +8,9 @@
#include "roc_api.h"
-void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls)
-{
- const uint8_t *key = auth_xform->auth.key.data;
- uint32_t length = auth_xform->auth.key.length;
- uint8_t opad[128] = {[0 ... 127] = 0x5c};
- uint8_t ipad[128] = {[0 ... 127] = 0x36};
- uint32_t i;
-
- /* HMAC OPAD and IPAD */
- for (i = 0; i < 128 && i < length; i++) {
- opad[i] = opad[i] ^ key[i];
- ipad[i] = ipad[i] ^ key[i];
- }
-
- /* Precompute hash of HMAC OPAD and IPAD to avoid
- * per packet computation
- */
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)&hmac_opad_ipad[64], 256);
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 384);
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 512);
- break;
- default:
- break;
- }
-}
-
static int
-ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
- uint8_t *cipher_key, uint8_t *salt_key,
- uint8_t *hmac_opad_ipad,
+ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2, uint8_t *cipher_key,
+ uint8_t *salt_key, uint8_t *hmac_opad_ipad,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
@@ -192,7 +146,9 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(w2->s.auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, &hmac_opad_ipad[0],
+ ROC_SE_IPSEC);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +697,8 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(ctl->auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, hmac_opad_ipad, ROC_SE_IPSEC);
}
switch (length) {
@@ -1374,7 +1331,9 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ out_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
@@ -1441,7 +1400,9 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ in_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 86ec657cb0..b323b8b757 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -68,9 +68,4 @@ int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec
int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
struct rte_crypto_sym_xform *crypto_xform,
struct roc_ie_on_outb_sa *out_sa);
-
-__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls);
-
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 4e00268149..5a3ed0b647 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -157,14 +157,29 @@ cpt_ciph_aes_key_type_set(struct roc_se_context *fctx, uint16_t key_len)
fctx->enc.aes_key = aes_key_type;
}
-static void
-cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
- struct roc_se_hmac_context *hmac)
+void
+roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
+ uint8_t *opad_ipad, roc_se_op_type op_type)
{
uint8_t opad[128] = {[0 ... 127] = 0x5c};
uint8_t ipad[128] = {[0 ... 127] = 0x36};
+ uint8_t ipad_offset, opad_offset;
uint32_t i;
+ if (op_type == ROC_SE_IPSEC) {
+ if ((auth_type == ROC_SE_MD5_TYPE) || (auth_type == ROC_SE_SHA1_TYPE))
+ ipad_offset = 24;
+ else
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else if (op_type == ROC_SE_TLS) {
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else {
+ ipad_offset = 0;
+ opad_offset = 64;
+ }
+
/* HMAC OPAD and IPAD */
for (i = 0; i < 128 && i < length; i++) {
opad[i] = opad[i] ^ key[i];
@@ -176,28 +191,28 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
*/
switch (auth_type) {
case ROC_SE_MD5_TYPE:
- roc_hash_md5_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_md5_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_md5_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA1_TYPE:
- roc_hash_sha1_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_sha1_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_sha1_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA2_SHA224:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 224);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 224);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 224);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 224);
break;
case ROC_SE_SHA2_SHA256:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 256);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 256);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 256);
break;
case ROC_SE_SHA2_SHA384:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 384);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 384);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 384);
break;
case ROC_SE_SHA2_SHA512:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 512);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 512);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 512);
break;
default:
break;
@@ -401,7 +416,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint
if (chained_op) {
memset(fctx->hmac.ipad, 0, sizeof(fctx->hmac.ipad));
memset(fctx->hmac.opad, 0, sizeof(fctx->hmac.opad));
- cpt_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac);
+ roc_se_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac.ipad[0],
+ ROC_SE_FC);
fctx->enc.auth_input_type = 0;
} else {
se_ctx->hmac = 1;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d62c40b310..ddcf6bdb44 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -191,6 +191,12 @@ typedef enum {
ROC_SE_PDCP_MAC_LEN_128_BIT = 0x3
} roc_se_pdcp_mac_len_type;
+typedef enum {
+ ROC_SE_IPSEC = 0x0,
+ ROC_SE_TLS = 0x1,
+ ROC_SE_FC = 0x2,
+} roc_se_op_type;
+
struct roc_se_enc_context {
uint64_t iv_source : 1;
uint64_t aes_key : 2;
@@ -401,4 +407,7 @@ int __roc_api roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type
void __roc_api roc_se_ctx_swap(struct roc_se_ctx *se_ctx);
void __roc_api roc_se_ctx_init(struct roc_se_ctx *se_ctx);
+void __roc_api roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key,
+ uint16_t length, uint8_t *opad_ipad,
+ roc_se_op_type op_type);
#endif /* __ROC_SE_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 15fd5710d2..b8b0478848 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,7 +1,6 @@
INTERNAL {
global:
- cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
@@ -472,6 +471,7 @@ INTERNAL {
roc_plt_init;
roc_plt_init_cb_register;
roc_plt_lmt_validate;
+ roc_se_hmac_opad_ipad_gen;
roc_sso_dev_fini;
roc_sso_dev_init;
roc_sso_dump;
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 8f50d889d2..6f6fdf95ee 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -376,7 +376,9 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+
tmp = (uint64_t *)read_sa->opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -503,7 +505,9 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ ROC_SE_TLS);
}
tmp_key = (uint64_t *)write_sa->opad_ipad;
--
2.25.1
* [PATCH v2 22/24] crypto/cnxk: add support for TLS 1.3
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS-1.3.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_ie_ot_tls.h | 50 +++++--
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 3 +-
drivers/crypto/cnxk/cn10k_tls.c | 159 +++++++++++++---------
3 files changed, 136 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
index 61955ef4d1..91ddb25f7a 100644
--- a/drivers/common/cnxk/roc_ie_ot_tls.h
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -17,8 +17,10 @@
(PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
/* CN10K TLS opcodes */
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC 0x18UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC 0x19UL
#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
@@ -42,6 +44,7 @@ enum roc_ie_ot_tls_cipher_type {
enum roc_ie_ot_tls_ver {
ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+ ROC_IE_OT_TLS_VERSION_TLS_13 = 3,
};
enum roc_ie_ot_tls_aes_key_len {
@@ -131,11 +134,23 @@ struct roc_ie_ot_tls_read_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd6;
+
+ /* Word11 - Word25 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 - Word32 */
- struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ /* Word26 - Word95 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_12;
+ };
};
struct roc_ie_ot_tls_write_sa {
@@ -187,13 +202,24 @@ struct roc_ie_ot_tls_write_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd7;
+
+ uint64_t seq_num;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 */
- uint64_t w26_rsvd7;
+ /* Word26 */
+ uint64_t w26_rsvd7;
- /* Word27 */
- uint64_t seq_num;
+ /* Word27 */
+ uint64_t seq_num;
+ } tls_12;
+ };
};
#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 33fd3aa398..1e117051cc 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -31,8 +31,7 @@ struct cn10k_sec_session {
} ipsec;
struct {
uint8_t enable_padding : 1;
- uint8_t hdr_len : 4;
- uint8_t rvsd : 3;
+ uint8_t rvsd : 7;
bool is_write;
} tls;
};
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 6f6fdf95ee..1c1d2e9ece 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -105,7 +105,8 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
int ret = 0;
if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
- (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_3))
return -EINVAL;
if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
@@ -115,6 +116,12 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
return tls_xform_aead_verify(tls_xform, crypto_xform);
+ /* TLS-1.3 only supports AEAD.
+ * Control should not reach here for TLS-1.3.
+ */
+ if (tls_xform->ver == RTE_SECURITY_VERSION_TLS_1_3)
+ return -EINVAL;
+
if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
/* Egress */
@@ -259,7 +266,7 @@ tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -274,7 +281,7 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -283,13 +290,18 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
}
static size_t
-tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa, enum rte_security_tls_version tls_ver)
{
size_t size;
/* Variable based on Anti-replay Window */
- size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
- offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ } else {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ }
if (sa->w0.s.ar_win)
size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
@@ -302,6 +314,7 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint64_t *tmp, *tmp_key;
@@ -313,13 +326,22 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
/* Initialize the SA */
memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->tls_12.ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ read_sa->tls_13.ctx.ar_valid_mask = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = read_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -330,10 +352,12 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -377,9 +401,10 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+ auth_xfrm->auth.key.length, read_sa->tls_12.opad_ipad,
+ ROC_SE_TLS);
- tmp = (uint64_t *)read_sa->opad_ipad;
+ tmp = (uint64_t *)read_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -403,24 +428,20 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
read_sa->w0.s.aop_valid = 1;
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx);
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa, tls_ver), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
/* Word offset for HW managed CTX field */
read_sa->w0.s.hw_ctx_off = offset / 8;
read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
- /* Entire context size in 128B units */
- read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
- ROC_CTX_UNIT_128B) -
- 1;
-
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- }
-
rte_wmb();
return 0;
@@ -431,6 +452,7 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint8_t *cipher_key;
@@ -438,13 +460,25 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
int i, length = 0;
size_t offset;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->tls_12.seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->tls_12.seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->tls_12.seq_num -= 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ write_sa->tls_13.seq_num = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = write_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -455,10 +489,12 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -506,11 +542,11 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ auth_xfrm->auth.key.length, write_sa->tls_12.opad_ipad,
ROC_SE_TLS);
}
- tmp_key = (uint64_t *)write_sa->opad_ipad;
+ tmp_key = (uint64_t *)write_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
@@ -520,40 +556,37 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
-
- /* Word offset for HW managed CTX field */
- write_sa->w0.s.hw_ctx_off = offset / 8;
- write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
-
/* Entire context size in 128B units */
write_sa->w0.s.ctx_size =
(PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
ROC_CTX_UNIT_128B) -
1;
- write_sa->w0.s.aop_valid = 1;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
- (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
- write_sa->seq_num -= 1;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_13.w10_rsvd7);
+ write_sa->w0.s.ctx_size -= 1;
}
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ write_sa->w0.s.aop_valid = 1;
+
write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+ if (write_sa->w2.s.version_select != ROC_IE_OT_TLS_VERSION_TLS_13) {
#ifdef LA_IPSEC_DEBUG
- if (tls_xfrm->options.iv_gen_disable == 1)
- write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
#else
- if (tls_xfrm->options.iv_gen_disable) {
- plt_err("Application provided IV is not supported");
- return -ENOTSUP;
- }
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
#endif
+ }
rte_wmb();
@@ -599,20 +632,17 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->iv_length = crypto_xfrm->auth.iv.length;
}
- if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
- sec_sess->tls.hdr_len = 13;
- else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
- sec_sess->tls.hdr_len = 5;
-
sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
- /* Enable mib counters */
- sa_dptr->w0.s.count_mib_bytes = 1;
- sa_dptr->w0.s.count_mib_pkts = 1;
-
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
@@ -689,8 +719,13 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
-
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 23/24] crypto/cnxk: add TLS 1.3 capability
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (21 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
` (2 subsequent siblings)
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.3 record read and write capability
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 4 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 92 +++++++++++++++++++
2 files changed, 94 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index f5773bab5a..89110e0650 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,8 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
- * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
- and DTLS 1.2.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2,
+ DTLS 1.2 and TLS 1.3.
* Added PMD API to allow raw submission of instructions to CPT.
Removed Items
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 73100377d9..db50de5d58 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -40,6 +40,16 @@
RTE_DIM(sec_tls12_caps_##name)); \
} while (0)
+#define SEC_TLS13_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls13_caps_add(cnxk_caps, cur_pos, \
+ sec_tls13_caps_##name, \
+ RTE_DIM(sec_tls13_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1631,6 +1641,40 @@ static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls13_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 5,
+ .max = 5,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1760,6 +1804,26 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.3 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.3 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -2005,6 +2069,33 @@ sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
+static void
+sec_tls13_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls13_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls13_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls13_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS13_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+
+ sec_tls13_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -2016,6 +2107,7 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls13_crypto_caps_populate(vf->sec_tls_1_3_crypto_caps, vf->cpt.hw_caps);
}
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v2 24/24] crypto/cnxk: add CPT SG mode debug
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (22 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
@ 2024-01-02 4:54 ` Anoob Joseph
2024-01-16 8:43 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Akhil Goyal
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
25 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-02 4:54 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Adding CPT SG mode debug dump.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 135 +++++++++++++++++++++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 ++
2 files changed, 141 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 9f4be20ff5..8991150c05 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -2,9 +2,10 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
+#include <rte_cryptodev.h>
#include <rte_event_crypto_adapter.h>
+#include <rte_hexdump.h>
#include <rte_ip.h>
#include <ethdev_driver.h>
@@ -103,6 +104,104 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+#ifdef CPT_INST_DEBUG_ENABLE
+static inline void
+cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sg2list_comp *sg_ptr = NULL;
+ uint16_t list_cnt = 0;
+ char suffix[64];
+ int i, j;
+
+ sg_ptr = (void *)in_buffer;
+ for (i = 0; i < components; i++) {
+ for (j = 0; j < sg_ptr->u.s.valid_segs; j++) {
+ list_ptr[i * 3 + j].size = sg_ptr->u.s.len[j];
+ list_ptr[i * 3 + j].vaddr = (void *)sg_ptr->ptr[j];
+ list_ptr[i * 3 + j].vaddr = list_ptr[i * 3 + j].vaddr;
+ list_cnt++;
+ }
+ sg_ptr++;
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+
+static inline void
+cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sglist_comp *sg_ptr = NULL;
+ uint16_t list_cnt, components;
+ char suffix[64];
+ int i;
+
+ sg_ptr = (void *)(in_buffer + 8);
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[2]));
+ if (!glist) {
+ components = list_cnt / 4;
+ if (list_cnt % 4)
+ components++;
+ sg_ptr += components;
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[3]));
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+ components = list_cnt / 4;
+ for (i = 0; i < components; i++) {
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 3].size = rte_be_to_cpu_16(sg_ptr->u.s.len[3]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 3].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[3]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ list_ptr[i * 4 + 3].vaddr = list_ptr[i * 4 + 3].vaddr;
+ sg_ptr++;
+ }
+
+ components = list_cnt % 4;
+ switch (components) {
+ case 3:
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ /* FALLTHROUGH */
+ case 2:
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ /* FALLTHROUGH */
+ case 1:
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ break;
+ default:
+ break;
+ }
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+#endif
+
static __rte_always_inline int __rte_hot
cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
@@ -205,6 +304,31 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
inst[0].w7.u64 = w7;
+#ifdef CPT_INST_DEBUG_ENABLE
+ infl_req->dptr = (uint8_t *)inst[0].dptr;
+ infl_req->rptr = (uint8_t *)inst[0].rptr;
+ infl_req->is_sg_ver2 = is_sg_ver2;
+ infl_req->scatter_sz = inst[0].w6.s.scatter_sz;
+ infl_req->opcode_major = inst[0].w4.s.opcode_major;
+
+ rte_hexdump(stdout, "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
+ printf("major opcode:%d\n", inst[0].w4.s.opcode_major);
+ printf("minor opcode:%d\n", inst[0].w4.s.opcode_minor);
+ printf("param1:%d\n", inst[0].w4.s.param1);
+ printf("param2:%d\n", inst[0].w4.s.param2);
+ printf("dlen:%d\n", inst[0].w4.s.dlen);
+
+ if (is_sg_ver2) {
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+ } else {
+ if (infl_req->opcode_major >> 7) {
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 1);
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 0);
+ }
+ }
+#endif
+
return 1;
}
@@ -935,6 +1059,15 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
}
if (likely(compcode == CPT_COMP_GOOD)) {
+#ifdef CPT_INST_DEBUG_ENABLE
+ if (infl_req->is_sg_ver2)
+ cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+ else {
+ if (infl_req->opcode_major >> 7)
+ cpt_request_data_sg_mode_dump(infl_req->dptr, 0);
+ }
+#endif
+
if (unlikely(uc_compcode)) {
if (uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c6bb8023ea..e7bba25cb8 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -51,6 +51,13 @@ struct cpt_inflight_req {
};
void *mdata;
uint8_t op_flags;
+#ifdef CPT_INST_DEBUG_ENABLE
+ uint8_t scatter_sz;
+ uint8_t opcode_major;
+ uint8_t is_sg_ver2;
+ uint8_t *dptr;
+ uint8_t *rptr;
+#endif
void *qp;
} __rte_aligned(ROC_ALIGN);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* RE: [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside
2024-01-02 4:54 ` [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
@ 2024-01-16 8:07 ` Akhil Goyal
0 siblings, 0 replies; 78+ messages in thread
From: Akhil Goyal @ 2024-01-16 8:07 UTC (permalink / raw)
To: Anoob Joseph
Cc: Vidya Sagar Velumuri, Jerin Jacob Kollanukkaran, Tejasree Kondoj, dev
> Subject: [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside
>
> From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
>
> Add Rx inject fastpath API.
> Add devargs to specify an LF to be used for Rx inject.
It is better to specify the name of the devarg in the description.
> When the RX inject feature flag is enabled:
> 1. Reserve a CPT LF to use for RX Inject mode.
> 2. Enable RXC and disable full packet mode for that LF.
>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> doc/guides/cryptodevs/cnxk.rst | 12 ++
Should we also update the cn10k.ini file for supporting the new feature flag?
> doc/guides/rel_notes/release_24_03.rst | 3 +
> drivers/common/cnxk/hw/cpt.h | 9 ++
> drivers/common/cnxk/roc_cpt.c | 11 +-
> drivers/common/cnxk/roc_cpt.h | 3 +-
> drivers/common/cnxk/roc_cpt_priv.h | 2 +-
> drivers/common/cnxk/roc_ie_ot.c | 14 +--
> drivers/common/cnxk/roc_mbox.h | 2 +
> drivers/common/cnxk/roc_nix_inl.c | 2 +-
> drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
> drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 124 +++++++++++++++++++
> drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 8 ++
> drivers/crypto/cnxk/cn10k_ipsec.c | 4 +
> drivers/crypto/cnxk/cn10k_ipsec.h | 2 +
> drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
> drivers/crypto/cnxk/cnxk_cryptodev.h | 3 +
> drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +++++
> drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 27 +++-
> drivers/crypto/cnxk/version.map | 3 +
> 19 files changed, 250 insertions(+), 15 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
> index fbe67475be..8dc745dccd 100644
> --- a/doc/guides/cryptodevs/cnxk.rst
> +++ b/doc/guides/cryptodevs/cnxk.rst
> @@ -187,6 +187,18 @@ Runtime Config Options
> With the above configuration, the number of maximum queue pairs supported
> by the device is limited to 4.
>
> +- ``LF ID for RX injection in case of fallback mechanism`` (default ``60``)
> +
> + LF ID for RX Injection in fallback mechanism of security.
> + Can be configured during runtime by using ``rx_inj_lf`` ``devargs`` parameter.
Can we rename it to rx_inject_lf to improve readability for the user?
Or could this be rx_inject_qp, since the LF term is not exposed to DPDK users?
And we map it to a qp internally, right?
> +
> + For example::
> +
> + -a 0002:20:00.1,rx_inj_lf=20
> +
> + With the above configuration, LF 20 will be used by the device for RX
> Injection
> + in security in fallback mechanism secnario.
Spell check: "secnario" should be "scenario".
> +
> Debugging Options
> -----------------
>
> diff --git a/doc/guides/rel_notes/release_24_03.rst
> b/doc/guides/rel_notes/release_24_03.rst
> index e9c9717706..fa30b46ead 100644
> --- a/doc/guides/rel_notes/release_24_03.rst
> +++ b/doc/guides/rel_notes/release_24_03.rst
> @@ -55,6 +55,9 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Updated Marvell cnxk crypto driver.**
> +
> + * Added support for Rx inject in crypto_cn10k.
>
Add an extra line here.
^ permalink raw reply [flat|nested] 78+ messages in thread
* RE: [PATCH v2 00/24] Fixes and improvements in crypto cnxk
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (23 preceding siblings ...)
2024-01-02 4:54 ` [PATCH v2 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
@ 2024-01-16 8:43 ` Akhil Goyal
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
25 siblings, 0 replies; 78+ messages in thread
From: Akhil Goyal @ 2024-01-16 8:43 UTC (permalink / raw)
To: Anoob Joseph
Cc: Jerin Jacob Kollanukkaran, Vidya Sagar Velumuri, Tejasree Kondoj, dev
> Subject: [PATCH v2 00/24] Fixes and improvements in crypto cnxk
>
> Add following features
> - TLS record processing offload (TLS 1.2-1.3, DTLS 1.2)
> - Rx inject to allow lookaside packets to be injected to ethdev Rx
> - Use PDCP_CHAIN opcode instead of PDCP opcode for cipher-only and auth
> only cases
> - PMD API to submit instructions directly to hardware
>
> Changes in v2
> - Addressed checkpatch issue
> - Addressed build error with stdatomic
>
> Aakash Sasidharan (1):
> crypto/cnxk: enable digest gen for zero len input
>
> Akhil Goyal (1):
> common/cnxk: fix memory leak
>
> Anoob Joseph (6):
> crypto/cnxk: use common macro
> crypto/cnxk: return microcode completion code
> common/cnxk: update opad-ipad gen to handle TLS
> common/cnxk: add TLS record contexts
> crypto/cnxk: separate IPsec from security common code
> crypto/cnxk: add PMD APIs for raw submission to CPT
>
> Gowrishankar Muthukrishnan (1):
> crypto/cnxk: fix ECDH pubkey verify in cn9k
>
> Rahul Bhansali (2):
> common/cnxk: add Rx inject configs
> crypto/cnxk: Rx inject config update
>
> Tejasree Kondoj (3):
> crypto/cnxk: fallback to SG if headroom is not available
> crypto/cnxk: replace PDCP with PDCP chain opcode
> crypto/cnxk: add CPT SG mode debug
>
> Vidya Sagar Velumuri (10):
> crypto/cnxk: enable Rx inject in security lookaside
> crypto/cnxk: enable Rx inject for 103
> crypto/cnxk: rename security caps as IPsec security caps
> crypto/cnxk: add TLS record session ops
> crypto/cnxk: add TLS record datapath handling
> crypto/cnxk: add TLS capability
> crypto/cnxk: validate the combinations supported in TLS
> crypto/cnxk: use a single function for opad ipad
> crypto/cnxk: add support for TLS 1.3
> crypto/cnxk: add TLS 1.3 capability
Apart from the comment on the 7/24 patch,
Series Acked-by: Akhil Goyal <gakhil@marvell.com>

^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 00/24] Fixes and improvements in crypto cnxk
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
` (24 preceding siblings ...)
2024-01-16 8:43 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Akhil Goyal
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 01/24] common/cnxk: fix memory leak Anoob Joseph
` (24 more replies)
25 siblings, 25 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add following features
- TLS record processing offload (TLS 1.2-1.3, DTLS 1.2)
- Rx inject to allow lookaside packets to be injected to ethdev Rx
- Use PDCP_CHAIN opcode instead of PDCP opcode for cipher-only and auth
only cases
- PMD API to submit instructions directly to hardware
Changes in v3
- Addressed Akhil's comments on Rx inject patch
- Updated license year to 2024
Changes in v2
- Addressed checkpatch issue
- Addressed build error with stdatomic
Aakash Sasidharan (1):
crypto/cnxk: enable digest gen for zero len input
Akhil Goyal (1):
common/cnxk: fix memory leak
Anoob Joseph (6):
crypto/cnxk: use common macro
crypto/cnxk: return microcode completion code
common/cnxk: update opad-ipad gen to handle TLS
common/cnxk: add TLS record contexts
crypto/cnxk: separate IPsec from security common code
crypto/cnxk: add PMD APIs for raw submission to CPT
Gowrishankar Muthukrishnan (1):
crypto/cnxk: fix ECDH pubkey verify in cn9k
Rahul Bhansali (2):
common/cnxk: add Rx inject configs
crypto/cnxk: Rx inject config update
Tejasree Kondoj (3):
crypto/cnxk: fallback to SG if headroom is not available
crypto/cnxk: replace PDCP with PDCP chain opcode
crypto/cnxk: add CPT SG mode debug
Vidya Sagar Velumuri (10):
crypto/cnxk: enable Rx inject in security lookaside
crypto/cnxk: enable Rx inject for 103
crypto/cnxk: rename security caps as IPsec security caps
crypto/cnxk: add TLS record session ops
crypto/cnxk: add TLS record datapath handling
crypto/cnxk: add TLS capability
crypto/cnxk: validate the combinations supported in TLS
crypto/cnxk: use a single function for opad ipad
crypto/cnxk: add support for TLS 1.3
crypto/cnxk: add TLS 1.3 capability
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/cryptodevs/cnxk.rst | 12 +
doc/guides/cryptodevs/features/cn10k.ini | 1 +
doc/guides/rel_notes/release_24_03.rst | 7 +
drivers/common/cnxk/cnxk_security.c | 65 +-
drivers/common/cnxk/cnxk_security.h | 15 +-
drivers/common/cnxk/hw/cpt.h | 12 +-
drivers/common/cnxk/roc_cpt.c | 14 +-
drivers/common/cnxk/roc_cpt.h | 7 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_idev.c | 44 +
drivers/common/cnxk/roc_idev.h | 5 +
drivers/common/cnxk/roc_idev_priv.h | 6 +
drivers/common/cnxk/roc_ie_ot.c | 14 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 225 +++++
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix.c | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/common/cnxk/roc_se.c | 379 +++-----
drivers/common/cnxk/roc_se.h | 38 +-
drivers/common/cnxk/version.map | 5 +
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 401 ++++++++-
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 11 +
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 134 +++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 68 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 134 +--
drivers/crypto/cnxk/cn10k_ipsec.h | 38 +-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 19 +-
drivers/crypto/cnxk/cn10k_tls.c | 830 ++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 +++++++
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 68 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 16 +-
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 24 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 375 +++++++-
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 128 ++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 +
drivers/crypto/cnxk/cnxk_se.h | 98 +--
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
drivers/crypto/cnxk/meson.build | 4 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +
drivers/crypto/cnxk/version.map | 3 +
48 files changed, 3018 insertions(+), 706 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 01/24] common/cnxk: fix memory leak
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 02/24] crypto/cnxk: use common macro Anoob Joseph
` (23 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Akhil Goyal <gakhil@marvell.com>
dev_init() acquires some resources which need to be cleaned up
in case a failure is observed afterwards.
Fixes: c045d2e5cbbc ("common/cnxk: add CPT configuration")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/common/cnxk/roc_cpt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 981e85a204..4e23d8c135 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -756,7 +756,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
rc = dev_init(dev, pci_dev);
if (rc) {
plt_err("Failed to init roc device");
- goto fail;
+ return rc;
}
cpt->pci_dev = pci_dev;
@@ -788,6 +788,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
return 0;
fail:
+ dev_fini(dev, pci_dev);
return rc;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 02/24] crypto/cnxk: use common macro
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 01/24] common/cnxk: fix memory leak Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
` (22 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Having different macros for the same purpose may cause issues if one is
updated without updating the other. Use the same macro by including the
common header.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index d0ad881f2f..f5374131bf 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -8,12 +8,12 @@
#include <rte_cryptodev.h>
#include <rte_security.h>
+#include "roc_ae.h"
#include "roc_cpt.h"
#define CNXK_CPT_MAX_CAPS 55
#define CNXK_SEC_CRYPTO_MAX_CAPS 16
#define CNXK_SEC_MAX_CAPS 9
-#define CNXK_AE_EC_ID_MAX 9
/**
* Device private data
*/
@@ -23,8 +23,8 @@ struct cnxk_cpt_vf {
struct rte_cryptodev_capabilities
sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
- uint64_t cnxk_fpm_iova[CNXK_AE_EC_ID_MAX];
- struct roc_ae_ec_group *ec_grp[CNXK_AE_EC_ID_MAX];
+ uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
+ struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
};
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 03/24] crypto/cnxk: fallback to SG if headroom is not available
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 01/24] common/cnxk: fix memory leak Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 02/24] crypto/cnxk: use common macro Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
` (21 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Fall back to SG mode for cn9k lookaside IPsec
if enough headroom is not available.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 85aacb803f..3d0db72775 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -82,19 +82,13 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
extend_tail = rlen - dlen;
pkt_len += extend_tail;
- if (likely(m_src->next == NULL)) {
+ if (likely((m_src->next == NULL) && (hdr_len <= data_off))) {
if (unlikely(extend_tail > rte_pktmbuf_tailroom(m_src))) {
plt_dp_err("Not enough tail room (required: %d, available: %d)",
extend_tail, rte_pktmbuf_tailroom(m_src));
return -ENOMEM;
}
- if (unlikely(hdr_len > data_off)) {
- plt_dp_err("Not enough head room (required: %d, available: %d)", hdr_len,
- rte_pktmbuf_headroom(m_src));
- return -ENOMEM;
- }
-
m_src->data_len = pkt_len;
hdr = PLT_PTR_ADD(m_src->buf_addr, data_off - hdr_len);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 04/24] crypto/cnxk: return microcode completion code
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (2 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
` (20 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Return the microcode completion code in case of errors. This allows
applications to check failure reasons at a finer granularity.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 997110e3d3..bef7b75810 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -823,6 +823,7 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
break;
default:
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
return;
}
@@ -884,6 +885,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
plt_dp_info("Request failed with microcode error");
plt_dp_info("MC completion code 0x%x",
res->uc_compcode);
+ cop->aux_flags = uc_compcode;
goto temp_sess_free;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (3 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
` (19 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal
Cc: Gowrishankar Muthukrishnan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Fix ECDH pubkey verify in cn9k.
Fixes: baae0994fa96 ("crypto/cnxk: support ECDH")
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 34d40b07d4..442cd8e5a9 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -578,7 +578,17 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
if (unlikely(res->uc_compcode)) {
if (res->uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
+ else if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION &&
+ cop->asym->ecdh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY) {
+ if (res->uc_compcode == ROC_AE_ERR_ECC_POINT_NOT_ON_CURVE) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ } else if (res->uc_compcode == ROC_AE_ERR_ECC_PAI) {
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ return;
+ }
+ } else
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
plt_dp_info("Request failed with microcode error");
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 06/24] crypto/cnxk: enable digest gen for zero len input
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (4 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
` (18 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal
Cc: Aakash Sasidharan, Jerin Jacob, Vidya Sagar Velumuri,
Tejasree Kondoj, dev
From: Aakash Sasidharan <asasidharan@marvell.com>
With zero-length input, digest generation fails by producing an
incorrect value. Fix this by skipping the gather component entirely
when the input packet has zero data length.
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
---
drivers/crypto/cnxk/cnxk_se.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..1aec7dea9f 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2479,7 +2479,7 @@ prepare_iov_from_pkt(struct rte_mbuf *pkt, struct roc_se_iov_ptr *iovec, uint32_
void *seg_data = NULL;
int32_t seg_size = 0;
- if (!pkt) {
+ if (!pkt || pkt->data_len == 0) {
iovec->buf_cnt = 0;
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 07/24] crypto/cnxk: enable Rx inject in security lookaside
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (5 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 08/24] common/cnxk: add Rx inject configs Anoob Joseph
` (17 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add Rx inject fastpath API.
Add devargs "rx_inject_qp" to specify the QP to be used for Rx inject.
When the Rx inject feature flag is enabled:
1. Reserve a queue pair for Rx inject mode.
2. Enable RXC and disable full packet mode for that queue pair.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/cryptodevs/cnxk.rst | 12 ++
doc/guides/cryptodevs/features/cn10k.ini | 1 +
doc/guides/rel_notes/release_24_03.rst | 4 +
drivers/common/cnxk/hw/cpt.h | 9 ++
drivers/common/cnxk/roc_cpt.c | 11 +-
drivers/common/cnxk/roc_cpt.h | 3 +-
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_ie_ot.c | 14 +--
drivers/common/cnxk/roc_mbox.h | 2 +
drivers/common/cnxk/roc_nix_inl.c | 2 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 124 +++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 8 ++
drivers/crypto/cnxk/cn10k_ipsec.c | 4 +
drivers/crypto/cnxk/cn10k_ipsec.h | 2 +
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
drivers/crypto/cnxk/cnxk_cryptodev.h | 3 +
drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 27 +++-
drivers/crypto/cnxk/version.map | 3 +
20 files changed, 252 insertions(+), 15 deletions(-)
diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index fbe67475be..09328927cd 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -187,6 +187,18 @@ Runtime Config Options
With the above configuration, the number of maximum queue pairs supported
by the device is limited to 4.
+- ``QP ID for Rx injection in case of fallback mechanism`` (default ``60``)
+
+ QP ID to be used for Rx injection in the security fallback mechanism.
+ Can be configured at runtime using the ``rx_inject_qp`` ``devargs`` parameter.
+
+ For example::
+
+ -a 0002:20:00.1,rx_inject_qp=20
+
+ With the above configuration, QP 20 will be used by the device for Rx injection
+ in the security fallback mechanism scenario.
+
Debugging Options
-----------------
diff --git a/doc/guides/cryptodevs/features/cn10k.ini b/doc/guides/cryptodevs/features/cn10k.ini
index ea8a22eb46..e52c313111 100644
--- a/doc/guides/cryptodevs/features/cn10k.ini
+++ b/doc/guides/cryptodevs/features/cn10k.ini
@@ -19,6 +19,7 @@ RSA PRIV OP KEY QT = Y
Digest encrypted = Y
Sym raw data path API = Y
Inner checksum = Y
+Rx Injection = Y
;
; Supported crypto algorithms of 'cn10k' crypto driver.
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index e9c9717706..eb63728cfd 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Marvell cnxk crypto driver.**
+
+ * Added support for Rx inject in crypto_cn10k.
+
Removed Items
-------------
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index cf9046bbfb..edab8a5d83 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -237,6 +237,15 @@ struct cpt_inst_s {
uint64_t doneint : 1;
uint64_t nixtx_addr : 60;
} s;
+ struct {
+ uint64_t nixtxl : 3;
+ uint64_t doneint : 1;
+ uint64_t chan : 12;
+ uint64_t l2_len : 8;
+ uint64_t et_offset : 8;
+ uint64_t match_id : 16;
+ uint64_t sso_pf_func : 16;
+ } hw_s;
uint64_t u64;
} w0;
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 4e23d8c135..9f283ceb2e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -463,7 +463,7 @@ cpt_available_lfs_get(struct dev *dev, uint16_t *nb_lf)
int
cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen)
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inject_qp)
{
struct cpt_lf_alloc_req_msg *req;
struct mbox *mbox = mbox_get(dev->mbox);
@@ -489,6 +489,10 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, bool inl_dev
req->blkaddr = blkaddr;
req->ctx_ilen_valid = ctx_ilen_valid;
req->ctx_ilen = ctx_ilen;
+ if (rxc_ena) {
+ req->rxc_ena = 1;
+ req->rxc_ena_lf_id = rx_inject_qp;
+ }
rc = mbox_process(mbox);
exit:
@@ -586,7 +590,7 @@ cpt_iq_init(struct roc_cpt_lf *lf)
}
int
-roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
+roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena, uint16_t rx_inject_qp)
{
struct cpt *cpt = roc_cpt_to_cpt_priv(roc_cpt);
uint8_t blkaddr[ROC_CPT_MAX_BLKS];
@@ -630,7 +634,8 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
ctx_ilen = (PLT_ALIGN(ROC_OT_IPSEC_SA_SZ_MAX, ROC_ALIGN) / 128) - 1;
}
- rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen);
+ rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false, ctx_ilen_valid, ctx_ilen,
+ rxc_ena, rx_inject_qp);
if (rc)
goto lfs_detach;
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 787bccb27d..9d1173d88a 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -171,7 +171,8 @@ int __roc_api roc_cpt_dev_init(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_dev_fini(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_eng_grp_add(struct roc_cpt *roc_cpt,
enum cpt_eng_type eng_type);
-int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf);
+int __roc_api roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf, bool rxc_ena,
+ uint16_t rx_inject_qp);
void __roc_api roc_cpt_dev_clear(struct roc_cpt *roc_cpt);
int __roc_api roc_cpt_lf_init(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf);
void __roc_api roc_cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 4ed87c857b..0bd956e373 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -22,7 +22,7 @@ int cpt_lfs_attach(struct dev *dev, uint8_t blkaddr, bool modify,
uint16_t nb_lf);
int cpt_lfs_detach(struct dev *dev);
int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blk, bool inl_dev_sso,
- bool ctx_ilen_valid, uint8_t ctx_ilen);
+ bool ctx_ilen_valid, uint8_t ctx_ilen, bool rxc_ena, uint16_t rx_inject_qp);
int cpt_lfs_free(struct dev *dev);
int cpt_lf_init(struct roc_cpt_lf *lf);
void cpt_lf_fini(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad38f1..465b2bc1fb 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -12,13 +12,13 @@ roc_ot_ipsec_inb_sa_init(struct roc_ot_ipsec_inb_sa *sa, bool is_inline)
memset(sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
- if (is_inline) {
- sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
- sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
- sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
- sa->w0.s.et_ovrwr = 1;
- sa->w2.s.l3hdr_on_err = 1;
- }
+ sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
+ sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
+ sa->w0.s.pkind = ROC_IE_OT_CPT_PKIND;
+ sa->w0.s.et_ovrwr = 1;
+ sa->w2.s.l3hdr_on_err = 1;
+
+ PLT_SET_USED(is_inline);
offset = offsetof(struct roc_ot_ipsec_inb_sa, ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 05434aec5a..0ad8b738c6 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -2022,6 +2022,8 @@ struct cpt_lf_alloc_req_msg {
uint8_t __io blkaddr;
uint8_t __io ctx_ilen_valid : 1;
uint8_t __io ctx_ilen : 7;
+ uint8_t __io rxc_ena : 1;
+ uint8_t __io rxc_ena_lf_id : 7;
};
#define CPT_INLINE_INBOUND 0
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 750fd08355..07a90133ca 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -986,7 +986,7 @@ roc_nix_inl_outb_init(struct roc_nix *roc_nix)
1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr,
- !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen);
+ !roc_nix->ipsec_out_sso_pffunc, ctx_ilen_valid, ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
goto lf_detach;
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index dc1306c093..f6991de051 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -194,7 +194,7 @@ nix_inl_cpt_setup(struct nix_inl_dev *inl_dev, bool inl_dev_sso)
}
rc = cpt_lfs_alloc(dev, eng_grpmask, RVU_BLOCK_ADDR_CPT0, inl_dev_sso, ctx_ilen_valid,
- ctx_ilen);
+ ctx_ilen, false, 0);
if (rc) {
plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
return rc;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index bef7b75810..e656f47693 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -7,6 +7,8 @@
#include <rte_event_crypto_adapter.h>
#include <rte_ip.h>
+#include <ethdev_driver.h>
+
#include "roc_cpt.h"
#if defined(__aarch64__)
#include "roc_io.h"
@@ -1057,6 +1059,104 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
return i;
}
+uint16_t __rte_hot
+cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ uint16_t l2_len, pf_func, lmt_id, count = 0;
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cn10k_sec_session *sec_sess;
+ struct rte_cryptodev *cdev = dev;
+ union cpt_res_s *hw_res = NULL;
+ struct cpt_inst_s *inst;
+ struct cnxk_cpt_vf *vf;
+ struct rte_mbuf *m;
+ uint64_t dptr;
+ int i;
+
+ const union cpt_res_s res = {
+ .cn10k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ vf = cdev->data->dev_private;
+
+ lmt_base = vf->rx_inj_lmtline.lmt_base;
+ io_addr = vf->rx_inj_lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ pf_func = vf->rx_inj_pf_func;
+
+again:
+ inst = (struct cpt_inst_s *)lmt_base;
+ for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+
+ m = pkts[i];
+ sec_sess = (struct cn10k_sec_session *)sess[i];
+
+ if (unlikely(rte_pktmbuf_headroom(m) < 32)) {
+ plt_dp_err("No space for CPT res_s");
+ break;
+ }
+
+ if (unlikely(!rte_pktmbuf_is_contiguous(m))) {
+ plt_dp_err("Multi seg is not supported");
+ break;
+ }
+
+ l2_len = m->l2_len;
+
+ *rte_security_dynfield(m) = (uint64_t)sec_sess->userdata;
+
+ hw_res = rte_pktmbuf_mtod(m, void *);
+ hw_res = RTE_PTR_SUB(hw_res, 32);
+ hw_res = RTE_PTR_ALIGN_CEIL(hw_res, 16);
+
+ /* Prepare CPT instruction */
+ inst->w0.u64 = 0;
+ inst->w2.u64 = 0;
+ inst->w2.s.rvu_pf_func = pf_func;
+ inst->w3.u64 = (((uint64_t)m + sizeof(struct rte_mbuf)) >> 3) << 3 | 1;
+
+ inst->w4.u64 = sec_sess->inst.w4 | (rte_pktmbuf_pkt_len(m));
+ dptr = (uint64_t)rte_pktmbuf_iova(m);
+ inst->dptr = dptr;
+ inst->rptr = dptr;
+
+ inst->w0.hw_s.l2_len = l2_len;
+ inst->w0.hw_s.et_offset = l2_len - 2;
+
+ inst->res_addr = (uint64_t)hw_res;
+ rte_atomic_store_explicit((unsigned long __rte_atomic *)&hw_res->u64[0], res.u64[0],
+ rte_memory_order_relaxed);
+
+ inst->w7.u64 = sec_sess->inst.w7;
+
+ inst += 2;
+ }
+
+ if (i > PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ nb_pkts -= i;
+ pkts += i;
+ count += i;
+ goto again;
+ }
+
+ return count + i;
+}
+
void
cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf)
{
@@ -1535,6 +1635,30 @@ cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
return 0;
}
+int
+cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
+{
+ struct rte_cryptodev *crypto_dev = device;
+ struct rte_eth_dev *eth_dev;
+ int ret;
+
+ if (!rte_eth_dev_is_valid_port(port_id))
+ return -EINVAL;
+
+ if (!(crypto_dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT))
+ return -ENOTSUP;
+
+ eth_dev = &rte_eth_devices[port_id];
+
+ ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
+ if (ret)
+ return -ENOTSUP;
+
+ RTE_SET_USED(enable);
+
+ return 0;
+}
+
struct rte_cryptodev_ops cn10k_cpt_ops = {
/* Device control ops */
.dev_configure = cnxk_cpt_dev_config,
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index befbfcdfad..34becede3c 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -16,6 +16,14 @@ extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
+__rte_internal
+uint16_t __rte_hot cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess,
+ uint16_t nb_pkts);
+
+__rte_internal
+int cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable);
+
__rte_internal
uint16_t __rte_hot cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[],
uint16_t nb_events);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ffd3f50eed..2d098fdd24 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -10,6 +10,7 @@
#include <rte_security_driver.h>
#include <rte_udp.h>
+#include "cn10k_cryptodev_ops.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -297,6 +298,7 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
return -ENOTSUP;
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
return cn10k_ipsec_session_create(device, &conf->ipsec,
conf->crypto_xform, sess);
}
@@ -458,4 +460,6 @@ cn10k_sec_ops_override(void)
cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 8a93d74062..03ac994001 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -38,6 +38,8 @@ struct cn10k_sec_session {
bool is_outbound;
/** Queue pair */
struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
/**
* End of SW mutable area
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 4819a14184..b1684e56a7 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,6 +24,9 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
+ if (roc_model_is_cn10ka_b0())
+ ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
+
return ff;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index f5374131bf..1ded8911a1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -18,6 +18,8 @@
* Device private data
*/
struct cnxk_cpt_vf {
+ struct roc_cpt_lmtline rx_inj_lmtline;
+ uint16_t rx_inj_pf_func;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
@@ -26,6 +28,7 @@ struct cnxk_cpt_vf {
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
uint16_t max_qps_limit;
+ uint16_t rx_inject_qp;
};
uint64_t cnxk_cpt_default_ff_get(void);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
index c3e9bdb2d1..adf1ba0543 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_devargs.c
@@ -9,6 +9,23 @@
#define CNXK_MAX_QPS_LIMIT "max_qps_limit"
#define CNXK_MAX_QPS_LIMIT_MIN 1
#define CNXK_MAX_QPS_LIMIT_MAX (ROC_CPT_MAX_LFS - 1)
+#define CNXK_RX_INJECT_QP "rx_inject_qp"
+
+static int
+parse_rx_inject_qp(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ val = atoi(value);
+
+ if (val < CNXK_MAX_QPS_LIMIT_MIN || val > CNXK_MAX_QPS_LIMIT_MAX)
+ return -EINVAL;
+
+ *(uint16_t *)extra_args = val;
+
+ return 0;
+}
static int
parse_max_qps_limit(const char *key, const char *value, void *extra_args)
@@ -31,8 +48,12 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
{
uint16_t max_qps_limit = CNXK_MAX_QPS_LIMIT_MAX;
struct rte_kvargs *kvlist;
+ uint16_t rx_inject_qp;
int rc;
+ /* Set to the max value so that the feature is disabled by default. */
+ rx_inject_qp = CNXK_MAX_QPS_LIMIT_MAX;
+
if (devargs == NULL)
goto null_devargs;
@@ -48,10 +69,20 @@ cnxk_cpt_parse_devargs(struct rte_devargs *devargs, struct cnxk_cpt_vf *vf)
rte_kvargs_free(kvlist);
goto exit;
}
+
+ rc = rte_kvargs_process(kvlist, CNXK_RX_INJECT_QP, parse_rx_inject_qp, &rx_inject_qp);
+ if (rc < 0) {
+ plt_err("rx_inject_qp should be in the range <%d-%d>", CNXK_MAX_QPS_LIMIT_MIN,
+ max_qps_limit - 1);
+ rte_kvargs_free(kvlist);
+ goto exit;
+ }
+
rte_kvargs_free(kvlist);
null_devargs:
vf->max_qps_limit = max_qps_limit;
+ vf->rx_inject_qp = rx_inject_qp;
return 0;
exit:
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 82938c77c8..cdcfa92e6d 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -5,6 +5,7 @@
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_errno.h>
+#include <rte_security_driver.h>
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
@@ -95,6 +96,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
uint16_t nb_lf_avail, nb_lf;
+ bool rxc_ena = false;
int ret;
/* If this is a reconfigure attempt, clear the device and configure again */
@@ -111,7 +113,13 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (nb_lf > nb_lf_avail)
return -ENOTSUP;
- ret = roc_cpt_dev_configure(roc_cpt, nb_lf);
+ if (dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) {
+ if (rte_security_dynfield_register() < 0)
+ return -ENOTSUP;
+ rxc_ena = true;
+ }
+
+ ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inject_qp);
if (ret) {
plt_err("Could not configure device");
return ret;
@@ -208,6 +216,10 @@ cnxk_cpt_dev_info_get(struct rte_cryptodev *dev,
info->sym.max_nb_sessions = 0;
info->min_mbuf_headroom_req = CNXK_CPT_MIN_HEADROOM_REQ;
info->min_mbuf_tailroom_req = CNXK_CPT_MIN_TAILROOM_REQ;
+
+ /* Disable Rx inject if the QP ID is beyond the available queue pairs. */
+ if (vf->rx_inject_qp > info->max_nb_queue_pairs)
+ info->feature_flags &= ~RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
}
static void
@@ -452,6 +464,19 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
qp->sess_mp = conf->mp_session;
dev->data->queue_pairs[qp_id] = qp;
+ if (qp_id == vf->rx_inject_qp) {
+ ret = roc_cpt_lmtline_init(roc_cpt, &vf->rx_inj_lmtline, vf->rx_inject_qp);
+ if (ret) {
+ plt_err("Could not init lmtline Rx inject");
+ goto exit;
+ }
+
+ vf->rx_inj_pf_func = qp->lf.pf_func;
+
+ /* Block the queue for other submissions */
+ qp->pend_q.pq_mask = 0;
+ }
+
return 0;
exit:
diff --git a/drivers/crypto/cnxk/version.map b/drivers/crypto/cnxk/version.map
index d13209feec..5789a6bfc9 100644
--- a/drivers/crypto/cnxk/version.map
+++ b/drivers/crypto/cnxk/version.map
@@ -8,5 +8,8 @@ INTERNAL {
cn10k_cpt_crypto_adapter_dequeue;
cn10k_cpt_crypto_adapter_vector_dequeue;
+ cn10k_cryptodev_sec_inb_rx_inject;
+ cn10k_cryptodev_sec_rx_inject_configure;
+
local: *;
};
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 08/24] common/cnxk: add Rx inject configs
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (6 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
` (16 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
Add Rx inject configuration to enable/disable the feature, and store
the Rx channel value per port.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/common/cnxk/roc_idev.c | 44 +++++++++++++++++++++++++++++
drivers/common/cnxk/roc_idev.h | 5 ++++
drivers/common/cnxk/roc_idev_priv.h | 6 ++++
drivers/common/cnxk/roc_nix.c | 2 ++
drivers/common/cnxk/version.map | 4 +++
5 files changed, 61 insertions(+)
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index e6c6b34d78..48df3518b0 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -310,3 +310,47 @@ roc_idev_nix_inl_meta_aura_get(void)
return idev->inl_cfg.meta_aura;
return 0;
}
+
+uint8_t
+roc_idev_nix_rx_inject_get(uint16_t port)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ return idev->inl_rx_inj_cfg.rx_inject_en[port];
+
+ return 0;
+}
+
+void
+roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.rx_inject_en[port], enable,
+ __ATOMIC_RELEASE);
+}
+
+uint16_t *
+roc_idev_nix_rx_chan_base_get(void)
+{
+ struct idev_cfg *idev = idev_get_cfg();
+
+ if (idev != NULL)
+ return (uint16_t *)&idev->inl_rx_inj_cfg.chan;
+
+ return NULL;
+}
+
+void
+roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL && port < PLT_MAX_ETHPORTS)
+ __atomic_store_n(&idev->inl_rx_inj_cfg.chan[port], chan, __ATOMIC_RELEASE);
+}
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
index aea7f5279d..00664eaed6 100644
--- a/drivers/common/cnxk/roc_idev.h
+++ b/drivers/common/cnxk/roc_idev.h
@@ -22,4 +22,9 @@ struct roc_nix_list *__roc_api roc_idev_nix_list_get(void);
struct roc_mcs *__roc_api roc_idev_mcs_get(uint8_t mcs_idx);
void __roc_api roc_idev_mcs_set(struct roc_mcs *mcs);
void __roc_api roc_idev_mcs_free(struct roc_mcs *mcs);
+
+uint8_t __roc_api roc_idev_nix_rx_inject_get(uint16_t port);
+void __roc_api roc_idev_nix_rx_inject_set(uint16_t port, uint8_t enable);
+uint16_t *__roc_api roc_idev_nix_rx_chan_base_get(void);
+void __roc_api roc_idev_nix_rx_chan_set(uint16_t port, uint16_t chan);
#endif /* _ROC_IDEV_H_ */
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 80f8465e1c..8dc1cb25bf 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -19,6 +19,11 @@ struct idev_nix_inl_cfg {
uint32_t refs;
};
+struct idev_nix_inl_rx_inj_cfg {
+ uint16_t chan[PLT_MAX_ETHPORTS];
+ uint8_t rx_inject_en[PLT_MAX_ETHPORTS];
+};
+
struct idev_cfg {
uint16_t sso_pf_func;
uint16_t npa_pf_func;
@@ -35,6 +40,7 @@ struct idev_cfg {
struct nix_inl_dev *nix_inl_dev;
struct idev_nix_inl_cfg inl_cfg;
struct roc_nix_list roc_nix_list;
+ struct idev_nix_inl_rx_inj_cfg inl_rx_inj_cfg;
plt_spinlock_t nix_inl_dev_lock;
plt_spinlock_t npa_dev_lock;
};
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index f64933a1d9..97c0ae3e25 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -223,6 +223,8 @@ roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq,
nix->nb_rx_queues = nb_rxq;
nix->nb_tx_queues = nb_txq;
+ roc_idev_nix_rx_chan_set(roc_nix->port_id, rsp->rx_chan_base);
+
nix->rqs = plt_zmalloc(sizeof(struct roc_nix_rq *) * nb_rxq, 0);
if (!nix->rqs) {
rc = -ENOMEM;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index aa884a8fe2..f84382c401 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -105,6 +105,10 @@ INTERNAL {
roc_idev_num_lmtlines_get;
roc_idev_nix_inl_meta_aura_get;
roc_idev_nix_list_get;
+ roc_idev_nix_rx_chan_base_get;
+ roc_idev_nix_rx_chan_set;
+ roc_idev_nix_rx_inject_get;
+ roc_idev_nix_rx_inject_set;
roc_ml_reg_read64;
roc_ml_reg_write64;
roc_ml_reg_read32;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
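The per-port bookkeeping this patch adds can be sketched in isolation — a minimal stand-in for `idev_nix_inl_rx_inj_cfg` using C11 atomics in place of the `__atomic` builtins. `MAX_PORTS` and the function names below are illustrative, not the driver's API; the point is the pattern: bounds-checked per-port slots, release stores from the control path, acquire loads on the reader side.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define MAX_PORTS 64 /* stand-in for PLT_MAX_ETHPORTS */

/* Per-port Rx inject state: enable flag plus the port's Rx channel,
 * written once by the control path and read by the datapath. */
static struct {
	_Atomic uint16_t chan[MAX_PORTS];
	_Atomic uint8_t rx_inject_en[MAX_PORTS];
} rx_inj_cfg;

static void rx_inject_set(uint16_t port, uint8_t enable)
{
	if (port < MAX_PORTS)
		atomic_store_explicit(&rx_inj_cfg.rx_inject_en[port], enable,
				      memory_order_release);
}

static uint8_t rx_inject_get(uint16_t port)
{
	if (port < MAX_PORTS)
		return atomic_load_explicit(&rx_inj_cfg.rx_inject_en[port],
					    memory_order_acquire);
	return 0; /* out-of-range ports read back as disabled */
}

static void rx_chan_set(uint16_t port, uint16_t chan)
{
	if (port < MAX_PORTS)
		atomic_store_explicit(&rx_inj_cfg.chan[port], chan,
				      memory_order_release);
}

static uint16_t rx_chan_get(uint16_t port)
{
	if (port < MAX_PORTS)
		return atomic_load_explicit(&rx_inj_cfg.chan[port],
					    memory_order_acquire);
	return 0;
}
```

As in the patch, out-of-range port indices are silently ignored on write and report the disabled/zero value on read, so callers need no separate validity check.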
* [PATCH v3 09/24] crypto/cnxk: Rx inject config update
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (7 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 08/24] common/cnxk: add Rx inject configs Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
` (15 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rahul Bhansali, Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
From: Rahul Bhansali <rbhansali@marvell.com>
- Update the channel in the CPT instruction from the port's Rx channel
- Set the Rx inject configuration in the idev struct
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 4 +++-
drivers/crypto/cnxk/cn10k_ipsec.c | 3 +++
drivers/crypto/cnxk/cnxk_cryptodev.h | 1 +
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index e656f47693..03ecf583af 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -15,6 +15,7 @@
#else
#include "roc_io_generic.h"
#endif
+#include "roc_idev.h"
#include "roc_sso.h"
#include "roc_sso_dp.h"
@@ -1122,6 +1123,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst->dptr = dptr;
inst->rptr = dptr;
+ inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port);
inst->w0.hw_s.l2_len = l2_len;
inst->w0.hw_s.et_offset = l2_len - 2;
@@ -1654,7 +1656,7 @@ cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool ena
if (ret)
return -ENOTSUP;
- RTE_SET_USED(enable);
+ roc_idev_nix_rx_inject_set(port_id, enable);
return 0;
}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 2d098fdd24..d08a1067ca 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -192,6 +192,9 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->is_outbound = false;
sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ /* Save index/SPI in cookie, specific required for Rx Inject */
+ sa_dptr->w1.s.cookie = 0xFFFFFFFF;
+
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_OT_INPLACE_BIT;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 1ded8911a1..5d974690fc 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -20,6 +20,7 @@
struct cnxk_cpt_vf {
struct roc_cpt_lmtline rx_inj_lmtline;
uint16_t rx_inj_pf_func;
+ uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index cdcfa92e6d..04dbc67fc1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -10,6 +10,7 @@
#include "roc_ae_fpm_tables.h"
#include "roc_cpt.h"
#include "roc_errata.h"
+#include "roc_idev.h"
#include "roc_ie_on.h"
#include "cnxk_ae.h"
@@ -117,6 +118,7 @@ cnxk_cpt_dev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *conf
if (rte_security_dynfield_register() < 0)
return -ENOTSUP;
rxc_ena = true;
+ vf->rx_chan_base = roc_idev_nix_rx_chan_base_get();
}
ret = roc_cpt_dev_configure(roc_cpt, nb_lf, rxc_ena, vf->rx_inject_qp);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
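The datapath half of this change — stamping the packet's ingress channel into CPT instruction word 0 via the cached channel table — can be sketched on its own. `struct cpt_inst_w0` and `struct pkt` below are simplified stand-ins for the real `cpt_inst_s` and `rte_mbuf`, modelling only the two fields this patch touches:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical minimal shapes: only the members used by the patch. */
struct cpt_inst_w0 { uint16_t chan; };
struct pkt { uint16_t port; };

/* The control path caches the base of the shared per-port channel table
 * once at configure time (cf. vf->rx_chan_base =
 * roc_idev_nix_rx_chan_base_get()); the datapath then indexes it by the
 * packet's ingress port, as in
 * inst->w0.hw_s.chan = *(vf->rx_chan_base + m->port). */
static void stamp_rx_chan(struct cpt_inst_w0 *w0, const struct pkt *m,
			  const uint16_t *rx_chan_base)
{
	w0->chan = rx_chan_base[m->port];
}
```

Caching the table base once avoids an idev lookup per packet; the per-port entry itself is populated at `roc_nix_lf_alloc()` time from `rsp->rx_chan_base`.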
* [PATCH v3 10/24] crypto/cnxk: enable Rx inject for 103
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (8 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
` (14 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Enable the Rx inject feature for 103XX.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cnxk_cryptodev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index b1684e56a7..1eede2e59c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -24,7 +24,7 @@ cnxk_cpt_default_ff_get(void)
if (roc_model_is_cn10k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
- if (roc_model_is_cn10ka_b0())
+ if (roc_model_is_cn10ka_b0() || roc_model_is_cn10kb())
ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
return ff;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 11/24] crypto/cnxk: rename security caps as IPsec security caps
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (9 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
` (13 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Security capabilities will vary between IPsec and the other new offloads.
Rename the existing security caps to indicate that they are IPsec-specific.
Rename and widen the scope of common functions in order to avoid code
duplication; these functions can be used by both IPsec and TLS.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 13 ++--
drivers/common/cnxk/cnxk_security.h | 17 +++--
drivers/common/cnxk/version.map | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 18 ++++-
drivers/crypto/cnxk/cn10k_ipsec.c | 46 +++++++-----
drivers/crypto/cnxk/cn10k_ipsec.h | 9 ++-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 18 ++---
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 10 +--
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 73 ++++++++++---------
drivers/crypto/cnxk/cnxk_sg.h | 4 +-
11 files changed, 123 insertions(+), 94 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index a8c3ba90cd..81991c4697 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,9 +8,8 @@
#include "roc_api.h"
-static void
-ipsec_hmac_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform,
- uint8_t *hmac_opad_ipad)
+void
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -192,7 +191,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +740,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
}
switch (length) {
@@ -1374,7 +1373,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
@@ -1441,7 +1440,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- ipsec_hmac_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 2277ce9144..fabf694df4 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -61,14 +61,15 @@ bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
/* [CN9K] */
-int __roc_api
-cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_inb_sa *in_sa);
+int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_inb_sa *in_sa);
-int __roc_api
-cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct roc_ie_on_outb_sa *out_sa);
+int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
+ struct rte_crypto_sym_xform *crypto_xform,
+ struct roc_ie_on_outb_sa *out_sa);
+
+__rte_internal
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index f84382c401..15fd5710d2 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,6 +1,7 @@
INTERNAL {
global:
+ cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 03ecf583af..084c8d3a24 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -80,8 +80,9 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
}
static __rte_always_inline int __rte_hot
-cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
- struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
{
struct rte_crypto_sym_op *sym_op = op->sym;
int ret;
@@ -91,7 +92,7 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return -ENOTSUP;
}
- if (sess->is_outbound)
+ if (sess->ipsec.is_outbound)
ret = process_outb_sa(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
is_sg_ver2);
else
@@ -100,6 +101,17 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
+ struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+
+ return 0;
+}
+
static inline int
cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[],
struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index d08a1067ca..a9c673ea83 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -20,7 +20,7 @@
#include "roc_api.h"
static uint64_t
-ipsec_cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
{
union cpt_inst_w7 w7;
@@ -64,7 +64,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, out_sa);
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, out_sa);
#ifdef LA_IPSEC_DEBUG
/* Use IV from application in debug mode */
@@ -89,7 +89,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
}
#endif
- sec_sess->is_outbound = true;
+ sec_sess->ipsec.is_outbound = true;
/* Get Rlen calculation data */
ret = cnxk_ipsec_outb_rlens_get(&rlens, ipsec_xfrm, crypto_xfrm);
@@ -150,6 +150,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, out_sa, false);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -189,8 +190,8 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->is_outbound = false;
- sec_sess->inst.w7 = ipsec_cpt_inst_w7_get(roc_cpt, in_sa);
+ sec_sess->ipsec.is_outbound = false;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, in_sa);
/* Save index/SPI in cookie, specific required for Rx Inject */
sa_dptr->w1.s.cookie = 0xFFFFFFFF;
@@ -209,7 +210,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
*/
if (ipsec_xfrm->options.ip_csum_enable) {
param1.s.ip_csum_disable = ROC_IE_OT_SA_INNER_PKT_IP_CSUM_ENABLE;
- sec_sess->ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ sec_sess->ipsec.ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
}
/* Disable L4 checksum verification by default */
@@ -250,6 +251,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Trigger CTX flush so that data is written back to DRAM */
roc_cpt_lf_ctx_flush(lf, in_sa, true);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
sa_dptr_free:
@@ -298,16 +300,15 @@ cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
return -EINVAL;
- if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
- return -ENOTSUP;
-
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform, sess);
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
+ }
+ return -ENOTSUP;
}
static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
{
struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
@@ -318,9 +319,6 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
void *sa_dptr = NULL;
int ret;
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
sess = (struct cn10k_sec_session *)sec_sess;
qp = crypto_dev->data->queue_pairs[0];
@@ -336,7 +334,7 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
ret = -1;
- if (sess->is_outbound) {
+ if (sess->ipsec.is_outbound) {
sa_dptr = plt_zmalloc(sizeof(struct roc_ot_ipsec_outb_sa), 8);
if (sa_dptr != NULL) {
roc_ot_ipsec_outb_sa_init(sa_dptr);
@@ -376,6 +374,18 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
return 0;
}
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
+
+ return -EINVAL;
+}
+
static unsigned int
cn10k_sec_session_get_size(void *device __rte_unused)
{
@@ -405,7 +415,7 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
sa = &priv->sa;
- if (priv->is_outbound) {
+ if (priv->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 03ac994001..2b7a3e6acf 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -29,13 +29,18 @@ struct cn10k_sec_session {
/** PMD private space */
+ enum rte_security_session_protocol proto;
/** Pre-populated CPT inst words */
struct cnxk_cpt_inst_tmpl inst;
uint16_t max_extended_len;
uint16_t iv_offset;
uint8_t iv_length;
- uint8_t ip_csum;
- bool is_outbound;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
/** Queue pair */
struct cnxk_cpt_qp *qp;
/** Userdata to be set for Rx inject */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 8e208eb2ca..af2c85022e 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -121,7 +121,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -132,7 +132,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -170,7 +170,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -181,7 +181,7 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
@@ -211,7 +211,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
dptr = rte_pktmbuf_mtod(m_src, uint64_t);
inst->dptr = dptr;
- m_src->ol_flags |= (uint64_t)sess->ip_csum;
+ m_src->ol_flags |= (uint64_t)sess->ipsec.ip_csum;
} else if (is_sg_ver2 == false) {
struct roc_sglist_comp *scatter_comp, *gather_comp;
uint32_t g_size_bytes, s_size_bytes;
@@ -234,7 +234,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Input Gather List */
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -242,7 +242,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -270,7 +270,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
i = 0;
gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
- i = fill_ipsec_sg2_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
@@ -278,7 +278,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
/* Output Scatter List */
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- i = fill_ipsec_sg2_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 3d0db72775..3e9f1e7efb 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -132,7 +132,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
i = fill_sg_comp(gather_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -146,7 +146,7 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -228,7 +228,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
*/
i = 0;
gather_comp = (struct roc_sglist_comp *)((uint8_t *)m_data + 8);
- i = fill_ipsec_sg_comp_from_pkt(gather_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
@@ -239,7 +239,7 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
i = fill_sg_comp(scatter_comp, i, (uint64_t)hdr, hdr_len);
- i = fill_ipsec_sg_comp_from_pkt(scatter_comp, i, m_src);
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 5d974690fc..6f21d91812 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,10 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_MAX_CAPS 9
+
/**
* Device private data
*/
@@ -23,8 +24,7 @@ struct cnxk_cpt_vf {
uint16_t *rx_chan_base;
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
- struct rte_cryptodev_capabilities
- sec_crypto_caps[CNXK_SEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 2676b52832..178f510a63 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -20,13 +20,14 @@
RTE_DIM(caps_##name)); \
} while (0)
-#define SEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+#define SEC_IPSEC_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
do { \
if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
(hw_caps[CPT_ENG_TYPE_IE].name) || \
(hw_caps[CPT_ENG_TYPE_AE].name)) \
- sec_caps_add(cnxk_caps, cur_pos, sec_caps_##name, \
- RTE_DIM(sec_caps_##name)); \
+ sec_ipsec_caps_add(cnxk_caps, cur_pos, \
+ sec_ipsec_caps_##name, \
+ RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
static const struct rte_cryptodev_capabilities caps_mul[] = {
@@ -1184,7 +1185,7 @@ static const struct rte_cryptodev_capabilities caps_end[] = {
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_aes[] = {
{ /* AES GCM */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1332,7 +1333,7 @@ static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_des[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_des[] = {
{ /* DES */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1375,7 +1376,7 @@ static const struct rte_cryptodev_capabilities sec_caps_des[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_sha1_sha2[] = {
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1478,7 +1479,7 @@ static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
},
};
-static const struct rte_cryptodev_capabilities sec_caps_null[] = {
+static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
{ /* NULL (CIPHER) */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -1691,29 +1692,28 @@ cnxk_crypto_capabilities_get(struct cnxk_cpt_vf *vf)
}
static void
-sec_caps_limit_check(int *cur_pos, int nb_caps)
+sec_ipsec_caps_limit_check(int *cur_pos, int nb_caps)
{
- PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_CRYPTO_MAX_CAPS);
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS);
}
static void
-sec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
- const struct rte_cryptodev_capabilities *caps, int nb_caps)
+sec_ipsec_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
{
- sec_caps_limit_check(cur_pos, nb_caps);
+ sec_ipsec_caps_limit_check(cur_pos, nb_caps);
memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
*cur_pos += nb_caps;
}
static void
-cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
- int *cur_pos)
+cn10k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos)
{
const struct rte_cryptodev_capabilities *cap;
unsigned int i;
- sec_caps_limit_check(cur_pos, 1);
+ sec_ipsec_caps_limit_check(cur_pos, 1);
/* NULL auth */
for (i = 0; i < RTE_DIM(caps_null); i++) {
@@ -1727,7 +1727,7 @@ cn10k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[],
}
static void
-cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
+cn9k_sec_ipsec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
{
struct rte_cryptodev_capabilities *caps;
@@ -1747,27 +1747,26 @@ cn9k_sec_crypto_caps_update(struct rte_cryptodev_capabilities cnxk_caps[])
}
static void
-sec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
- union cpt_eng_caps *hw_caps)
+sec_ipsec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
{
int cur_pos = 0;
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
- SEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
if (roc_model_is_cn10k())
- cn10k_sec_crypto_caps_update(cnxk_caps, &cur_pos);
+ cn10k_sec_ipsec_crypto_caps_update(cnxk_caps, &cur_pos);
else
- cn9k_sec_crypto_caps_update(cnxk_caps);
+ cn9k_sec_ipsec_crypto_caps_update(cnxk_caps);
- sec_caps_add(cnxk_caps, &cur_pos, sec_caps_null,
- RTE_DIM(sec_caps_null));
- sec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, sec_ipsec_caps_null, RTE_DIM(sec_ipsec_caps_null));
+ sec_ipsec_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
static void
-cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
+cnxk_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
sec_cap->ipsec.options.udp_encap = 1;
sec_cap->ipsec.options.copy_df = 1;
@@ -1775,7 +1774,7 @@ cnxk_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn10k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1797,7 +1796,7 @@ cn10k_sec_caps_update(struct rte_security_capability *sec_cap)
}
static void
-cn9k_sec_caps_update(struct rte_security_capability *sec_cap)
+cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
#ifdef LA_IPSEC_DEBUG
@@ -1814,22 +1813,24 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
unsigned long i;
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
- sec_crypto_caps_populate(vf->sec_crypto_caps, vf->cpt.hw_caps);
+ sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
for (i = 0; i < RTE_DIM(sec_caps_templ) - 1; i++) {
- vf->sec_caps[i].crypto_capabilities = vf->sec_crypto_caps;
- cnxk_sec_caps_update(&vf->sec_caps[i]);
+ if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ vf->sec_caps[i].crypto_capabilities = vf->sec_ipsec_crypto_caps;
- if (roc_model_is_cn10k())
- cn10k_sec_caps_update(&vf->sec_caps[i]);
+ cnxk_sec_ipsec_caps_update(&vf->sec_caps[i]);
- if (roc_model_is_cn9k())
- cn9k_sec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn10k())
+ cn10k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ if (roc_model_is_cn9k())
+ cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ }
}
}
diff --git a/drivers/crypto/cnxk/cnxk_sg.h b/drivers/crypto/cnxk/cnxk_sg.h
index 65244199bd..aa074581d7 100644
--- a/drivers/crypto/cnxk/cnxk_sg.h
+++ b/drivers/crypto/cnxk/cnxk_sg.h
@@ -129,7 +129,7 @@ fill_sg_comp_from_iov(struct roc_sglist_comp *list, uint32_t i, struct roc_se_io
}
static __rte_always_inline uint32_t
-fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
@@ -150,7 +150,7 @@ fill_ipsec_sg_comp_from_pkt(struct roc_sglist_comp *list, uint32_t i, struct rte
}
static __rte_always_inline uint32_t
-fill_ipsec_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
+fill_sg2_comp_from_pkt(struct roc_sg2list_comp *list, uint32_t i, struct rte_mbuf *pkt)
{
uint32_t buf_sz;
void *vaddr;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 12/24] common/cnxk: update opad-ipad gen to handle TLS
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (10 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 13/24] common/cnxk: add TLS record contexts Anoob Joseph
` (12 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
For TLS opcodes, ipad is at offset 64, unlike the packed layout used for
IPsec where it is at offset 24. Extend the function to handle TLS contexts
as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 15 ++++++++-------
drivers/common/cnxk/cnxk_security.h | 3 ++-
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 81991c4697..bdb04fe142 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -9,7 +9,8 @@
#include "roc_api.h"
void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad)
+cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls)
{
const uint8_t *key = auth_xform->auth.key.data;
uint32_t length = auth_xform->auth.key.length;
@@ -29,11 +30,11 @@ cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_op
switch (auth_xform->auth.algo) {
case RTE_CRYPTO_AUTH_MD5_HMAC:
roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[24]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
@@ -191,7 +192,7 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -740,7 +741,7 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
}
switch (length) {
@@ -1373,7 +1374,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
@@ -1440,7 +1441,7 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad);
+ cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index fabf694df4..86ec657cb0 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -70,6 +70,7 @@ int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipse
struct roc_ie_on_outb_sa *out_sa);
__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad);
+void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
+ bool is_tls);
#endif /* _CNXK_SECURITY_H__ */
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
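The patch above selects the ipad offset (24 for IPsec, 64 for TLS) when writing the precomputed HMAC pads into the hardware context. As a minimal illustrative sketch only (the helper names and the RFC 2104 pad derivation below are assumptions, not the driver's actual roc_hash_* code, and keys longer than one block are not handled), the offset selection and pad derivation look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Per RFC 2104, opad = key ^ 0x5c and ipad = key ^ 0x36, each padded to
 * the hash block size. The context stores the opad partial hash at offset
 * 0; the ipad partial hash lands at offset 24 for the packed IPsec layout
 * and at offset 64 for TLS, which is what the is_tls flag selects.
 */
#define HMAC_BLOCK_SZ 64

static size_t ipad_offset(bool is_tls)
{
	return is_tls ? 64 : 24;
}

/* Derive the raw pads for a key of at most one block (assumption). */
static void derive_pads(const uint8_t *key, size_t keylen,
			uint8_t opad[HMAC_BLOCK_SZ], uint8_t ipad[HMAC_BLOCK_SZ])
{
	size_t i;

	memset(opad, 0, HMAC_BLOCK_SZ);
	memset(ipad, 0, HMAC_BLOCK_SZ);
	memcpy(opad, key, keylen);
	memcpy(ipad, key, keylen);
	for (i = 0; i < HMAC_BLOCK_SZ; i++) {
		opad[i] ^= 0x5c;
		ipad[i] ^= 0x36;
	}
}
```

In the real driver the pads are then run through the relevant hash (e.g. roc_hash_sha1_gen) before being stored at the chosen offset.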
* [PATCH v3 13/24] common/cnxk: add TLS record contexts
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (11 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:30 ` [PATCH v3 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
` (11 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add TLS record read and write contexts.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_cpt.h | 4 +-
drivers/common/cnxk/roc_ie_ot_tls.h | 199 ++++++++++++++++++++++++++++
drivers/common/cnxk/roc_se.h | 11 ++
3 files changed, 211 insertions(+), 3 deletions(-)
create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 9d1173d88a..7ad89bf243 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -55,6 +55,7 @@
#define ROC_CPT_AES_CBC_IV_LEN 16
#define ROC_CPT_SHA1_HMAC_LEN 12
#define ROC_CPT_SHA2_HMAC_LEN 16
+#define ROC_CPT_DES_IV_LEN 8
#define ROC_CPT_DES3_KEY_LEN 24
#define ROC_CPT_AES128_KEY_LEN 16
@@ -71,9 +72,6 @@
#define ROC_CPT_DES_BLOCK_LENGTH 8
#define ROC_CPT_AES_BLOCK_LENGTH 16
-#define ROC_CPT_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define ROC_CPT_AES_CBC_ROUNDUP_BYTE_LEN 16
-
/* Salt length for AES-CTR/GCM/CCM and AES-GMAC */
#define ROC_CPT_SALT_LEN 4
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
new file mode 100644
index 0000000000..206c3104e6
--- /dev/null
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef __ROC_IE_OT_TLS_H__
+#define __ROC_IE_OT_TLS_H__
+
+#include "roc_platform.h"
+
+#define ROC_IE_OT_TLS_CTX_ILEN 1
+#define ROC_IE_OT_TLS_CTX_HDR_SIZE 1
+#define ROC_IE_OT_TLS_AR_WIN_SIZE_MAX 4096
+#define ROC_IE_OT_TLS_LOG_MIN_AR_WIN_SIZE_M1 5
+
+/* u64 array size to fit anti replay window bits */
+#define ROC_IE_OT_TLS_AR_WINBITS_SZ \
+ (PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
+
+/* CN10K TLS opcodes */
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+
+#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
+#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
+#define ROC_IE_OT_TLS_CTX_MAX_IV_LEN 16
+
+enum roc_ie_ot_tls_mac_type {
+ ROC_IE_OT_TLS_MAC_MD5 = 1,
+ ROC_IE_OT_TLS_MAC_SHA1 = 2,
+ ROC_IE_OT_TLS_MAC_SHA2_256 = 4,
+ ROC_IE_OT_TLS_MAC_SHA2_384 = 5,
+ ROC_IE_OT_TLS_MAC_SHA2_512 = 6,
+};
+
+enum roc_ie_ot_tls_cipher_type {
+ ROC_IE_OT_TLS_CIPHER_3DES = 1,
+ ROC_IE_OT_TLS_CIPHER_AES_CBC = 3,
+ ROC_IE_OT_TLS_CIPHER_AES_GCM = 7,
+ ROC_IE_OT_TLS_CIPHER_AES_CCM = 10,
+};
+
+enum roc_ie_ot_tls_ver {
+ ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
+ ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+};
+
+enum roc_ie_ot_tls_aes_key_len {
+ ROC_IE_OT_TLS_AES_KEY_LEN_128 = 1,
+ ROC_IE_OT_TLS_AES_KEY_LEN_256 = 3,
+};
+
+enum {
+ ROC_IE_OT_TLS_IV_SRC_DEFAULT = 0,
+ ROC_IE_OT_TLS_IV_SRC_FROM_SA = 1,
+};
+
+struct roc_ie_ot_tls_read_ctx_update_reg {
+ uint64_t ar_base;
+ uint64_t ar_valid_mask;
+ uint64_t hard_life;
+ uint64_t soft_life;
+ uint64_t mib_octs;
+ uint64_t mib_pkts;
+ uint64_t ar_winbits[ROC_IE_OT_TLS_AR_WINBITS_SZ];
+};
+
+union roc_ie_ot_tls_param2 {
+ uint16_t u16;
+ struct {
+ uint8_t msg_type;
+ uint8_t rsvd;
+ } s;
+};
+
+struct roc_ie_ot_tls_read_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t ar_win : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t ctx_id : 16;
+
+ uint64_t orig_pkt_fabs : 1;
+ uint64_t orig_pkt_free : 1;
+ uint64_t pkind : 6;
+
+ uint64_t rsvd0 : 1;
+ uint64_t et_ovrwr : 1;
+ uint64_t pkt_output : 2;
+ uint64_t pkt_format : 1;
+ uint64_t defrag_opt : 2;
+ uint64_t x2p_dst : 1;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd1 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd2 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd3;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t rsvd4 : 50;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd5;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 - Word32 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+};
+
+struct roc_ie_ot_tls_write_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t rsvd0 : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t rsvd1 : 32;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd2 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd3 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd4;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t iv_at_cptr : 1;
+ uint64_t rsvd5 : 49;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd6;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
+
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 */
+ uint64_t w26_rsvd7;
+
+ /* Word27 */
+ uint64_t seq_num;
+};
+#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d8cbd58c9a..abb8c6a149 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -5,6 +5,8 @@
#ifndef __ROC_SE_H__
#define __ROC_SE_H__
+#include "roc_constants.h"
+
/* SE opcodes */
#define ROC_SE_MAJOR_OP_FC 0x33
#define ROC_SE_FC_MINOR_OP_ENCRYPT 0x0
@@ -162,6 +164,15 @@ typedef enum {
ROC_SE_ERR_GC_ICV_MISCOMPARE = 0x4c,
ROC_SE_ERR_GC_DATA_UNALIGNED = 0x4d,
+ ROC_SE_ERR_SSL_RECORD_LEN_INVALID = 0x82,
+ ROC_SE_ERR_SSL_CTX_LEN_INVALID = 0x83,
+ ROC_SE_ERR_SSL_CIPHER_UNSUPPORTED = 0x84,
+ ROC_SE_ERR_SSL_MAC_UNSUPPORTED = 0x85,
+ ROC_SE_ERR_SSL_VERSION_UNSUPPORTED = 0x86,
+ ROC_SE_ERR_SSL_MAC_MISMATCH = 0x89,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ_OUT_OF_WINDOW = 0xC1,
+ ROC_SE_ERR_SSL_PKT_REPLAY_SEQ = 0xC9,
+
/* API Layer */
ROC_SE_ERR_REQ_PENDING = 0xfe,
ROC_SE_ERR_REQ_TIMEOUT = 0xff,
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
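The new roc_ie_ot_tls.h sizes the anti-replay window bitmap by rounding the 4096-bit maximum window up to whole 64-bit words. A small sketch of the equivalent arithmetic (ALIGN_CEIL below is a local stand-in for the PLT_ALIGN_CEIL macro, which is an assumption about its semantics):

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to a multiple of a, then count 64-bit words. This mirrors
 * ROC_IE_OT_TLS_AR_WINBITS_SZ: a 4096-bit window needs 64 u64 entries.
 */
#define BITS_PER_U64 64
#define AR_WIN_SIZE_MAX 4096

#define ALIGN_CEIL(x, a) (((x) + (a) - 1) / (a) * (a))
#define AR_WINBITS_SZ (ALIGN_CEIL(AR_WIN_SIZE_MAX, BITS_PER_U64) / BITS_PER_U64)

static uint32_t ar_winbits_words(uint32_t win_bits)
{
	return ALIGN_CEIL(win_bits, BITS_PER_U64) / BITS_PER_U64;
}
```

This is why struct roc_ie_ot_tls_read_ctx_update_reg carries a fixed ar_winbits[64] array regardless of the configured window size.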
* [PATCH v3 14/24] crypto/cnxk: separate IPsec from security common code
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (12 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 13/24] common/cnxk: add TLS record contexts Anoob Joseph
@ 2024-01-17 10:30 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
` (10 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:30 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
The current structs and functions assume IPsec offload only. Separate them
out to allow for the addition of TLS.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 127 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 61 +++++++++++
drivers/crypto/cnxk/cn10k_ipsec.c | 127 +++-------------------
drivers/crypto/cnxk/cn10k_ipsec.h | 45 +++-----
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 1 +
drivers/crypto/cnxk/meson.build | 1 +
7 files changed, 218 insertions(+), 146 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 2fd4df3c5d..5ed918e18e 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -12,7 +12,7 @@
#include "cn10k_cryptodev.h"
#include "cn10k_cryptodev_ops.h"
-#include "cn10k_ipsec.h"
+#include "cn10k_cryptodev_sec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_capabilities.h"
#include "cnxk_cryptodev_sec.h"
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
new file mode 100644
index 0000000000..12e53f18db
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <rte_security.h>
+
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev_ops.h"
+
+static int
+cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
+ struct rte_security_session *sess)
+{
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_vf *vf;
+ struct cnxk_cpt_qp *qp;
+
+ if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL) {
+ plt_err("Setup cryptodev queue pair before creating security session");
+ return -EPERM;
+ }
+
+ vf = crypto_dev->data->dev_private;
+
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
+ return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
+ }
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+
+ return -EINVAL;
+}
+
+static unsigned int
+cn10k_sec_session_get_size(void *dev __rte_unused)
+{
+ return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
+}
+
+static int
+cn10k_sec_session_stats_get(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_stats *stats)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_stats_get(qp, cn10k_sec_sess, stats);
+
+ return -ENOTSUP;
+}
+
+static int
+cn10k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_session_conf *conf)
+{
+ struct cn10k_sec_session *cn10k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+ struct cnxk_cpt_vf *vf;
+
+ if (sec_sess == NULL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL)
+ return -EINVAL;
+
+ vf = crypto_dev->data->dev_private;
+
+ cn10k_sec_sess = (struct cn10k_sec_session *)sec_sess;
+
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn10k_ipsec_session_update(vf, qp, cn10k_sec_sess, conf);
+
+ return -ENOTSUP;
+}
+
+/* Update platform specific security ops */
+void
+cn10k_sec_ops_override(void)
+{
+ /* Update platform specific ops */
+ cnxk_sec_ops.session_create = cn10k_sec_session_create;
+ cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
+ cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
+ cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
+ cnxk_sec_ops.session_update = cn10k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
+}
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
new file mode 100644
index 0000000000..016fa112e1
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef __CN10K_CRYPTODEV_SEC_H__
+#define __CN10K_CRYPTODEV_SEC_H__
+
+#include <rte_security.h>
+
+#include "roc_constants.h"
+#include "roc_cpt.h"
+
+#include "cn10k_ipsec.h"
+
+struct cn10k_sec_session {
+ struct rte_security_session rte_sess;
+
+ /** PMD private space */
+
+ enum rte_security_session_protocol proto;
+ /** Pre-populated CPT inst words */
+ struct cnxk_cpt_inst_tmpl inst;
+ uint16_t max_extended_len;
+ uint16_t iv_offset;
+ uint8_t iv_length;
+ union {
+ struct {
+ uint8_t ip_csum;
+ bool is_outbound;
+ } ipsec;
+ };
+ /** Queue pair */
+ struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
+
+ /**
+ * End of SW mutable area
+ */
+ union {
+ struct cn10k_ipsec_sa sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+static inline uint64_t
+cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *cptr)
+{
+ union cpt_inst_w7 w7;
+
+ w7.u64 = 0;
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
+ w7.s.ctx_val = 1;
+ w7.s.cptr = (uint64_t)cptr;
+ rte_mb();
+
+ return w7.u64;
+}
+
+void cn10k_sec_ops_override(void);
+
+#endif /* __CN10K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index a9c673ea83..74d6cd70d1 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -11,6 +11,7 @@
#include <rte_udp.h>
#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -19,20 +20,6 @@
#include "roc_api.h"
-static uint64_t
-cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *sa)
-{
- union cpt_inst_w7 w7;
-
- w7.u64 = 0;
- w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
- w7.s.ctx_val = 1;
- w7.s.cptr = (uint64_t)sa;
- rte_mb();
-
- return w7.u64;
-}
-
static int
cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -260,29 +247,19 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
return ret;
}
-static int
-cn10k_ipsec_session_create(void *dev,
+int
+cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm,
struct rte_security_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_vf *vf;
- struct cnxk_cpt_qp *qp;
int ret;
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL) {
- plt_err("Setup cpt queue pair before creating security session");
- return -EPERM;
- }
-
ret = cnxk_ipsec_xform_verify(ipsec_xfrm, crypto_xfrm);
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
@@ -293,38 +270,15 @@ cn10k_ipsec_session_create(void *dev,
(struct cn10k_sec_session *)sess);
}
-static int
-cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
- struct rte_security_session *sess)
-{
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -EINVAL;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
- ((struct cn10k_sec_session *)sess)->userdata = conf->userdata;
- return cn10k_ipsec_session_create(device, &conf->ipsec, conf->crypto_xform, sess);
- }
- return -ENOTSUP;
-}
-
-static int
-cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
{
- struct rte_cryptodev *crypto_dev = dev;
union roc_ot_ipsec_sa_word2 *w2;
- struct cn10k_sec_session *sess;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
struct roc_cpt_lf *lf;
void *sa_dptr = NULL;
int ret;
- sess = (struct cn10k_sec_session *)sec_sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (unlikely(qp == NULL))
- return -ENOTSUP;
-
lf = &qp->lf;
sa = &sess->sa;
@@ -374,48 +328,18 @@ cn10k_sec_ipsec_session_destroy(void *dev, struct rte_security_session *sec_sess
return 0;
}
-static int
-cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+int
+cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats)
{
- if (unlikely(sec_sess == NULL))
- return -EINVAL;
-
- if (((struct cn10k_sec_session *)sec_sess)->proto == RTE_SECURITY_PROTOCOL_IPSEC)
- return cn10k_sec_ipsec_session_destroy(dev, sec_sess);
-
- return -EINVAL;
-}
-
-static unsigned int
-cn10k_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct cn10k_sec_session) - sizeof(struct rte_security_session);
-}
-
-static int
-cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
- struct rte_security_stats *stats)
-{
- struct rte_cryptodev *crypto_dev = device;
struct roc_ot_ipsec_outb_sa *out_sa;
struct roc_ot_ipsec_inb_sa *in_sa;
- struct cn10k_sec_session *priv;
struct cn10k_ipsec_sa *sa;
- struct cnxk_cpt_qp *qp;
-
- if (unlikely(sess == NULL))
- return -EINVAL;
-
- priv = (struct cn10k_sec_session *)sess;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
- sa = &priv->sa;
+ sa = &sess->sa;
- if (priv->ipsec.is_outbound) {
+ if (sess->ipsec.is_outbound) {
out_sa = &sa->out_sa;
roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
rte_delay_ms(1);
@@ -432,23 +356,13 @@ cn10k_sec_session_stats_get(void *device, struct rte_security_session *sess,
return 0;
}
-static int
-cn10k_sec_session_update(void *device, struct rte_security_session *sess,
- struct rte_security_session_conf *conf)
+int
+cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess, struct rte_security_session_conf *conf)
{
- struct rte_cryptodev *crypto_dev = device;
struct roc_cpt *roc_cpt;
- struct cnxk_cpt_qp *qp;
- struct cnxk_cpt_vf *vf;
int ret;
- if (sess == NULL)
- return -EINVAL;
-
- qp = crypto_dev->data->queue_pairs[0];
- if (qp == NULL)
- return -EINVAL;
-
if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return -ENOTSUP;
@@ -456,23 +370,8 @@ cn10k_sec_session_update(void *device, struct rte_security_session *sess,
if (ret)
return ret;
- vf = crypto_dev->data->dev_private;
roc_cpt = &vf->cpt;
return cn10k_ipsec_outb_sa_create(roc_cpt, &qp->lf, &conf->ipsec, conf->crypto_xform,
(struct cn10k_sec_session *)sess);
}
-
-/* Update platform specific security ops */
-void
-cn10k_sec_ops_override(void)
-{
- /* Update platform specific ops */
- cnxk_sec_ops.session_create = cn10k_sec_session_create;
- cnxk_sec_ops.session_destroy = cn10k_sec_session_destroy;
- cnxk_sec_ops.session_get_size = cn10k_sec_session_get_size;
- cnxk_sec_ops.session_stats_get = cn10k_sec_session_stats_get;
- cnxk_sec_ops.session_update = cn10k_sec_session_update;
- cnxk_sec_ops.inb_pkt_rx_inject = cn10k_cryptodev_sec_inb_rx_inject;
- cnxk_sec_ops.rx_inject_configure = cn10k_cryptodev_sec_rx_inject_configure;
-}
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.h b/drivers/crypto/cnxk/cn10k_ipsec.h
index 2b7a3e6acf..0d1e14a065 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec.h
@@ -11,9 +11,12 @@
#include "roc_constants.h"
#include "roc_ie_ot.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
#include "cnxk_ipsec.h"
-typedef void *CN10K_SA_CONTEXT_MARKER[0];
+/* Forward declaration */
+struct cn10k_sec_session;
struct cn10k_ipsec_sa {
union {
@@ -24,34 +27,14 @@ struct cn10k_ipsec_sa {
};
} __rte_aligned(ROC_ALIGN);
-struct cn10k_sec_session {
- struct rte_security_session rte_sess;
-
- /** PMD private space */
-
- enum rte_security_session_protocol proto;
- /** Pre-populated CPT inst words */
- struct cnxk_cpt_inst_tmpl inst;
- uint16_t max_extended_len;
- uint16_t iv_offset;
- uint8_t iv_length;
- union {
- struct {
- uint8_t ip_csum;
- bool is_outbound;
- } ipsec;
- };
- /** Queue pair */
- struct cnxk_cpt_qp *qp;
- /** Userdata to be set for Rx inject */
- void *userdata;
-
- /**
- * End of SW mutable area
- */
- struct cn10k_ipsec_sa sa;
-} __rte_aligned(ROC_ALIGN);
-
-void cn10k_sec_ops_override(void);
-
+int cn10k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+int cn10k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+int cn10k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess,
+ struct rte_security_stats *stats);
+int cn10k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn10k_sec_session *sess,
+ struct rte_security_session_conf *conf);
#endif /* __CN10K_IPSEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index af2c85022e..a30b8e413d 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -11,6 +11,7 @@
#include "roc_ie.h"
#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_ipsec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index 3d9a0dbbf0..d6fafd43d9 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -14,6 +14,7 @@ sources = files(
'cn9k_ipsec.c',
'cn10k_cryptodev.c',
'cn10k_cryptodev_ops.c',
+ 'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
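The refactor above moves the common rte_security entry points into cn10k_cryptodev_sec.c, which now dispatch on the session protocol and call the protocol-specific handlers (IPsec today, TLS in later patches of this series). A minimal sketch of that dispatch pattern, with illustrative names rather than the driver's real types:

```c
#include <assert.h>
#include <stddef.h>

/* Generic ops inspect the session's protocol and forward to the
 * protocol-specific implementation, returning -ENOTSUP for protocols
 * not yet wired up (as the TLS path is until a later patch).
 */
#define ERR_EINVAL  (-22)
#define ERR_ENOTSUP (-95)

enum sec_proto { SEC_PROTO_IPSEC, SEC_PROTO_TLS_RECORD };

struct sec_session {
	enum sec_proto proto;
};

static int ipsec_session_destroy(struct sec_session *s)
{
	(void)s;
	return 0; /* protocol-specific teardown would go here */
}

static int sec_session_destroy(struct sec_session *s)
{
	if (s == NULL)
		return ERR_EINVAL;

	if (s->proto == SEC_PROTO_IPSEC)
		return ipsec_session_destroy(s);

	return ERR_ENOTSUP;
}
```

The benefit of the split is that cn10k_ipsec.c no longer needs to know about queue-pair lookup or the rte_security session layout; it receives the already-validated qp and vf pointers from the common layer.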
* [PATCH v3 15/24] crypto/cnxk: add TLS record session ops
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (13 preceding siblings ...)
2024-01-17 10:30 ` [PATCH v3 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 16/24] crypto/cnxk: add TLS record datapath handling Anoob Joseph
` (9 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS record session ops for creating and destroying security
sessions. Add support for both read and write sessions.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 8 +
drivers/crypto/cnxk/cn10k_tls.c | 758 ++++++++++++++++++++++
drivers/crypto/cnxk/cn10k_tls.h | 35 +
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 802 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 016fa112e1..703e71475a 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -11,6 +11,7 @@
#include "roc_cpt.h"
#include "cn10k_ipsec.h"
+#include "cn10k_tls.h"
struct cn10k_sec_session {
struct rte_security_session rte_sess;
@@ -28,6 +29,12 @@ struct cn10k_sec_session {
uint8_t ip_csum;
bool is_outbound;
} ipsec;
+ struct {
+ uint8_t enable_padding : 1;
+ uint8_t hdr_len : 4;
+ uint8_t rvsd : 3;
+ bool is_write;
+ } tls;
};
/** Queue pair */
struct cnxk_cpt_qp *qp;
@@ -39,6 +46,7 @@ struct cn10k_sec_session {
*/
union {
struct cn10k_ipsec_sa sa;
+ struct cn10k_tls_record tls_rec;
};
} __rte_aligned(ROC_ALIGN);
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
new file mode 100644
index 0000000000..afcf7ba6f1
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -0,0 +1,758 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#include <cryptodev_pmd.h>
+
+#include "roc_cpt.h"
+#include "roc_se.h"
+
+#include "cn10k_cryptodev_sec.h"
+#include "cn10k_tls.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_security.h"
+
+static int
+tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = crypto_xform->cipher.algo;
+ uint16_t keylen = crypto_xform->cipher.key.length;
+
+ if (((c_algo == RTE_CRYPTO_CIPHER_NULL) && (keylen == 0)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_3DES_CBC) && (keylen == 24)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_AES_CBC) && ((keylen == 16) || (keylen == 32))))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_auth_algorithm a_algo = crypto_xform->auth.algo;
+ uint16_t keylen = crypto_xform->auth.key.length;
+
+ if (((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) && (keylen == 16)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) && (keylen == 20)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) && (keylen == 32)))
+ return 0;
+
+ return -EINVAL;
+}
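The two verify helpers above reduce to key-length tables. A condensed sketch (algorithm identifiers are illustrative; the accepted lengths in bytes are the ones from the patch):

```c
#include <assert.h>

/* Illustrative algorithm ids; only the key lengths match the patch. */
enum { C_NULL, C_3DES, C_AES_CBC };
enum { A_MD5, A_SHA1, A_SHA256 };

/* NULL: no key; 3DES-CBC: 24B; AES-CBC: 16B or 32B. */
static int cipher_keylen_ok(int algo, int keylen)
{
	switch (algo) {
	case C_NULL:    return keylen == 0;
	case C_3DES:    return keylen == 24;
	case C_AES_CBC: return keylen == 16 || keylen == 32;
	default:        return 0;
	}
}

/* HMAC key length equals the digest size for the supported MACs. */
static int auth_keylen_ok(int algo, int keylen)
{
	switch (algo) {
	case A_MD5:    return keylen == 16;
	case A_SHA1:   return keylen == 20;
	case A_SHA256: return keylen == 32;
	default:       return 0;
	}
}
```

Note AES-192 (24-byte keys) is deliberately absent from the cipher table, matching the checks above.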
+
+static int
+tls_xform_aead_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ uint16_t keylen = crypto_xform->aead.key.length;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
+ return -EINVAL;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
+ return -EINVAL;
+
+ if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ if ((keylen == 16) || (keylen == 32))
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int
+cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ struct rte_crypto_sym_xform *auth_xform, *cipher_xform = NULL;
+ int ret = 0;
+
+ if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ return -EINVAL;
+
+ if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
+ return -EINVAL;
+
+ if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_xform_aead_verify(tls_xform, crypto_xform);
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
+ /* Egress */
+
+ /* First should be for auth in Egress */
+ if (crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return -EINVAL;
+
+ /* Next if present, should be for cipher in Egress */
+ if ((crypto_xform->next != NULL) &&
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER))
+ return -EINVAL;
+
+ auth_xform = crypto_xform;
+ cipher_xform = crypto_xform->next;
+ } else {
+ /* Ingress */
+
+ /* First can be for auth only when next is NULL in Ingress. */
+ if ((crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) &&
+ (crypto_xform->next != NULL))
+ return -EINVAL;
+ else if ((crypto_xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+ (crypto_xform->next == NULL) ||
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH))
+ return -EINVAL;
+
+ cipher_xform = crypto_xform;
+ auth_xform = crypto_xform->next;
+ }
+
+ if (cipher_xform) {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ } else {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ }
+
+ if (cipher_xform)
+ ret = tls_xform_cipher_verify(cipher_xform);
+
+ if (!ret)
+ return tls_xform_auth_verify(auth_xform);
+
+ return ret;
+}
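The chain-order rule that cnxk_tls_xform_verify() enforces for non-AEAD chains can be sketched in isolation: egress (WRITE) expects auth first with an optional cipher second, ingress (READ) expects cipher followed by auth. Names and the flattened two-slot interface below are illustrative:

```c
#include <assert.h>

/* Illustrative xform types. */
enum xf_type { XF_AUTH, XF_CIPHER };

/* n_xforms is 1 or 2; 'second' is ignored when n_xforms == 1. */
static int tls_chain_order_ok(int is_write, int n_xforms,
			      enum xf_type first, enum xf_type second)
{
	if (is_write) {
		/* Egress: auth first; cipher, if present, comes second. */
		if (first != XF_AUTH)
			return 0;
		return n_xforms == 1 || second == XF_CIPHER;
	}
	/* Ingress: cipher followed by auth. */
	return n_xforms == 2 && first == XF_CIPHER && second == XF_AUTH;
}
```

This matches the branches above: an egress chain starting with cipher, or an ingress chain that is not cipher-then-auth, is rejected with -EINVAL.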
+
+static int
+tls_write_rlens_get(struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ enum rte_crypto_cipher_algorithm c_algo = RTE_CRYPTO_CIPHER_NULL;
+ enum rte_crypto_auth_algorithm a_algo = RTE_CRYPTO_AUTH_NULL;
+ uint8_t roundup_byte, tls_hdr_len;
+ uint8_t mac_len, iv_len;
+
+ switch (tls_xfrm->ver) {
+ case RTE_SECURITY_VERSION_TLS_1_2:
+ case RTE_SECURITY_VERSION_TLS_1_3:
+ tls_hdr_len = 5;
+ break;
+ case RTE_SECURITY_VERSION_DTLS_1_2:
+ tls_hdr_len = 13;
+ break;
+ default:
+ tls_hdr_len = 0;
+ break;
+ }
+
+ /* Get Cipher and Auth algo */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_hdr_len + ROC_CPT_AES_GCM_IV_LEN + ROC_CPT_AES_GCM_MAC_LEN;
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ c_algo = crypto_xfrm->cipher.algo;
+ if (crypto_xfrm->next)
+ a_algo = crypto_xfrm->next->auth.algo;
+ } else {
+ a_algo = crypto_xfrm->auth.algo;
+ if (crypto_xfrm->next)
+ c_algo = crypto_xfrm->next->cipher.algo;
+ }
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ roundup_byte = 4;
+ iv_len = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ roundup_byte = ROC_CPT_DES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_DES_IV_LEN;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ roundup_byte = ROC_CPT_AES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_AES_CBC_IV_LEN;
+ break;
+ default:
+ roundup_byte = 0;
+ iv_len = 0;
+ break;
+ }
+
+ switch (a_algo) {
+ case RTE_CRYPTO_AUTH_NULL:
+ mac_len = 0;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ mac_len = 16;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ mac_len = 20;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ mac_len = 32;
+ break;
+ default:
+ mac_len = 0;
+ break;
+ }
+
+ return tls_hdr_len + iv_len + mac_len + roundup_byte;
+}
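tls_write_rlens_get() computes the worst-case per-record expansion as header + IV + MAC + one cipher-block round-up. A tiny arithmetic sketch (the size constants in the assertions are the standard TLS/DTLS values, assumed here rather than taken from the roc_* macros):

```c
#include <assert.h>

/* Worst-case bytes added to a record on the write path:
 * record header + explicit IV + MAC + padding round-up. */
static int tls_write_rlen(int hdr_len, int iv_len, int mac_len, int block_len)
{
	return hdr_len + iv_len + mac_len + block_len;
}
```

For example, TLS 1.2 AES-CBC with HMAC-SHA1 would reserve 5 + 16 + 20 + 16 = 57 bytes of tailroom, which is what the datapath later checks against `max_extended_len`.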
+
+static void
+tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static void
+tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static size_t
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+{
+ size_t size;
+
+ /* Size varies with the anti-replay window */
+ size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+
+ if (sa->w0.s.ar_win)
+ size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
+
+ return size;
+}
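The anti-replay sizing above pairs with the encoding in tls_read_sa_fill(): the SA stores `ar_win = log2(window) - 5`, and the context tail grows by `(1 << (ar_win - 1))` 64-bit words of window bits. A sketch of that encoding (inferred from the patch, not from hardware documentation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* ar_win encoding used by the SA; valid only for power-of-two
 * window sizes of at least 64 (ar_win >= 1). */
static unsigned int ar_win_encode(uint32_t replay_win_sz)
{
	unsigned int l = 0;

	while ((1u << l) < replay_win_sz)
		l++;
	return l - 5;
}

/* Bytes of ar_winbits appended to the read context. */
static size_t ar_winbits_bytes(unsigned int ar_win)
{
	return ar_win ? (size_t)(1u << (ar_win - 1)) * sizeof(uint64_t) : 0;
}
```

The two stay consistent: a 64-entry window encodes to ar_win = 1 and consumes one 8-byte word, i.e. exactly 64 window bits.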
+
+static int
+tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint64_t *tmp, *tmp_key;
+ uint32_t replay_win_sz;
+ uint8_t *cipher_key;
+ int i, length = 0;
+ size_t offset;
+
+ /* Initialize the SA */
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ cipher_key = read_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ memcpy(cipher_key, key, length);
+ }
+
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ tmp = (uint64_t *)read_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp[i] = rte_be_to_cpu_64(tmp[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ /* Only power-of-two window sizes are supported */
+ replay_win_sz = tls_xfrm->dtls_1_2.ar_win_sz;
+ if (replay_win_sz) {
+ if (!rte_is_power_of_2(replay_win_sz) ||
+ replay_win_sz > ROC_IE_OT_TLS_AR_WIN_SIZE_MAX)
+ return -ENOTSUP;
+
+ read_sa->w0.s.ar_win = rte_log2_u32(replay_win_sz) - 5;
+ }
+ }
+
+ read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ read_sa->w0.s.aop_valid = 1;
+
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+
+ /* Word offset for HW managed CTX field */
+ read_sa->w0.s.hw_ctx_off = offset / 8;
+ read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ }
+
+ rte_wmb();
+
+ return 0;
+}
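After copying keys and the opad/ipad block, the function byte-swaps the buffer 64 bits at a time (`rte_be_to_cpu_64` over each word). A portable sketch of the same swap, using a hand-rolled bswap instead of the DPDK helper; the second assertion assumes a little-endian host:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Full byte reversal of a 64-bit word (equivalent of rte_be_to_cpu_64
 * on a little-endian host). */
static uint64_t bswap64(uint64_t v)
{
	v = ((v & 0x00ff00ff00ff00ffULL) << 8)  | ((v >> 8)  & 0x00ff00ff00ff00ffULL);
	v = ((v & 0x0000ffff0000ffffULL) << 16) | ((v >> 16) & 0x0000ffff0000ffffULL);
	return (v << 32) | (v >> 32);
}

/* Swap a key buffer in place, one 64-bit word at a time. */
static void swap_key_words(uint8_t *key, size_t len)
{
	uint64_t w;
	size_t i;

	for (i = 0; i + 8 <= len; i += 8) {
		memcpy(&w, key + i, sizeof(w));
		w = bswap64(w);
		memcpy(key + i, &w, sizeof(w));
	}
}

/* Swap a 16-byte buffer and return its first word (LE host assumed). */
static uint64_t demo_first_word(void)
{
	uint8_t buf[16] = {1, 2, 3, 4, 5, 6, 7, 8};
	uint64_t w;

	swap_key_words(buf, sizeof(buf));
	memcpy(&w, buf, sizeof(w));
	return w;
}
```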
+
+static int
+tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint8_t *cipher_key;
+ uint64_t *tmp_key;
+ int i, length = 0;
+ size_t offset;
+
+ cipher_key = write_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+ (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+ length = crypto_xfrm->aead.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ if (key != NULL && length != 0) {
+ /* Copy encryption key */
+ memcpy(cipher_key, key, length);
+ }
+ }
+
+ if (auth_xfrm != NULL) {
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+ else
+ return -EINVAL;
+
+ cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ }
+
+ tmp_key = (uint64_t *)write_sa->opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ /* Entire context size in 128B units */
+ write_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+ write_sa->w0.s.aop_valid = 1;
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->seq_num -= 1;
+ }
+
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+
+#ifdef LA_IPSEC_DEBUG
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+#else
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
+#endif
+
+ rte_wmb();
+
+ return 0;
+}
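The DTLS 1.2 branch above packs the 16-bit epoch into the top of the 64-bit sequence field, ORs in the 48-bit sequence number, then subtracts one (the hardware increments before use, as inferred from the patch). The packing in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* seq_num = (epoch << 48) | (seq_no & 2^48-1), minus one because the
 * engine pre-increments; behaviour mirrored from tls_write_sa_fill(). */
static uint64_t dtls12_seq_init(uint16_t epoch, uint64_t seq_no)
{
	return (((uint64_t)epoch << 48) | (seq_no & 0xffffffffffffULL)) - 1;
}
```

So epoch 1 / sequence 1 initialises the field to exactly 1 << 48, and the next record sent uses epoch 1, sequence 1.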
+
+static int
+cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_read_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *read_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ read_sa = &tls->read_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Could not allocate memory for SA DPTR");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill read session parameters");
+ goto sa_dptr_free;
+ }
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+ }
+
+ if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
+ sec_sess->tls.hdr_len = 13;
+ else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
+ sec_sess->tls.hdr_len = 5;
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* Enable MIB counters */
+ sa_dptr->w0.s.count_mib_bytes = 1;
+ sa_dptr->w0.s.count_mib_pkts = 1;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
+
+ memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(read_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, read_sa, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (ret) {
+ plt_err("Could not write read session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, read_sa, true);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+static int
+cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn10k_sec_session *sec_sess)
+{
+ struct roc_ie_ot_tls_write_sa *sa_dptr;
+ struct cn10k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *write_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ write_sa = &tls->write_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Could not allocate memory for SA DPTR");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_write_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill write session parameters");
+ goto sa_dptr_free;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->next->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
+ }
+
+ sec_sess->tls.is_write = true;
+ sec_sess->tls.enable_padding = tls_xfrm->options.extra_padding_enable;
+ sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
+
+ memset(write_sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(write_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, write_sa, sizeof(struct roc_ie_ot_tls_write_sa));
+ if (ret) {
+ plt_err("Could not write TLS write session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, write_sa, false);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+int
+cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess)
+{
+ struct roc_cpt *roc_cpt;
+ int ret;
+
+ ret = cnxk_tls_xform_verify(tls_xfrm, crypto_xfrm);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+ return cn10k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+ else
+ return cn10k_tls_write_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn10k_sec_session *)sess);
+}
+
+int
+cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
+{
+ struct cn10k_tls_record *tls;
+ struct roc_cpt_lf *lf;
+ void *sa_dptr = NULL;
+ int ret;
+
+ lf = &qp->lf;
+
+ tls = &sess->tls_rec;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, &tls->read_sa, false);
+
+ ret = -1;
+
+ if (sess->tls.is_write) {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_write_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->write_sa,
+ sizeof(struct roc_ie_ot_tls_write_sa));
+ }
+ } else {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_read_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->read_sa,
+ sizeof(struct roc_ie_ot_tls_read_sa));
+ }
+ }
+
+ plt_free(sa_dptr);
+
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &tls->read_sa);
+ }
+
+ return 0;
+}
diff --git a/drivers/crypto/cnxk/cn10k_tls.h b/drivers/crypto/cnxk/cn10k_tls.h
new file mode 100644
index 0000000000..19772655da
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef __CN10K_TLS_H__
+#define __CN10K_TLS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie_ot_tls.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+
+/* Forward declaration */
+struct cn10k_sec_session;
+
+struct cn10k_tls_record {
+ union {
+ /** Read SA */
+ struct roc_ie_ot_tls_read_sa read_sa;
+ /** Write SA */
+ struct roc_ie_ot_tls_write_sa write_sa;
+ };
+} __rte_aligned(ROC_ALIGN);
+
+int cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+
+int cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+
+#endif /* __CN10K_TLS_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index d6fafd43d9..ee0c65e32a 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files(
'cn10k_cryptodev_ops.c',
'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
+ 'cn10k_tls.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
* [PATCH v3 16/24] crypto/cnxk: add TLS record datapath handling
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS record handling in datapath.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 57 +++-
drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 7 +
drivers/crypto/cnxk/cn10k_tls_ops.h | 322 ++++++++++++++++++++++
3 files changed, 380 insertions(+), 6 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 084c8d3a24..843a111b0e 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -20,11 +20,14 @@
#include "roc_sso_dp.h"
#include "cn10k_cryptodev.h"
-#include "cn10k_cryptodev_ops.h"
#include "cn10k_cryptodev_event_dp.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn10k_cryptodev_sec.h"
#include "cn10k_eventdev.h"
#include "cn10k_ipsec.h"
#include "cn10k_ipsec_la_ops.h"
+#include "cn10k_tls.h"
+#include "cn10k_tls_ops.h"
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -101,6 +104,18 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
+{
+ if (sess->tls.is_write)
+ return process_tls_write(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
+ is_sg_ver2);
+ else
+ return process_tls_read(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2);
+}
+
static __rte_always_inline int __rte_hot
cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k_sec_session *sess,
struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
@@ -108,6 +123,8 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn10k
if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cpt_sec_tls_inst_fill(qp, op, sess, &inst[0], infl_req, is_sg_ver2);
return 0;
}
@@ -812,7 +829,7 @@ cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16
}
static inline void
-cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+cn10k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
{
struct rte_mbuf *mbuf = cop->sym->m_src;
const uint16_t m_len = res->rlen;
@@ -849,10 +866,38 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
}
static inline void
-cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
- struct rte_crypto_op *cop,
- struct cpt_inflight_req *infl_req,
- struct cpt_cn10k_res_s *res)
+cn10k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ const uint16_t m_len = res->rlen;
+
+ if (!res->uc_compcode) {
+ if (mbuf->next == NULL)
+ mbuf->data_len = m_len;
+ mbuf->pkt_len = m_len;
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ plt_err("crypto op failed with UC compcode: 0x%x", res->uc_compcode);
+ }
+}
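cn10k_cpt_tls_post_process() above sets the mbuf lengths from the returned `rlen` on success and surfaces the microcode completion code via `aux_flags` on failure. A standalone sketch of that control flow (a plain struct stands in for `rte_mbuf`/`rte_crypto_op`; status codes are placeholders):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the op/mbuf pair touched by post-processing. */
struct fake_pkt { uint16_t data_len, pkt_len; int status, aux; };

static void tls_post_process(struct fake_pkt *p, uint8_t uc_compcode,
			     uint16_t rlen, int single_seg)
{
	if (uc_compcode == 0) {
		/* data_len is only updated for single-segment mbufs. */
		if (single_seg)
			p->data_len = rlen;
		p->pkt_len = rlen;
		p->status = 0;  /* success */
	} else {
		p->status = -1; /* RTE_CRYPTO_OP_STATUS_ERROR analogue */
		p->aux = uc_compcode;
	}
}

static struct fake_pkt g_pkt;

static uint16_t run_ok(uint16_t rlen)
{
	g_pkt = (struct fake_pkt){0};
	tls_post_process(&g_pkt, 0, rlen, 1);
	return g_pkt.pkt_len;
}

static int run_err(uint8_t code)
{
	g_pkt = (struct fake_pkt){0};
	tls_post_process(&g_pkt, code, 100, 1);
	return g_pkt.aux;
}
```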
+
+static inline void
+cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct cn10k_sec_session *sess;
+
+ sess = sym_op->session;
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ cn10k_cpt_ipsec_post_process(cop, res);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ cn10k_cpt_tls_post_process(cop, res);
+}
+
+static inline void
+cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
+ struct cpt_inflight_req *infl_req, struct cpt_cn10k_res_s *res)
{
const uint8_t uc_compcode = res->uc_compcode;
const uint8_t compcode = res->compcode;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
index 12e53f18db..cb013986c4 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.c
@@ -32,6 +32,10 @@ cn10k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
return cn10k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
}
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_tls_record_session_create(vf, qp, &conf->tls_record,
+ conf->crypto_xform, sess);
+
return -ENOTSUP;
}
@@ -54,6 +58,9 @@ cn10k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cn10k_sec_ipsec_session_destroy(qp, cn10k_sec_sess);
+ if (cn10k_sec_sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn10k_sec_tls_session_destroy(qp, cn10k_sec_sess);
+
return -EINVAL;
}
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
new file mode 100644
index 0000000000..7c8ac14ab2
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -0,0 +1,322 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef __CN10K_TLS_OPS_H__
+#define __CN10K_TLS_OPS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie.h"
+
+#include "cn10k_cryptodev.h"
+#include "cn10k_cryptodev_sec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
+
+static __rte_always_inline int
+process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+#ifdef LA_IPSEC_DEBUG
+ struct roc_ie_ot_tls_write_sa *write_sa;
+#endif
+ struct rte_mbuf *m_src = sym_op->m_src;
+ struct rte_mbuf *last_seg;
+ union cpt_inst_w4 w4;
+ void *m_data = NULL;
+ uint8_t *in_buffer;
+
+#ifdef LA_IPSEC_DEBUG
+ write_sa = &sess->tls_rec.write_sa;
+ if (write_sa->w2.s.iv_at_cptr == ROC_IE_OT_TLS_IV_SRC_FROM_SA) {
+
+ uint8_t *iv = PLT_PTR_ADD(write_sa->cipher_key, 32);
+
+ if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_GCM) {
+ uint32_t *tmp;
+
+ /* For GCM, the IV and salt format will be like below:
+ * iv[0-3]: lower bytes of IV in BE format.
+ * iv[4-7]: salt / nonce.
+ * iv[12-15]: upper bytes of IV in BE format.
+ */
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 4);
+ tmp = (uint32_t *)iv;
+ *tmp = rte_be_to_cpu_32(*tmp);
+
+ memcpy(iv + 12,
+ rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset + 4), 4);
+ tmp = (uint32_t *)(iv + 12);
+ *tmp = rte_be_to_cpu_32(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_CBC) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 16);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ tmp = (uint64_t *)(iv + 8);
+ *tmp = rte_be_to_cpu_64(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_3DES) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 8);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ }
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, write_sa);
+ rte_delay_ms(1);
+ }
+#else
+ RTE_SET_USED(lf);
+#endif
+ /* Single buffer direct mode */
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room");
+ return -ENOMEM;
+ }
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.param1 = m_src->data_len;
+ w4.s.dlen = m_src->data_len;
+
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+ w4.s.param1 = w4.s.dlen;
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst, const bool is_sg_ver2)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ union cpt_inst_w4 w4;
+ uint8_t *in_buffer;
+ void *m_data;
+
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = m_src->data_len;
+ w4.s.param1 = m_src->data_len;
+ inst->w4.u64 = w4.u64;
+ } else if (is_sg_ver2 == false) {
+ struct roc_sglist_comp *scatter_comp, *gather_comp;
+ uint32_t g_size_bytes, s_size_bytes;
+ uint32_t dlen;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ ((uint16_t *)in_buffer)[0] = 0;
+ ((uint16_t *)in_buffer)[1] = 0;
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sglist_comp *)((uint8_t *)in_buffer + 8);
+
+ i = fill_sg_comp_from_pkt(gather_comp, i, m_src);
+ ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i);
+
+ g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg_comp_from_pkt(scatter_comp, i, m_src);
+ ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i);
+
+ s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+
+ dlen = g_size_bytes + s_size_bytes + ROC_SG_LIST_HDR_SIZE;
+
+ inst->dptr = (uint64_t)in_buffer;
+ inst->rptr = (uint64_t)in_buffer;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = dlen;
+ w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
+ w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.param1 = w4.s.dlen;
+ w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+#endif /* __CN10K_TLS_OPS_H__ */
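
For reference, the AES-GCM IV handling in the verify path above can be exercised in isolation. A minimal sketch, assuming a little-endian host and using a plain byte swap in place of rte_be_to_cpu_32(); gcm_iv_to_sa() and its names are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Plain 32-bit byte swap; stands in for rte_be_to_cpu_32() on a
 * little-endian host. */
static inline uint32_t bswap32(uint32_t v)
{
	return ((v & 0xff000000u) >> 24) | ((v & 0x00ff0000u) >> 8) |
	       ((v & 0x0000ff00u) << 8) | ((v & 0x000000ffu) << 24);
}

/* Hypothetical helper mirroring the GCM branch above: the 8-byte wire
 * IV is split into two 4-byte words, each byte-swapped from big-endian,
 * and written into the SA IV field at offsets 0 and 12. Offsets 4-7
 * hold the salt/nonce owned by the session, so they are left untouched
 * here. */
static void gcm_iv_to_sa(uint8_t sa_iv[16], const uint8_t wire_iv[8])
{
	uint32_t w;

	memcpy(&w, wire_iv, 4);
	w = bswap32(w);
	memcpy(sa_iv, &w, 4);

	memcpy(&w, wire_iv + 4, 4);
	w = bswap32(w);
	memcpy(sa_iv + 12, &w, 4);
}
```

On a little-endian host, wire bytes 00 01 02 03 land in the SA as 03 02 01 00.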
--
2.25.1
* [PATCH v3 17/24] crypto/cnxk: add TLS capability
@ 2024-01-17 10:31 ` Anoob Joseph
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.2 and DTLS 1.2 record read and write capabilities.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 2 +
drivers/common/cnxk/hw/cpt.h | 3 +-
drivers/crypto/cnxk/cnxk_cryptodev.h | 12 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 210 ++++++++++++++++++
4 files changed, 223 insertions(+), 4 deletions(-)
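
The capability population helpers in this patch follow a bounded-append pattern: each sub-array is copied at the current cursor after a limit check (PLT_VERIFY in the driver). A self-contained sketch of that pattern, with a plain int standing in for struct rte_cryptodev_capabilities and an error return instead of PLT_VERIFY; all names here are illustrative:

```c
#include <assert.h>
#include <string.h>

typedef int cap_t; /* stand-in for struct rte_cryptodev_capabilities */

/* Append nb_caps entries at *cur_pos, refusing to overflow the table.
 * The driver asserts with PLT_VERIFY instead of returning an error. */
static int caps_add(cap_t dst[], int dst_max, int *cur_pos,
		    const cap_t *src, int nb_caps)
{
	if (*cur_pos + nb_caps > dst_max)
		return -1;
	memcpy(&dst[*cur_pos], src, nb_caps * sizeof(src[0]));
	*cur_pos += nb_caps;
	return 0;
}
```

Filling a 6-entry table from aes (2) + des (1) + sha1_sha2 (2) + end marker (1) mirrors how CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS is sized at 6.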
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index eb63728cfd..1fd87500ab 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,6 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
+ and DTLS 1.2.
Removed Items
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index edab8a5d83..2620965606 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -80,7 +80,8 @@ union cpt_eng_caps {
uint64_t __io sg_ver2 : 1;
uint64_t __io sm2 : 1;
uint64_t __io pdcp_chain_zuc256 : 1;
- uint64_t __io reserved_38_63 : 26;
+ uint64_t __io tls : 1;
+ uint64_t __io reserved_39_63 : 25;
};
};
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 6f21d91812..45d01b94b3 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,9 +11,11 @@
#include "roc_ae.h"
#include "roc_cpt.h"
-#define CNXK_CPT_MAX_CAPS 55
-#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
-#define CNXK_SEC_MAX_CAPS 9
+#define CNXK_CPT_MAX_CAPS 55
+#define CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS 16
+#define CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS 2
+#define CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS 6
+#define CNXK_SEC_MAX_CAPS 17
/**
* Device private data
@@ -25,6 +27,10 @@ struct cnxk_cpt_vf {
struct roc_cpt cpt;
struct rte_cryptodev_capabilities crypto_caps[CNXK_CPT_MAX_CAPS];
struct rte_cryptodev_capabilities sec_ipsec_crypto_caps[CNXK_SEC_IPSEC_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_3_crypto_caps[CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities sec_tls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
+ struct rte_cryptodev_capabilities
+ sec_dtls_1_2_crypto_caps[CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS];
struct rte_security_capability sec_caps[CNXK_SEC_MAX_CAPS];
uint64_t cnxk_fpm_iova[ROC_AE_EC_ID_PMAX];
struct roc_ae_ec_group *ec_grp[ROC_AE_EC_ID_PMAX];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 178f510a63..73100377d9 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -30,6 +30,16 @@
RTE_DIM(sec_ipsec_caps_##name)); \
} while (0)
+#define SEC_TLS12_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls12_caps_add(cnxk_caps, cur_pos, \
+ sec_tls12_caps_##name, \
+ RTE_DIM(sec_tls12_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1502,6 +1512,125 @@ static const struct rte_cryptodev_capabilities sec_ipsec_caps_null[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls12_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 13,
+ .max = 13,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_des[] = {
+ { /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
+ { /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ { /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+};
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1591,6 +1720,46 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* DTLS 1.2 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_DTLS_1_2,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -1807,6 +1976,35 @@ cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
sec_cap->ipsec.options.esn = 1;
}
+static void
+sec_tls12_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_2_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls12_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls12_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
+ SEC_TLS12_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
+
+ sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -1815,6 +2013,11 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
crypto_caps_populate(vf->crypto_caps, vf->cpt.hw_caps);
sec_ipsec_crypto_caps_populate(vf->sec_ipsec_crypto_caps, vf->cpt.hw_caps);
+ if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
+ sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ }
+
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
memcpy(vf->sec_caps, sec_caps_templ, sizeof(sec_caps_templ));
@@ -1830,6 +2033,13 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (roc_model_is_cn9k())
cn9k_sec_ipsec_caps_update(&vf->sec_caps[i]);
+ } else if (vf->sec_caps[i].protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD) {
+ if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_TLS_1_3)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_3_crypto_caps;
+ else if (vf->sec_caps[i].tls_record.ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ vf->sec_caps[i].crypto_capabilities = vf->sec_dtls_1_2_crypto_caps;
+ else
+ vf->sec_caps[i].crypto_capabilities = vf->sec_tls_1_2_crypto_caps;
}
}
}
--
2.25.1
* [PATCH v3 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT
@ 2024-01-17 10:31 ` Anoob Joseph
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
Add PMD APIs to allow applications to directly submit CPT instructions
to hardware.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/rel_notes/release_24_03.rst | 1 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 75 ++++++++---------
drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 3 +
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 56 -------------
drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 99 +++++++++++++++++++++++
drivers/crypto/cnxk/meson.build | 2 +-
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +++++++++++
10 files changed, 252 insertions(+), 94 deletions(-)
create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
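
The cn9k raw-submission path in this patch drains an odd leading instruction with a single LMTST, then submits the rest in pairs. A minimal model of that split, with counters standing in for the real cn9k_cpt_inst_submit()/cn9k_cpt_inst_submit_dual() calls (assumption: only the loop structure is reproduced):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the cn9k submission loop: one single submit when nb_inst is
 * odd, then dual submits for the remaining pairs. The counters replace
 * the real LMTST submissions. */
static void submit_split(uint16_t nb_inst, int *singles, int *duals)
{
	*singles = 0;
	*duals = 0;

	if (nb_inst & 1) {
		(*singles)++;
		nb_inst -= 1;
	}

	while (nb_inst > 0) {
		(*duals)++;
		nb_inst -= 2;
	}
}
```

Handling the odd instruction first means the main loop always moves two cpt_inst_s at a time, which is what lets the dual path use the wide 128-bit copies.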
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..69f1a54511 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -49,6 +49,7 @@ The public API headers are grouped by topics:
[iavf](@ref rte_pmd_iavf.h),
[bnxt](@ref rte_pmd_bnxt.h),
[cnxk](@ref rte_pmd_cnxk.h),
+ [cnxk_crypto](@ref rte_pmd_cnxk_crypto.h),
[cnxk_eventdev](@ref rte_pmd_cnxk_eventdev.h),
[cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
[dpaa](@ref rte_pmd_dpaa.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..6d11de580e 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -6,6 +6,7 @@ PROJECT_NUMBER = @VERSION@
USE_MDFILE_AS_MAINPAGE = @TOPDIR@/doc/api/doxy-api-index.md
INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/drivers/bus/vdev \
+ @TOPDIR@/drivers/crypto/cnxk \
@TOPDIR@/drivers/crypto/scheduler \
@TOPDIR@/drivers/dma/dpaa2 \
@TOPDIR@/drivers/event/dlb2 \
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 1fd87500ab..8fc6e9fb6d 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -60,6 +60,7 @@ New Features
* Added support for Rx inject in crypto_cn10k.
* Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
and DTLS 1.2.
+ * Added PMD API to allow raw submission of instructions to CPT.
Removed Items
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 843a111b0e..9f4be20ff5 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -34,13 +34,12 @@
#include "cnxk_eventdev.h"
#include "cnxk_se.h"
-#define PKTS_PER_LOOP 32
-#define PKTS_PER_STEORL 16
+#include "rte_pmd_cnxk_crypto.h"
/* Holds information required to send crypto operations in one burst */
struct ops_burst {
- struct rte_crypto_op *op[PKTS_PER_LOOP];
- uint64_t w2[PKTS_PER_LOOP];
+ struct rte_crypto_op *op[CN10K_PKTS_PER_LOOP];
+ uint64_t w2[CN10K_PKTS_PER_LOOP];
struct cn10k_sso_hws *ws;
struct cnxk_cpt_qp *qp;
uint16_t nb_ops;
@@ -252,7 +251,7 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
infl_req = &pend_q->req_queue[head];
infl_req->op_flags = 0;
@@ -267,23 +266,21 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 |
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
(uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG |
- (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 |
- (uint64_t)lmt_id;
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
}
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
ops += i;
count += i;
@@ -487,7 +484,7 @@ cn10k_cpt_vec_submit(struct vec_request vec_tbl[], uint16_t vec_tbl_len, struct
inst = (struct cpt_inst_s *)lmt_base;
again:
- burst_size = RTE_MIN(PKTS_PER_STEORL, vec_tbl_len);
+ burst_size = RTE_MIN(CN10K_PKTS_PER_STEORL, vec_tbl_len);
for (i = 0; i < burst_size; i++)
cn10k_cpt_vec_inst_fill(&vec_tbl[i], &inst[i * 2], qp, vec_tbl[0].w7);
@@ -516,7 +513,7 @@ static inline int
ca_lmtst_vec_submit(struct ops_burst *burst, struct vec_request vec_tbl[], uint16_t *vec_tbl_len,
const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
uint16_t lmt_id, len = *vec_tbl_len;
struct cpt_inst_s *inst, *inst_base;
@@ -618,11 +615,12 @@ next_op:;
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -647,7 +645,7 @@ next_op:;
static inline uint16_t
ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
{
- struct cpt_inflight_req *infl_reqs[PKTS_PER_LOOP];
+ struct cpt_inflight_req *infl_reqs[CN10K_PKTS_PER_LOOP];
uint64_t lmt_base, lmt_arg, io_addr;
struct cpt_inst_s *inst, *inst_base;
struct cpt_inflight_req *infl_req;
@@ -718,11 +716,12 @@ ca_lmtst_burst_submit(struct ops_burst *burst, const bool is_sg_ver2)
if (CNXK_TT_FROM_TAG(burst->ws->gw_rdata) == SSO_TT_ORDERED)
roc_sso_hws_head_wait(burst->ws->base);
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -791,7 +790,7 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev
burst.op[burst.nb_ops] = op;
/* Max nb_ops per burst check */
- if (++burst.nb_ops == PKTS_PER_LOOP) {
+ if (++burst.nb_ops == CN10K_PKTS_PER_LOOP) {
if (is_vector)
submitted = ca_lmtst_vec_submit(&burst, vec_tbl, &vec_tbl_len,
is_sg_ver2);
@@ -1146,7 +1145,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
again:
inst = (struct cpt_inst_s *)lmt_base;
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_pkts); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_pkts); i++) {
m = pkts[i];
sec_sess = (struct cn10k_sec_session *)sess[i];
@@ -1193,11 +1192,12 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
inst += 2;
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1206,7 +1206,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
rte_io_wmb();
- if (nb_pkts - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_pkts - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_pkts -= i;
pkts += i;
count += i;
@@ -1333,7 +1333,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
goto pend_q_commit;
}
- for (i = 0; i < RTE_MIN(PKTS_PER_LOOP, nb_ops); i++) {
+ for (i = 0; i < RTE_MIN(CN10K_PKTS_PER_LOOP, nb_ops); i++) {
struct cnxk_iov iov;
index = count + i;
@@ -1355,11 +1355,12 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
pending_queue_advance(&head, pq_mask);
}
- if (i > PKTS_PER_STEORL) {
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (PKTS_PER_STEORL - 1) << 12 | (uint64_t)lmt_id;
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
roc_lmt_submit_steorl(lmt_arg, io_addr);
- lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - PKTS_PER_STEORL - 1) << 12 |
- (uint64_t)(lmt_id + PKTS_PER_STEORL);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
roc_lmt_submit_steorl(lmt_arg, io_addr);
} else {
lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
@@ -1368,7 +1369,7 @@ cn10k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym
rte_io_wmb();
- if (nb_ops - i > 0 && i == PKTS_PER_LOOP) {
+ if (nb_ops - i > 0 && i == CN10K_PKTS_PER_LOOP) {
nb_ops -= i;
count += i;
goto again;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index 34becede3c..406c4abc7f 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -12,6 +12,9 @@
#include "cnxk_cryptodev.h"
+#define CN10K_PKTS_PER_LOOP 32
+#define CN10K_PKTS_PER_STEORL 16
+
extern struct rte_cryptodev_ops cn10k_cpt_ops;
void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 442cd8e5a9..ac9393eacf 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -122,62 +122,6 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
-static inline void
-cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy CPT command to LMTLINE */
- roc_lmt_mov64((void *)lmtline, inst);
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
-static __plt_always_inline void
-cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline,
- uint64_t io_addr)
-{
- uint64_t lmt_status;
-
- do {
- /* Copy 2 CPT inst_s to LMTLINE */
-#if defined(RTE_ARCH_ARM64)
- uint64_t *s = (uint64_t *)inst;
- uint64_t *d = (uint64_t *)lmtline;
-
- vst1q_u64(&d[0], vld1q_u64(&s[0]));
- vst1q_u64(&d[2], vld1q_u64(&s[2]));
- vst1q_u64(&d[4], vld1q_u64(&s[4]));
- vst1q_u64(&d[6], vld1q_u64(&s[6]));
- vst1q_u64(&d[8], vld1q_u64(&s[8]));
- vst1q_u64(&d[10], vld1q_u64(&s[10]));
- vst1q_u64(&d[12], vld1q_u64(&s[12]));
- vst1q_u64(&d[14], vld1q_u64(&s[14]));
-#else
- roc_lmt_mov_seg((void *)lmtline, inst, 8);
-#endif
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = roc_lmt_submit_ldeor(io_addr);
- } while (lmt_status == 0);
-}
-
static uint16_t
cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
{
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index c6ec96153e..3d667094f3 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -8,8 +8,70 @@
#include <rte_compat.h>
#include <cryptodev_pmd.h>
+#include <hw/cpt.h>
+
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
+
extern struct rte_cryptodev_ops cn9k_cpt_ops;
+static inline void
+cn9k_cpt_inst_submit(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy CPT command to LMTLINE */
+ roc_lmt_mov64((void *)lmtline, inst);
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
+static __plt_always_inline void
+cn9k_cpt_inst_submit_dual(struct cpt_inst_s *inst, uint64_t lmtline, uint64_t io_addr)
+{
+ uint64_t lmt_status;
+
+ do {
+ /* Copy 2 CPT inst_s to LMTLINE */
+#if defined(RTE_ARCH_ARM64)
+ volatile const __uint128_t *src128 = (const __uint128_t *)inst;
+ volatile __uint128_t *dst128 = (__uint128_t *)lmtline;
+
+ dst128[0] = src128[0];
+ dst128[1] = src128[1];
+ dst128[2] = src128[2];
+ dst128[3] = src128[3];
+ dst128[4] = src128[4];
+ dst128[5] = src128[5];
+ dst128[6] = src128[6];
+ dst128[7] = src128[7];
+#else
+ roc_lmt_mov_seg((void *)lmtline, inst, 8);
+#endif
+
+ /*
+ * Make sure compiler does not reorder memcpy and ldeor.
+ * LMTST transactions are always flushed from the write
+ * buffer immediately, a DMB is not required to push out
+ * LMTSTs.
+ */
+ rte_io_wmb();
+ lmt_status = roc_lmt_submit_ldeor(io_addr);
+ } while (lmt_status == 0);
+}
+
void cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
__rte_internal
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 04dbc67fc1..1dd1dbac9a 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -12,6 +12,11 @@
#include "roc_errata.h"
#include "roc_idev.h"
#include "roc_ie_on.h"
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
@@ -19,6 +24,11 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_se.h"
+#include "cn10k_cryptodev_ops.h"
+#include "cn9k_cryptodev_ops.h"
+
+#include "rte_pmd_cnxk_crypto.h"
+
#define CNXK_CPT_MAX_ASYM_OP_NUM_PARAMS 5
#define CNXK_CPT_MAX_ASYM_OP_MOD_LEN 1024
#define CNXK_CPT_META_BUF_MAX_CACHE_SIZE 128
@@ -918,3 +928,92 @@ cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id)
}
return 0;
}
+
+void *
+rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id)
+{
+ const struct rte_crypto_fp_ops *fp_ops;
+ void *qptr;
+
+ fp_ops = &rte_crypto_fp_ops[dev_id];
+ qptr = fp_ops->qp.data[qp_id];
+
+ return qptr;
+}
+
+static inline void
+cnxk_crypto_cn10k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ uint64_t lmt_base, lmt_arg, io_addr;
+ struct cnxk_cpt_qp *qp = qptr;
+ uint16_t i, j, lmt_id;
+ void *lmt_dst;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+
+again:
+ i = RTE_MIN(nb_inst, CN10K_PKTS_PER_LOOP);
+ lmt_dst = PLT_PTR_CAST(lmt_base);
+
+ for (j = 0; j < i; j++) {
+ rte_memcpy(lmt_dst, inst, sizeof(struct cpt_inst_s));
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ lmt_dst = RTE_PTR_ADD(lmt_dst, 2 * sizeof(struct cpt_inst_s));
+ }
+
+ rte_io_wmb();
+
+ if (i > CN10K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - CN10K_PKTS_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN10K_PKTS_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ } else {
+ lmt_arg = ROC_CN10K_CPT_LMT_ARG | (i - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, io_addr);
+ }
+
+ rte_io_wmb();
+
+ if (nb_inst - i > 0) {
+ nb_inst -= i;
+ goto again;
+ }
+}
+
+static inline void
+cnxk_crypto_cn9k_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ struct cnxk_cpt_qp *qp = qptr;
+
+ const uint64_t lmt_base = qp->lf.lmt_base;
+ const uint64_t io_addr = qp->lf.io_addr;
+
+ if (unlikely(nb_inst & 1)) {
+ cn9k_cpt_inst_submit(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, sizeof(struct cpt_inst_s));
+ nb_inst -= 1;
+ }
+
+ while (nb_inst > 0) {
+ cn9k_cpt_inst_submit_dual(inst, lmt_base, io_addr);
+ inst = RTE_PTR_ADD(inst, 2 * sizeof(struct cpt_inst_s));
+ nb_inst -= 2;
+ }
+}
+
+void
+rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst)
+{
+ if (roc_model_is_cn10k())
+ return cnxk_crypto_cn10k_submit(qptr, inst, nb_inst);
+ else if (roc_model_is_cn9k())
+ return cnxk_crypto_cn9k_submit(qptr, inst, nb_inst);
+
+ plt_err("Invalid cnxk model");
+}
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index ee0c65e32a..aa840fb7bb 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -24,8 +24,8 @@ sources = files(
'cnxk_cryptodev_sec.c',
)
+headers = files('rte_pmd_cnxk_crypto.h')
deps += ['bus_pci', 'common_cnxk', 'security', 'eventdev']
-
includes += include_directories('../../../lib/net', '../../event/cnxk')
if get_option('buildtype').contains('debug')
diff --git a/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
new file mode 100644
index 0000000000..8b0a5ba0f2
--- /dev/null
+++ b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+/**
+ * @file rte_pmd_cnxk_crypto.h
+ * Marvell CNXK Crypto PMD specific functions.
+ *
+ */
+
+#ifndef _PMD_CNXK_CRYPTO_H_
+#define _PMD_CNXK_CRYPTO_H_
+
+#include <stdint.h>
+
+/**
+ * Get queue pointer of a specific queue in a cryptodev.
+ *
+ * @param dev_id
+ * Device identifier of cryptodev device.
+ * @param qp_id
+ * Index of the queue pair.
+ * @return
+ * Pointer to queue pair structure that would be the input to submit APIs.
+ */
+void *rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id);
+
+/**
+ * Submit CPT instruction (cpt_inst_s) to hardware (CPT).
+ *
+ * The ``qptr`` is a pointer obtained from ``rte_pmd_cnxk_crypto_qptr_get``. The application must
+ * ensure the internal hardware queues are not overflowed, for example by keeping the number of
+ * inflight packets within the number of descriptors configured.
+ *
+ * This API may be called only after the cryptodev and queue pair are configured and started.
+ *
+ * @param qptr
+ * Pointer obtained with ``rte_pmd_cnxk_crypto_qptr_get``.
+ * @param inst
+ * Pointer to an array of instructions prepared by application.
+ * @param nb_inst
+ * Number of instructions.
+ */
+void rte_pmd_cnxk_crypto_submit(void *qptr, void *inst, uint16_t nb_inst);
+
+#endif /* _PMD_CNXK_CRYPTO_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [PATCH v3 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (17 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
` (5 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Replace the PDCP opcode with the PDCP chain opcode for
cipher-only and auth-only cases.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/common/cnxk/roc_se.c | 331 +++++++++-------------------------
drivers/common/cnxk/roc_se.h | 18 +-
drivers/crypto/cnxk/cnxk_se.h | 96 +++++-----
3 files changed, 135 insertions(+), 310 deletions(-)
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 6ced4ef789..4e00268149 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -88,13 +88,20 @@ cpt_ciph_type_set(roc_se_cipher_type type, struct roc_se_ctx *ctx, uint16_t key_
fc_type = ROC_SE_FC_GEN;
break;
case ROC_SE_ZUC_EEA3:
- if (chained_op) {
- if (unlikely(key_len != 16))
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
return -1;
+ }
+ }
+ if (chained_op)
fc_type = ROC_SE_PDCP_CHAIN;
- } else {
+ else
fc_type = ROC_SE_PDCP;
- }
break;
case ROC_SE_SNOW3G_UEA2:
if (unlikely(key_len != 16))
@@ -197,33 +204,6 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
}
}
-static int
-cpt_pdcp_key_type_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t key_len)
-{
- roc_se_aes_type key_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (key_len != 16) {
- plt_err("Only key len 16 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (key_len) {
- case 16:
- key_type = ROC_SE_AES_128_BIT;
- break;
- case 32:
- key_type = ROC_SE_AES_256_BIT;
- break;
- default:
- plt_err("Invalid AES key len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.key_len = key_type;
- return 0;
-}
-
static int
cpt_pdcp_chain_key_type_get(uint16_t key_len)
{
@@ -247,36 +227,6 @@ cpt_pdcp_chain_key_type_get(uint16_t key_len)
return key_type;
}
-static int
-cpt_pdcp_mac_len_set(struct roc_se_zuc_snow3g_ctx *zs_ctx, uint16_t mac_len)
-{
- roc_se_pdcp_mac_len_type mac_type = 0;
-
- if (roc_model_is_cn9k()) {
- if (mac_len != 4) {
- plt_err("Only mac len 4 is supported on cn9k");
- return -ENOTSUP;
- }
- }
-
- switch (mac_len) {
- case 4:
- mac_type = ROC_SE_PDCP_MAC_LEN_32_BIT;
- break;
- case 8:
- mac_type = ROC_SE_PDCP_MAC_LEN_64_BIT;
- break;
- case 16:
- mac_type = ROC_SE_PDCP_MAC_LEN_128_BIT;
- break;
- default:
- plt_err("Invalid ZUC MAC len");
- return -ENOTSUP;
- }
- zs_ctx->zuc.otk_ctx.w0.s.mac_len = mac_type;
- return 0;
-}
-
static void
cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
{
@@ -300,32 +250,27 @@ cpt_zuc_const_update(uint8_t *zuc_const, int key_len, int mac_len)
}
int
-roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
- const uint8_t *key, uint16_t key_len, uint16_t mac_len)
+roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint8_t *key,
+ uint16_t key_len, uint16_t mac_len)
{
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
- struct roc_se_zuc_snow3g_ctx *zs_ctx;
struct roc_se_kasumi_ctx *k_ctx;
+ struct roc_se_pdcp_ctx *pctx;
struct roc_se_context *fctx;
uint8_t opcode_minor;
- uint8_t pdcp_alg;
bool chained_op;
- int ret;
if (se_ctx == NULL)
return -1;
- zs_ctx = &se_ctx->se_ctx.zs_ctx;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
+ pctx = &se_ctx->se_ctx.pctx;
k_ctx = &se_ctx->se_ctx.k_ctx;
fctx = &se_ctx->se_ctx.fctx;
chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
if ((type >= ROC_SE_ZUC_EIA3) && (type <= ROC_SE_KASUMI_F9_ECB)) {
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
if (!key_len)
return -1;
@@ -335,98 +280,64 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
return -1;
}
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
-
/* For ZUC/SNOW3G/Kasumi */
switch (type) {
case ROC_SE_SNOW3G_UIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.auth_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.mac_len = mac_len;
+ pctx->w0.s.auth_key_len = key_len;
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.auth_key, keyx, key_len);
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_ZUC_EIA3:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- ctx->w0.s.mac_len = mac_len;
- ctx->w0.s.auth_key_len = key_len;
- memcpy(ctx->st.auth_key, key, key_len);
- cpt_zuc_const_update(ctx->st.auth_zuc_const,
- key_len, mac_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- ret = cpt_pdcp_mac_len_set(zs_ctx, mac_len);
- if (ret)
- return ret;
- memcpy(ci_key, key, key_len);
- if (key_len == 32)
- roc_se_zuc_bytes_swap(ci_key, key_len);
- cpt_zuc_const_update(zuc_const, key_len,
- mac_len);
- se_ctx->fc_type = ROC_SE_PDCP;
+ if (unlikely(key_len != 16)) {
+ /*
+ * ZUC 256 is not supported with older microcode
+ * where pdcp_iv_offset is 16
+ */
+ if (chained_op || (se_ctx->pdcp_iv_offset == 16)) {
+ plt_err("ZUC 256 is not supported with chained operations or older microcode");
+ return -1;
+ }
}
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ if (key_len == 32)
+ roc_se_zuc_bytes_swap(pctx->st.auth_key, key_len);
+ cpt_zuc_const_update(pctx->st.auth_zuc_const, key_len, mac_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
+ se_ctx->fc_type = ROC_SE_PDCP;
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0x1;
break;
case ROC_SE_AES_CMAC_EIA2:
- if (chained_op) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.auth_key_len = key_type;
- ctx->w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.auth_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- ctx->w0.s.mac_len = mac_len;
- memcpy(ctx->st.auth_key, key, key_len);
- se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- zs_ctx->zuc.otk_ctx.w0.s.mac_len =
- ROC_SE_PDCP_MAC_LEN_32_BIT;
- memcpy(ci_key, key, key_len);
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.auth_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.auth_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ pctx->w0.s.mac_len = mac_len;
+ memcpy(pctx->st.auth_key, key, key_len);
+ se_ctx->fc_type = ROC_SE_PDCP_CHAIN;
+
+ if (!chained_op)
se_ctx->fc_type = ROC_SE_PDCP;
- }
se_ctx->pdcp_auth_alg = ROC_SE_PDCP_ALG_TYPE_AES_CMAC;
se_ctx->eia2 = 1;
se_ctx->zsk_flags = 0x1;
@@ -454,11 +365,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type,
se_ctx->mac_len = mac_len;
se_ctx->hash_type = type;
- pdcp_alg = zs_ctx->zuc.otk_ctx.w0.s.alg_type;
if (chained_op)
opcode_minor = se_ctx->ciph_then_auth ? 2 : 3;
- else if (roc_model_is_cn9k())
- opcode_minor = ((1 << 7) | (pdcp_alg << 5) | 1);
else
opcode_minor = ((1 << 4) | 1);
@@ -513,29 +421,18 @@ int
roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const uint8_t *key,
uint16_t key_len)
{
- bool chained_op = se_ctx->ciph_then_auth || se_ctx->auth_then_ciph;
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
struct roc_se_context *fctx = &se_ctx->se_ctx.fctx;
- struct roc_se_zuc_snow3g_chain_ctx *zs_ch_ctx;
+ struct roc_se_pdcp_ctx *pctx;
uint8_t opcode_minor = 0;
- uint8_t *zuc_const;
uint32_t keyx[4];
- uint8_t *ci_key;
+ int key_type;
int i, ret;
/* For NULL cipher, no processing required. */
if (type == ROC_SE_PASSTHROUGH)
return 0;
- zs_ch_ctx = &se_ctx->se_ctx.zs_ch_ctx;
-
- if (roc_model_is_cn9k()) {
- ci_key = zs_ctx->zuc.onk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.onk_ctx.zuc_const;
- } else {
- ci_key = zs_ctx->zuc.otk_ctx.ci_key;
- zuc_const = zs_ctx->zuc.otk_ctx.zuc_const;
- }
+ pctx = &se_ctx->se_ctx.pctx;
if ((type == ROC_SE_AES_GCM) || (type == ROC_SE_AES_CCM))
se_ctx->template_w4.s.opcode_minor = BIT(5);
@@ -615,72 +512,38 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
fctx->enc.enc_cipher = ROC_SE_DES3_CBC;
goto success;
case ROC_SE_SNOW3G_UEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ctx->st.ci_key, keyx, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_SNOW3G;
- cpt_snow3g_key_gen(key, keyx);
- memcpy(ci_key, keyx, key_len);
- }
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_SNOW3G;
+ pctx->w0.s.ci_key_len = key_len;
+ cpt_snow3g_key_gen(key, keyx);
+ memcpy(pctx->st.ci_key, keyx, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_SNOW3G;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_ZUC_EEA3:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- zs_ch_ctx->zuc.onk_ctx.w0.s.state_conf =
- ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- zs_ch_ctx->zuc.onk_ctx.w0.s.cipher_type =
- ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
- memcpy(ctx->st.ci_key, key, key_len);
- memcpy(ctx->st.ci_zuc_const, zuc_key128, 32);
- zs_ch_ctx->zuc.onk_ctx.w0.s.ci_key_len = key_len;
- } else {
- ret = cpt_pdcp_key_type_set(zs_ctx, key_len);
- if (ret)
- return ret;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_ZUC;
- memcpy(ci_key, key, key_len);
- if (key_len == 32) {
- roc_se_zuc_bytes_swap(ci_key, key_len);
- memcpy(zuc_const, zuc_key256, 16);
- } else
- memcpy(zuc_const, zuc_key128, 32);
- }
-
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_CHAIN_ALG_TYPE_ZUC;
+ memcpy(pctx->st.ci_key, key, key_len);
+ if (key_len == 32) {
+ roc_se_zuc_bytes_swap(pctx->st.ci_key, key_len);
+ memcpy(pctx->st.ci_zuc_const, zuc_key256, 16);
+ } else
+ memcpy(pctx->st.ci_zuc_const, zuc_key128, 32);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_ZUC;
se_ctx->zsk_flags = 0;
goto success;
case ROC_SE_AES_CTR_EEA2:
- if (chained_op == true) {
- struct roc_se_onk_zuc_chain_ctx *ctx =
- &zs_ch_ctx->zuc.onk_ctx;
- int key_type;
- key_type = cpt_pdcp_chain_key_type_get(key_len);
- if (key_type < 0)
- return key_type;
- ctx->w0.s.ci_key_len = key_type;
- ctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
- ctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ctx->st.ci_key, key, key_len);
- } else {
- zs_ctx->zuc.otk_ctx.w0.s.key_len = ROC_SE_AES_128_BIT;
- zs_ctx->zuc.otk_ctx.w0.s.alg_type =
- ROC_SE_PDCP_ALG_TYPE_AES_CTR;
- memcpy(ci_key, key, key_len);
- }
+ key_type = cpt_pdcp_chain_key_type_get(key_len);
+ if (key_type < 0)
+ return key_type;
+ pctx->w0.s.ci_key_len = key_type;
+ pctx->w0.s.state_conf = ROC_SE_PDCP_CHAIN_CTX_KEY_IV;
+ pctx->w0.s.cipher_type = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
+ memcpy(pctx->st.ci_key, key, key_len);
se_ctx->pdcp_ci_alg = ROC_SE_PDCP_ALG_TYPE_AES_CTR;
se_ctx->zsk_flags = 0;
goto success;
@@ -720,20 +583,6 @@ roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type type, const ui
return 0;
}
-void
-roc_se_ctx_swap(struct roc_se_ctx *se_ctx)
-{
- struct roc_se_zuc_snow3g_ctx *zs_ctx = &se_ctx->se_ctx.zs_ctx;
-
- if (roc_model_is_cn9k())
- return;
-
- if (se_ctx->fc_type == ROC_SE_PDCP_CHAIN)
- return;
-
- zs_ctx->zuc.otk_ctx.w0.u64 = htobe64(zs_ctx->zuc.otk_ctx.w0.u64);
-}
-
void
roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
{
@@ -745,15 +594,13 @@ roc_se_ctx_init(struct roc_se_ctx *roc_se_ctx)
case ROC_SE_FC_GEN:
ctx_len = sizeof(struct roc_se_context);
break;
+ case ROC_SE_PDCP_CHAIN:
case ROC_SE_PDCP:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_ctx);
+ ctx_len = sizeof(struct roc_se_pdcp_ctx);
break;
case ROC_SE_KASUMI:
ctx_len = sizeof(struct roc_se_kasumi_ctx);
break;
- case ROC_SE_PDCP_CHAIN:
- ctx_len = sizeof(struct roc_se_zuc_snow3g_chain_ctx);
- break;
case ROC_SE_SM:
ctx_len = sizeof(struct roc_se_sm_context);
break;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index abb8c6a149..d62c40b310 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -246,7 +246,7 @@ struct roc_se_onk_zuc_ctx {
uint8_t zuc_const[32];
};
-struct roc_se_onk_zuc_chain_ctx {
+struct roc_se_pdcp_ctx {
union {
uint64_t u64;
struct {
@@ -278,19 +278,6 @@ struct roc_se_onk_zuc_chain_ctx {
} st;
};
-struct roc_se_zuc_snow3g_chain_ctx {
- union {
- struct roc_se_onk_zuc_chain_ctx onk_ctx;
- } zuc;
-};
-
-struct roc_se_zuc_snow3g_ctx {
- union {
- struct roc_se_onk_zuc_ctx onk_ctx;
- struct roc_se_otk_zuc_ctx otk_ctx;
- } zuc;
-};
-
struct roc_se_kasumi_ctx {
uint8_t reg_A[8];
uint8_t ci_key[16];
@@ -356,8 +343,7 @@ struct roc_se_ctx {
} w0;
union {
struct roc_se_context fctx;
- struct roc_se_zuc_snow3g_ctx zs_ctx;
- struct roc_se_zuc_snow3g_chain_ctx zs_ch_ctx;
+ struct roc_se_pdcp_ctx pctx;
struct roc_se_kasumi_ctx k_ctx;
struct roc_se_sm_context sm_ctx;
};
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 1aec7dea9f..8193e96a92 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -298,8 +298,13 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -375,7 +380,7 @@ sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -492,8 +497,13 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN);
if (pdcp_flag) {
- if (likely(iv_len))
- pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ if (likely(iv_len)) {
+ if (zsk_flags == 0x1)
+ pdcp_iv_copy(iv_d + params->pdcp_iv_offset, iv_s, pdcp_alg_type,
+ pack_iv);
+ else
+ pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv);
+ }
} else {
if (likely(iv_len))
memcpy(iv_d, iv_s, iv_len);
@@ -567,7 +577,7 @@ sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
- if (zsk_flags == 0x1) {
+ if ((zsk_flags == 0x1) && (se_ctx->fc_type == ROC_SE_KASUMI)) {
/* IV in SLIST only for EEA3 & UEA2 or for F8 */
iv_len = 0;
}
@@ -1617,28 +1627,34 @@ static __rte_always_inline int
cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
struct roc_se_fc_params *params, struct cpt_inst_s *inst, const bool is_sg_ver2)
{
+ /*
+ * pdcp_iv_offset is the offset of the auth IV relative to the cipher
+ * IV. It is 16 with old microcode (without ZUC 256 support) and 24
+ * with new microcode (which has ZUC 256). So the iv_len reserved for
+ * the cipher and auth IVs is 32B with old microcode and 48B with new
+ * microcode.
+ */
+ const int iv_len = params->pdcp_iv_offset * 2;
+ struct roc_se_ctx *se_ctx = params->ctx;
uint32_t encr_data_len, auth_data_len;
+ const int flags = se_ctx->zsk_flags;
uint32_t encr_offset, auth_offset;
union cpt_inst_w4 cpt_inst_w4;
int32_t inputlen, outputlen;
- struct roc_se_ctx *se_ctx;
uint64_t *offset_vaddr;
uint8_t pdcp_alg_type;
uint32_t mac_len = 0;
const uint8_t *iv_s;
uint8_t pack_iv = 0;
uint64_t offset_ctrl;
- int flags, iv_len;
int ret;
- se_ctx = params->ctx;
- flags = se_ctx->zsk_flags;
mac_len = se_ctx->mac_len;
cpt_inst_w4.u64 = se_ctx->template_w4.u64;
- cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP;
if (flags == 0x1) {
+ cpt_inst_w4.s.opcode_minor = 1;
iv_s = params->auth_iv_buf;
/*
@@ -1650,47 +1666,32 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
pdcp_alg_type = se_ctx->pdcp_auth_alg;
if (pdcp_alg_type != ROC_SE_PDCP_ALG_TYPE_AES_CMAC) {
- iv_len = params->auth_iv_len;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->auth_iv_len == 25)
pack_iv = 1;
- }
auth_offset = auth_offset / 8;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen =
- auth_offset + (RTE_ALIGN(auth_data_len, 8) / 8);
- } else {
- iv_len = 16;
-
- /* consider iv len */
- auth_offset += iv_len;
-
- inputlen = auth_offset + auth_data_len;
-
- /* length should be in bits */
- auth_data_len *= 8;
+ auth_data_len = RTE_ALIGN(auth_data_len, 8) / 8;
}
- outputlen = mac_len;
+ /* consider iv len */
+ auth_offset += iv_len;
+
+ inputlen = auth_offset + auth_data_len;
+ outputlen = iv_len + mac_len;
offset_ctrl = rte_cpu_to_be_64((uint64_t)auth_offset);
+ cpt_inst_w4.s.param1 = auth_data_len;
encr_data_len = 0;
encr_offset = 0;
} else {
+ cpt_inst_w4.s.opcode_minor = 0;
iv_s = params->iv_buf;
- iv_len = params->cipher_iv_len;
pdcp_alg_type = se_ctx->pdcp_ci_alg;
- if (iv_len == 25) {
- iv_len -= 2;
+ if (params->cipher_iv_len == 25)
pack_iv = 1;
- }
/*
* Microcode expects offsets in bytes
@@ -1700,6 +1701,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
encr_offset = ROC_SE_ENCR_OFFSET(d_offs);
encr_offset = encr_offset / 8;
+
/* consider iv len */
encr_offset += iv_len;
@@ -1707,10 +1709,11 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
outputlen = inputlen;
/* iv offset is 0 */
- offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16);
+ offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset);
auth_data_len = 0;
auth_offset = 0;
+ cpt_inst_w4.s.param1 = (RTE_ALIGN(encr_data_len, 8) / 8);
}
if (unlikely((encr_offset >> 16) || (auth_offset >> 8))) {
@@ -1720,12 +1723,6 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
return -1;
}
- /*
- * Lengths are expected in bits.
- */
- cpt_inst_w4.s.param1 = encr_data_len;
- cpt_inst_w4.s.param2 = auth_data_len;
-
/*
* In cn9k, cn10k since we have a limitation of
* IV & Offset control word not part of instruction
@@ -1738,6 +1735,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
/* Use Direct mode */
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN;
offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len);
/* DPTR */
@@ -1753,6 +1751,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
*offset_vaddr = offset_ctrl;
inst->w4.u64 = cpt_inst_w4.u64;
} else {
+ cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_PDCP_CHAIN | ROC_DMA_MODE_SG;
inst->w4.u64 = cpt_inst_w4.u64;
if (is_sg_ver2)
ret = sg2_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, pack_iv,
@@ -2243,8 +2242,6 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
c_form->key.length)))
return -1;
- if ((enc_type >= ROC_SE_ZUC_EEA3) && (enc_type <= ROC_SE_AES_CTR_EEA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
return 0;
}
@@ -2403,15 +2400,10 @@ fill_sess_auth(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
sess->auth_iv_offset = a_form->iv.offset;
sess->auth_iv_length = a_form->iv.length;
}
- if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type,
- a_form->key.data, a_form->key.length,
- a_form->digest_length)))
+ if (unlikely(roc_se_auth_key_set(&sess->roc_se_ctx, auth_type, a_form->key.data,
+ a_form->key.length, a_form->digest_length)))
return -1;
- if ((auth_type >= ROC_SE_ZUC_EIA3) &&
- (auth_type <= ROC_SE_AES_CMAC_EIA2))
- roc_se_ctx_swap(&sess->roc_se_ctx);
-
return 0;
}
--
2.25.1
* [PATCH v3 20/24] crypto/cnxk: validate the combinations supported in TLS
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (18 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 21/24] crypto/cnxk: use a single function for opad ipad Anoob Joseph
` (4 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Validate the cipher and auth combinations to allow only those
supported by the hardware.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_tls.c | 35 ++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index afcf7ba6f1..3c2e0feb2a 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -17,6 +17,36 @@
#include "cnxk_cryptodev_ops.h"
#include "cnxk_security.h"
+static int
+tls_xform_cipher_auth_verify(struct rte_crypto_sym_xform *cipher_xform,
+ struct rte_crypto_sym_xform *auth_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = cipher_xform->cipher.algo;
+ enum rte_crypto_auth_algorithm a_algo = auth_xform->auth.algo;
+ int ret = -ENOTSUP;
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ if ((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) || (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ if (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ if ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC))
+ ret = 0;
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
static int
tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
{
@@ -138,7 +168,10 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
ret = tls_xform_cipher_verify(cipher_xform);
if (!ret)
- return tls_xform_auth_verify(auth_xform);
+ ret = tls_xform_auth_verify(auth_xform);
+
+ if (cipher_xform && !ret)
+ return tls_xform_cipher_auth_verify(cipher_xform, auth_xform);
return ret;
}
--
2.25.1
* [PATCH v3 21/24] crypto/cnxk: use a single function for opad ipad
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (19 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
` (3 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Use a single function for opad and ipad generation for IPsec, TLS and
flexi crypto.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 65 ++++++-----------------------
drivers/common/cnxk/cnxk_security.h | 5 ---
drivers/common/cnxk/roc_se.c | 48 ++++++++++++++-------
drivers/common/cnxk/roc_se.h | 9 ++++
drivers/common/cnxk/version.map | 2 +-
drivers/crypto/cnxk/cn10k_tls.c | 8 +++-
6 files changed, 61 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index bdb04fe142..64c901a57a 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -8,55 +8,9 @@
#include "roc_api.h"
-void
-cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls)
-{
- const uint8_t *key = auth_xform->auth.key.data;
- uint32_t length = auth_xform->auth.key.length;
- uint8_t opad[128] = {[0 ... 127] = 0x5c};
- uint8_t ipad[128] = {[0 ... 127] = 0x36};
- uint32_t i;
-
- /* HMAC OPAD and IPAD */
- for (i = 0; i < 128 && i < length; i++) {
- opad[i] = opad[i] ^ key[i];
- ipad[i] = ipad[i] ^ key[i];
- }
-
- /* Precompute hash of HMAC OPAD and IPAD to avoid
- * per packet computation
- */
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- roc_hash_md5_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_md5_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- roc_hash_sha1_gen(opad, (uint32_t *)&hmac_opad_ipad[0]);
- roc_hash_sha1_gen(ipad, (uint32_t *)&hmac_opad_ipad[is_tls ? 64 : 24]);
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- roc_hash_sha256_gen(opad, (uint32_t *)&hmac_opad_ipad[0], 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)&hmac_opad_ipad[64], 256);
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 384);
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- roc_hash_sha512_gen(opad, (uint64_t *)&hmac_opad_ipad[0], 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)&hmac_opad_ipad[64], 512);
- break;
- default:
- break;
- }
-}
-
static int
-ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
- uint8_t *cipher_key, uint8_t *salt_key,
- uint8_t *hmac_opad_ipad,
+ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2, uint8_t *cipher_key,
+ uint8_t *salt_key, uint8_t *hmac_opad_ipad,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
@@ -192,7 +146,9 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2,
const uint8_t *auth_key = auth_xfrm->auth.key.data;
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else {
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(w2->s.auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, &hmac_opad_ipad[0],
+ ROC_SE_IPSEC);
}
tmp_key = (uint64_t *)hmac_opad_ipad;
@@ -741,7 +697,8 @@ onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
key = cipher_xfrm->cipher.key.data;
length = cipher_xfrm->cipher.key.length;
- cnxk_sec_opad_ipad_gen(auth_xfrm, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(ctl->auth_type, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, hmac_opad_ipad, ROC_SE_IPSEC);
}
switch (length) {
@@ -1374,7 +1331,9 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ out_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
@@ -1441,7 +1400,9 @@ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
roc_aes_xcbc_key_derive(auth_key, hmac_opad_ipad);
} else if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_NULL) {
- cnxk_sec_opad_ipad_gen(auth_xform, hmac_opad_ipad, false);
+ roc_se_hmac_opad_ipad_gen(
+ in_sa->common_sa.ctl.auth_type, auth_xform->auth.key.data,
+ auth_xform->auth.key.length, &hmac_opad_ipad[0], ROC_SE_IPSEC);
}
}
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 86ec657cb0..b323b8b757 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -68,9 +68,4 @@ int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec
int __roc_api cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
struct rte_crypto_sym_xform *crypto_xform,
struct roc_ie_on_outb_sa *out_sa);
-
-__rte_internal
-void cnxk_sec_opad_ipad_gen(struct rte_crypto_sym_xform *auth_xform, uint8_t *hmac_opad_ipad,
- bool is_tls);
-
#endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/roc_se.c b/drivers/common/cnxk/roc_se.c
index 4e00268149..5a3ed0b647 100644
--- a/drivers/common/cnxk/roc_se.c
+++ b/drivers/common/cnxk/roc_se.c
@@ -157,14 +157,29 @@ cpt_ciph_aes_key_type_set(struct roc_se_context *fctx, uint16_t key_len)
fctx->enc.aes_key = aes_key_type;
}
-static void
-cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
- struct roc_se_hmac_context *hmac)
+void
+roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t length,
+ uint8_t *opad_ipad, roc_se_op_type op_type)
{
uint8_t opad[128] = {[0 ... 127] = 0x5c};
uint8_t ipad[128] = {[0 ... 127] = 0x36};
+ uint8_t ipad_offset, opad_offset;
uint32_t i;
+ if (op_type == ROC_SE_IPSEC) {
+ if ((auth_type == ROC_SE_MD5_TYPE) || (auth_type == ROC_SE_SHA1_TYPE))
+ ipad_offset = 24;
+ else
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else if (op_type == ROC_SE_TLS) {
+ ipad_offset = 64;
+ opad_offset = 0;
+ } else {
+ ipad_offset = 0;
+ opad_offset = 64;
+ }
+
/* HMAC OPAD and IPAD */
for (i = 0; i < 128 && i < length; i++) {
opad[i] = opad[i] ^ key[i];
@@ -176,28 +191,28 @@ cpt_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key, uint16_t
*/
switch (auth_type) {
case ROC_SE_MD5_TYPE:
- roc_hash_md5_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_md5_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_md5_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_md5_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA1_TYPE:
- roc_hash_sha1_gen(opad, (uint32_t *)hmac->opad);
- roc_hash_sha1_gen(ipad, (uint32_t *)hmac->ipad);
+ roc_hash_sha1_gen(opad, (uint32_t *)&opad_ipad[opad_offset]);
+ roc_hash_sha1_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset]);
break;
case ROC_SE_SHA2_SHA224:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 224);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 224);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 224);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 224);
break;
case ROC_SE_SHA2_SHA256:
- roc_hash_sha256_gen(opad, (uint32_t *)hmac->opad, 256);
- roc_hash_sha256_gen(ipad, (uint32_t *)hmac->ipad, 256);
+ roc_hash_sha256_gen(opad, (uint32_t *)&opad_ipad[opad_offset], 256);
+ roc_hash_sha256_gen(ipad, (uint32_t *)&opad_ipad[ipad_offset], 256);
break;
case ROC_SE_SHA2_SHA384:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 384);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 384);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 384);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 384);
break;
case ROC_SE_SHA2_SHA512:
- roc_hash_sha512_gen(opad, (uint64_t *)hmac->opad, 512);
- roc_hash_sha512_gen(ipad, (uint64_t *)hmac->ipad, 512);
+ roc_hash_sha512_gen(opad, (uint64_t *)&opad_ipad[opad_offset], 512);
+ roc_hash_sha512_gen(ipad, (uint64_t *)&opad_ipad[ipad_offset], 512);
break;
default:
break;
@@ -401,7 +416,8 @@ roc_se_auth_key_set(struct roc_se_ctx *se_ctx, roc_se_auth_type type, const uint
if (chained_op) {
memset(fctx->hmac.ipad, 0, sizeof(fctx->hmac.ipad));
memset(fctx->hmac.opad, 0, sizeof(fctx->hmac.opad));
- cpt_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac);
+ roc_se_hmac_opad_ipad_gen(type, key, key_len, &fctx->hmac.ipad[0],
+ ROC_SE_FC);
fctx->enc.auth_input_type = 0;
} else {
se_ctx->hmac = 1;
diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index d62c40b310..ddcf6bdb44 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -191,6 +191,12 @@ typedef enum {
ROC_SE_PDCP_MAC_LEN_128_BIT = 0x3
} roc_se_pdcp_mac_len_type;
+typedef enum {
+ ROC_SE_IPSEC = 0x0,
+ ROC_SE_TLS = 0x1,
+ ROC_SE_FC = 0x2,
+} roc_se_op_type;
+
struct roc_se_enc_context {
uint64_t iv_source : 1;
uint64_t aes_key : 2;
@@ -401,4 +407,7 @@ int __roc_api roc_se_ciph_key_set(struct roc_se_ctx *se_ctx, roc_se_cipher_type
void __roc_api roc_se_ctx_swap(struct roc_se_ctx *se_ctx);
void __roc_api roc_se_ctx_init(struct roc_se_ctx *se_ctx);
+void __roc_api roc_se_hmac_opad_ipad_gen(roc_se_auth_type auth_type, const uint8_t *key,
+ uint16_t length, uint8_t *opad_ipad,
+ roc_se_op_type op_type);
#endif /* __ROC_SE_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 15fd5710d2..b8b0478848 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,7 +1,6 @@
INTERNAL {
global:
- cnxk_sec_opad_ipad_gen;
cnxk_ipsec_icvlen_get;
cnxk_ipsec_ivlen_get;
cnxk_ipsec_outb_rlens_get;
@@ -472,6 +471,7 @@ INTERNAL {
roc_plt_init;
roc_plt_init_cb_register;
roc_plt_lmt_validate;
+ roc_se_hmac_opad_ipad_gen;
roc_sso_dev_fini;
roc_sso_dev_init;
roc_sso_dump;
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 3c2e0feb2a..c30e04a7c0 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -376,7 +376,9 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+
tmp = (uint64_t *)read_sa->opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -503,7 +505,9 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
else
return -EINVAL;
- cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+ roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ ROC_SE_TLS);
}
tmp_key = (uint64_t *)write_sa->opad_ipad;
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
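[Editorial note, not part of the patch: the opad/ipad precomputation that this series consolidates into roc_se_hmac_opad_ipad_gen() can be sketched in plain Python. The helper names below are illustrative only; the driver additionally stores the precomputed partial-hash states at hardware-specific offsets, which this sketch omits.]

```python
import hashlib

def opad_ipad_gen(key: bytes, block_size: int = 64):
    """Derive the HMAC outer/inner pad blocks from a key.

    Mirrors the idea in the patch: XOR the key into 0x5c/0x36 pads
    (zero-padding a short key), so per-packet HMAC processing only
    has to hash the message, not re-derive the pads.
    """
    key = key.ljust(block_size, b"\x00")[:block_size]
    opad = bytes(b ^ 0x5C for b in key)
    ipad = bytes(b ^ 0x36 for b in key)
    return opad, ipad

def hmac_from_pads(opad: bytes, ipad: bytes, msg: bytes,
                   digest=hashlib.sha256) -> bytes:
    """Compute HMAC using precomputed pads (inner then outer hash)."""
    inner = digest(ipad + msg).digest()
    return digest(opad + inner).digest()
```

For keys no longer than the hash block size, this reproduces standard HMAC, which is why the pads can be precomputed once per SA.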
* [PATCH v3 22/24] crypto/cnxk: add support for TLS 1.3
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (20 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 21/24] crypto/cnxk: use a single function for opad ipad Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
` (2 subsequent siblings)
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS-1.3.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_ie_ot_tls.h | 50 +++++--
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 3 +-
drivers/crypto/cnxk/cn10k_tls.c | 159 +++++++++++++---------
3 files changed, 136 insertions(+), 76 deletions(-)
diff --git a/drivers/common/cnxk/roc_ie_ot_tls.h b/drivers/common/cnxk/roc_ie_ot_tls.h
index 206c3104e6..b85d075e86 100644
--- a/drivers/common/cnxk/roc_ie_ot_tls.h
+++ b/drivers/common/cnxk/roc_ie_ot_tls.h
@@ -17,8 +17,10 @@
(PLT_ALIGN_CEIL(ROC_IE_OT_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
/* CN10K TLS opcodes */
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
-#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC 0x18UL
+#define ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC 0x19UL
#define ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN 128
#define ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN 48
@@ -42,6 +44,7 @@ enum roc_ie_ot_tls_cipher_type {
enum roc_ie_ot_tls_ver {
ROC_IE_OT_TLS_VERSION_TLS_12 = 1,
ROC_IE_OT_TLS_VERSION_DTLS_12 = 2,
+ ROC_IE_OT_TLS_VERSION_TLS_13 = 3,
};
enum roc_ie_ot_tls_aes_key_len {
@@ -131,11 +134,23 @@ struct roc_ie_ot_tls_read_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd6;
+
+ /* Word11 - Word25 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 - Word32 */
- struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ /* Word26 - Word95 */
+ struct roc_ie_ot_tls_read_ctx_update_reg ctx;
+ } tls_12;
+ };
};
struct roc_ie_ot_tls_write_sa {
@@ -187,13 +202,24 @@ struct roc_ie_ot_tls_write_sa {
/* Word4 - Word9 */
uint8_t cipher_key[ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN];
- /* Word10 - Word25 */
- uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd7;
+
+ uint64_t seq_num;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OT_TLS_CTX_MAX_OPAD_IPAD_LEN];
- /* Word26 */
- uint64_t w26_rsvd7;
+ /* Word26 */
+ uint64_t w26_rsvd7;
- /* Word27 */
- uint64_t seq_num;
+ /* Word27 */
+ uint64_t seq_num;
+ } tls_12;
+ };
};
#endif /* __ROC_IE_OT_TLS_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 703e71475a..20a260d9ff 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -31,8 +31,7 @@ struct cn10k_sec_session {
} ipsec;
struct {
uint8_t enable_padding : 1;
- uint8_t hdr_len : 4;
- uint8_t rvsd : 3;
+ uint8_t rvsd : 7;
bool is_write;
} tls;
};
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index c30e04a7c0..879e0ea978 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -105,7 +105,8 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
int ret = 0;
if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
- (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_3))
return -EINVAL;
if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
@@ -115,6 +116,12 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
return tls_xform_aead_verify(tls_xform, crypto_xform);
+ /* TLS-1.3 only supports AEAD.
+ * Control should not reach here for TLS-1.3
+ */
+ if (tls_xform->ver == RTE_SECURITY_VERSION_TLS_1_3)
+ return -EINVAL;
+
if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
/* Egress */
@@ -259,7 +266,7 @@ tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -274,7 +281,7 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
@@ -283,13 +290,18 @@ tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
}
static size_t
-tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa, enum rte_security_tls_version tls_ver)
{
size_t size;
/* Variable based on Anti-replay Window */
- size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
- offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ } else {
+ size = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx) +
+ offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+ }
if (sa->w0.s.ar_win)
size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
@@ -302,6 +314,7 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint64_t *tmp, *tmp_key;
@@ -313,13 +326,22 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
/* Initialize the SA */
memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ read_sa->tls_12.ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ read_sa->tls_13.ctx.ar_valid_mask = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = read_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -330,10 +352,12 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -377,9 +401,10 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, read_sa->opad_ipad, ROC_SE_TLS);
+ auth_xfrm->auth.key.length, read_sa->tls_12.opad_ipad,
+ ROC_SE_TLS);
- tmp = (uint64_t *)read_sa->opad_ipad;
+ tmp = (uint64_t *)read_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp[i] = rte_be_to_cpu_64(tmp[i]);
@@ -403,24 +428,20 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
read_sa->w0.s.aop_valid = 1;
- offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_12.ctx);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ offset = offsetof(struct roc_ie_ot_tls_read_sa, tls_13.ctx);
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa, tls_ver), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
/* Word offset for HW managed CTX field */
read_sa->w0.s.hw_ctx_off = offset / 8;
read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
- /* Entire context size in 128B units */
- read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
- ROC_CTX_UNIT_128B) -
- 1;
-
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- }
-
rte_wmb();
return 0;
@@ -431,6 +452,7 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
struct rte_security_tls_record_xform *tls_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm)
{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
const uint8_t *key = NULL;
uint8_t *cipher_key;
@@ -438,13 +460,25 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
int i, length = 0;
size_t offset;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+ write_sa->tls_12.seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+ write_sa->tls_12.seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->tls_12.seq_num -= 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_13;
+ write_sa->tls_13.seq_num = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
cipher_key = write_sa->cipher_key;
/* Set encryption algorithm */
if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
(crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
- write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
length = crypto_xfrm->aead.key.length;
if (length == 16)
@@ -455,10 +489,12 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
key = crypto_xfrm->aead.key.data;
memcpy(cipher_key, key, length);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
- else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
goto key_swap;
}
@@ -506,11 +542,11 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
return -EINVAL;
roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
- auth_xfrm->auth.key.length, write_sa->opad_ipad,
+ auth_xfrm->auth.key.length, write_sa->tls_12.opad_ipad,
ROC_SE_TLS);
}
- tmp_key = (uint64_t *)write_sa->opad_ipad;
+ tmp_key = (uint64_t *)write_sa->tls_12.opad_ipad;
for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
@@ -520,40 +556,37 @@ tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
- offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
-
- /* Word offset for HW managed CTX field */
- write_sa->w0.s.hw_ctx_off = offset / 8;
- write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
-
/* Entire context size in 128B units */
write_sa->w0.s.ctx_size =
(PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
ROC_CTX_UNIT_128B) -
1;
- write_sa->w0.s.aop_valid = 1;
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_12.w26_rsvd7);
- if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
- write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
- } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
- write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
- write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
- (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
- write_sa->seq_num -= 1;
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ offset = offsetof(struct roc_ie_ot_tls_write_sa, tls_13.w10_rsvd7);
+ write_sa->w0.s.ctx_size -= 1;
}
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ write_sa->w0.s.aop_valid = 1;
+
write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+ if (write_sa->w2.s.version_select != ROC_IE_OT_TLS_VERSION_TLS_13) {
#ifdef LA_IPSEC_DEBUG
- if (tls_xfrm->options.iv_gen_disable == 1)
- write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
#else
- if (tls_xfrm->options.iv_gen_disable) {
- plt_err("Application provided IV is not supported");
- return -ENOTSUP;
- }
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
#endif
+ }
rte_wmb();
@@ -599,20 +632,17 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->iv_length = crypto_xfrm->auth.iv.length;
}
- if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
- sec_sess->tls.hdr_len = 13;
- else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
- sec_sess->tls.hdr_len = 5;
-
sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
- /* Enable mib counters */
- sa_dptr->w0.s.count_mib_bytes = 1;
- sa_dptr->w0.s.count_mib_pkts = 1;
-
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
@@ -689,8 +719,13 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* pre-populate CPT INST word 4 */
inst_w4.u64 = 0;
- inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
-
+ if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
+ (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+ inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ } else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+ }
sec_sess->inst.w4 = inst_w4.u64;
sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
--
2.25.1
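[Editorial note, not part of the patch: the initial sequence-number encoding that tls_write_sa_fill() programs into the write SA can be sketched as below. The function name is hypothetical; the driver stores the value minus one so the first record processed uses the configured sequence number.]

```python
def dtls12_init_seq(epoch: int, seq_no: int) -> int:
    """Pack DTLS 1.2 epoch (top 16 bits) and 48-bit sequence number,
    then subtract one, as the write SA initialization in this patch does.
    """
    return (((epoch & 0xFFFF) << 48) | (seq_no & 0x0000FFFFFFFFFFFF)) - 1
```

TLS 1.2 and TLS 1.3 take the simpler `seq_no - 1` form since they carry no epoch field.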
* [PATCH v3 23/24] crypto/cnxk: add TLS 1.3 capability
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (21 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-17 10:31 ` [PATCH v3 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
2024-01-18 17:06 ` [PATCH v3 00/24] Fixes and improvements in crypto cnxk Akhil Goyal
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS 1.3 record read and write capabilities.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
doc/guides/rel_notes/release_24_03.rst | 4 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 92 +++++++++++++++++++
2 files changed, 94 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 8fc6e9fb6d..dc53e313f1 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -58,8 +58,8 @@ New Features
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
- * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2
- and DTLS 1.2.
+ * Added support for TLS record processing in crypto_cn10k. Supports TLS 1.2,
+ DTLS 1.2 and TLS 1.3.
* Added PMD API to allow raw submission of instructions to CPT.
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 73100377d9..db50de5d58 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -40,6 +40,16 @@
RTE_DIM(sec_tls12_caps_##name)); \
} while (0)
+#define SEC_TLS13_CAPS_ADD(cnxk_caps, cur_pos, hw_caps, name) \
+ do { \
+ if ((hw_caps[CPT_ENG_TYPE_SE].name) || \
+ (hw_caps[CPT_ENG_TYPE_IE].name) || \
+ (hw_caps[CPT_ENG_TYPE_AE].name)) \
+ sec_tls13_caps_add(cnxk_caps, cur_pos, \
+ sec_tls13_caps_##name, \
+ RTE_DIM(sec_tls13_caps_##name)); \
+ } while (0)
+
static const struct rte_cryptodev_capabilities caps_mul[] = {
{ /* RSA */
.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
@@ -1631,6 +1641,40 @@ static const struct rte_cryptodev_capabilities sec_tls12_caps_sha1_sha2[] = {
},
};
+static const struct rte_cryptodev_capabilities sec_tls13_caps_aes[] = {
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 5,
+ .max = 5,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+};
+
+
static const struct rte_security_capability sec_caps_templ[] = {
{ /* IPsec Lookaside Protocol ESP Tunnel Ingress */
.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
@@ -1760,6 +1804,26 @@ static const struct rte_security_capability sec_caps_templ[] = {
},
.crypto_capabilities = NULL,
},
+ { /* TLS 1.3 Record Read */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_READ,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
+ { /* TLS 1.3 Record Write */
+ .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
+ .tls_record = {
+ .ver = RTE_SECURITY_VERSION_TLS_1_3,
+ .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
+ .ar_win_size = 0,
+ },
+ .crypto_capabilities = NULL,
+ },
{
.action = RTE_SECURITY_ACTION_TYPE_NONE
}
@@ -2005,6 +2069,33 @@ sec_tls12_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
sec_tls12_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
}
+static void
+sec_tls13_caps_limit_check(int *cur_pos, int nb_caps)
+{
+ PLT_VERIFY(*cur_pos + nb_caps <= CNXK_SEC_TLS_1_3_CRYPTO_MAX_CAPS);
+}
+
+static void
+sec_tls13_caps_add(struct rte_cryptodev_capabilities cnxk_caps[], int *cur_pos,
+ const struct rte_cryptodev_capabilities *caps, int nb_caps)
+{
+ sec_tls13_caps_limit_check(cur_pos, nb_caps);
+
+ memcpy(&cnxk_caps[*cur_pos], caps, nb_caps * sizeof(caps[0]));
+ *cur_pos += nb_caps;
+}
+
+static void
+sec_tls13_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
+ union cpt_eng_caps *hw_caps)
+{
+ int cur_pos = 0;
+
+ SEC_TLS13_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, aes);
+
+ sec_tls13_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
+}
+
void
cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
{
@@ -2016,6 +2107,7 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
if (vf->cpt.hw_caps[CPT_ENG_TYPE_SE].tls) {
sec_tls12_crypto_caps_populate(vf->sec_tls_1_2_crypto_caps, vf->cpt.hw_caps);
sec_tls12_crypto_caps_populate(vf->sec_dtls_1_2_crypto_caps, vf->cpt.hw_caps);
+ sec_tls13_crypto_caps_populate(vf->sec_tls_1_3_crypto_caps, vf->cpt.hw_caps);
}
PLT_STATIC_ASSERT(RTE_DIM(sec_caps_templ) <= RTE_DIM(vf->sec_caps));
--
2.25.1
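[Editorial note, not part of the patch: the bounds-checked capability append used by sec_tls13_caps_add()/PLT_VERIFY in this patch follows a simple pattern, sketched below with hypothetical names.]

```python
def caps_add(dst: list, cur_pos: int, caps: list, limit: int) -> int:
    """Append capability entries into a fixed-size array with an
    overflow check, mirroring the sec_tls13_caps_add() pattern.
    Returns the updated cursor position.
    """
    assert cur_pos + len(caps) <= limit, "capability array overflow"
    dst[cur_pos:cur_pos + len(caps)] = caps
    return cur_pos + len(caps)
```

As in the patch, the caller finishes by appending a terminating entry (caps_end) so consumers can walk the array without a separate length.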
* [PATCH v3 24/24] crypto/cnxk: add CPT SG mode debug
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (22 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
@ 2024-01-17 10:31 ` Anoob Joseph
2024-01-18 17:06 ` [PATCH v3 00/24] Fixes and improvements in crypto cnxk Akhil Goyal
24 siblings, 0 replies; 78+ messages in thread
From: Anoob Joseph @ 2024-01-17 10:31 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Tejasree Kondoj, Jerin Jacob, Vidya Sagar Velumuri, dev
From: Tejasree Kondoj <ktejasree@marvell.com>
Adding CPT SG mode debug dump.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 135 +++++++++++++++++++++-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 ++
2 files changed, 141 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 9f4be20ff5..8991150c05 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -2,9 +2,10 @@
* Copyright(C) 2021 Marvell.
*/
-#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
+#include <rte_cryptodev.h>
#include <rte_event_crypto_adapter.h>
+#include <rte_hexdump.h>
#include <rte_ip.h>
#include <ethdev_driver.h>
@@ -103,6 +104,104 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+#ifdef CPT_INST_DEBUG_ENABLE
+static inline void
+cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sg2list_comp *sg_ptr = NULL;
+ uint16_t list_cnt = 0;
+ char suffix[64];
+ int i, j;
+
+ sg_ptr = (void *)in_buffer;
+ for (i = 0; i < components; i++) {
+ for (j = 0; j < sg_ptr->u.s.valid_segs; j++) {
+ list_ptr[i * 3 + j].size = sg_ptr->u.s.len[j];
+ list_ptr[i * 3 + j].vaddr = (void *)sg_ptr->ptr[j];
+ list_ptr[i * 3 + j].vaddr = list_ptr[i * 3 + j].vaddr;
+ list_cnt++;
+ }
+ sg_ptr++;
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+
+static inline void
+cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sglist_comp *sg_ptr = NULL;
+ uint16_t list_cnt, components;
+ char suffix[64];
+ int i;
+
+ sg_ptr = (void *)(in_buffer + 8);
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[2]));
+ if (!glist) {
+ components = list_cnt / 4;
+ if (list_cnt % 4)
+ components++;
+ sg_ptr += components;
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[3]));
+ }
+
+ printf("Current %s: %u\n", list, list_cnt);
+ components = list_cnt / 4;
+ for (i = 0; i < components; i++) {
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 3].size = rte_be_to_cpu_16(sg_ptr->u.s.len[3]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 3].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[3]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ list_ptr[i * 4 + 3].vaddr = list_ptr[i * 4 + 3].vaddr;
+ sg_ptr++;
+ }
+
+ components = list_cnt % 4;
+ switch (components) {
+ case 3:
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ /* FALLTHROUGH */
+ case 2:
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ /* FALLTHROUGH */
+ case 1:
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ break;
+ default:
+ break;
+ }
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+#endif
+
static __rte_always_inline int __rte_hot
cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
@@ -205,6 +304,31 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
inst[0].w7.u64 = w7;
+#ifdef CPT_INST_DEBUG_ENABLE
+ infl_req->dptr = (uint8_t *)inst[0].dptr;
+ infl_req->rptr = (uint8_t *)inst[0].rptr;
+ infl_req->is_sg_ver2 = is_sg_ver2;
+ infl_req->scatter_sz = inst[0].w6.s.scatter_sz;
+ infl_req->opcode_major = inst[0].w4.s.opcode_major;
+
+ rte_hexdump(stdout, "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
+ printf("major opcode:%d\n", inst[0].w4.s.opcode_major);
+ printf("minor opcode:%d\n", inst[0].w4.s.opcode_minor);
+ printf("param1:%d\n", inst[0].w4.s.param1);
+ printf("param2:%d\n", inst[0].w4.s.param2);
+ printf("dlen:%d\n", inst[0].w4.s.dlen);
+
+ if (is_sg_ver2) {
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+ } else {
+ if (infl_req->opcode_major >> 7) {
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 1);
+ cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 0);
+ }
+ }
+#endif
+
return 1;
}
@@ -935,6 +1059,15 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
}
if (likely(compcode == CPT_COMP_GOOD)) {
+#ifdef CPT_INST_DEBUG_ENABLE
+ if (infl_req->is_sg_ver2)
+ cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+ else {
+ if (infl_req->opcode_major >> 7)
+ cpt_request_data_sg_mode_dump(infl_req->dptr, 0);
+ }
+#endif
+
if (unlikely(uc_compcode)) {
if (uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c6bb8023ea..e7bba25cb8 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -51,6 +51,13 @@ struct cpt_inflight_req {
};
void *mdata;
uint8_t op_flags;
+#ifdef CPT_INST_DEBUG_ENABLE
+ uint8_t scatter_sz;
+ uint8_t opcode_major;
+ uint8_t is_sg_ver2;
+ uint8_t *dptr;
+ uint8_t *rptr;
+#endif
void *qp;
} __rte_aligned(ROC_ALIGN);
--
2.25.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* RE: [PATCH v3 00/24] Fixes and improvements in crypto cnxk
2024-01-17 10:30 ` [PATCH v3 " Anoob Joseph
` (23 preceding siblings ...)
2024-01-17 10:31 ` [PATCH v3 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
@ 2024-01-18 17:06 ` Akhil Goyal
24 siblings, 0 replies; 78+ messages in thread
From: Akhil Goyal @ 2024-01-18 17:06 UTC (permalink / raw)
To: Anoob Joseph; +Cc: Jerin Jacob, Vidya Sagar Velumuri, Tejasree Kondoj, dev
> Subject: [PATCH v3 00/24] Fixes and improvements in crypto cnxk
>
> Add following features
> - TLS record processing offload (TLS 1.2-1.3, DTLS 1.2)
> - Rx inject to allow lookaside packets to be injected to ethdev Rx
> - Use PDCP_CHAIN opcode instead of PDCP opcode for cipher-only and auth
> only cases
> - PMD API to submit instructions directly to hardware
>
> Changes in v3
> - Addressed Akhil's commments on Rx inject patch
> - Updated license year to 2024
>
> Changes in v2
> - Addressed checkpatch issue
> - Addressed build error with stdatomic
>
> Aakash Sasidharan (1):
> crypto/cnxk: enable digest gen for zero len input
>
> Akhil Goyal (1):
> common/cnxk: fix memory leak
>
> Anoob Joseph (6):
> crypto/cnxk: use common macro
> crypto/cnxk: return microcode completion code
> common/cnxk: update opad-ipad gen to handle TLS
> common/cnxk: add TLS record contexts
> crypto/cnxk: separate IPsec from security common code
> crypto/cnxk: add PMD APIs for raw submission to CPT
>
> Gowrishankar Muthukrishnan (1):
> crypto/cnxk: fix ECDH pubkey verify in cn9k
>
> Rahul Bhansali (2):
> common/cnxk: add Rx inject configs
> crypto/cnxk: Rx inject config update
>
> Tejasree Kondoj (3):
> crypto/cnxk: fallback to SG if headroom is not available
> crypto/cnxk: replace PDCP with PDCP chain opcode
> crypto/cnxk: add CPT SG mode debug
>
> Vidya Sagar Velumuri (10):
> crypto/cnxk: enable Rx inject in security lookaside
> crypto/cnxk: enable Rx inject for 103
> crypto/cnxk: rename security caps as IPsec security caps
> crypto/cnxk: add TLS record session ops
> crypto/cnxk: add TLS record datapath handling
> crypto/cnxk: add TLS capability
> crypto/cnxk: validate the combinations supported in TLS
> crypto/cnxk: use a single function for opad ipad
> crypto/cnxk: add support for TLS 1.3
> crypto/cnxk: add TLS 1.3 capability
>
> doc/api/doxy-api-index.md | 1 +
> doc/api/doxy-api.conf.in | 1 +
> doc/guides/cryptodevs/cnxk.rst | 12 +
> doc/guides/cryptodevs/features/cn10k.ini | 1 +
> doc/guides/rel_notes/release_24_03.rst | 7 +
> drivers/common/cnxk/cnxk_security.c | 65 +-
> drivers/common/cnxk/cnxk_security.h | 15 +-
> drivers/common/cnxk/hw/cpt.h | 12 +-
> drivers/common/cnxk/roc_cpt.c | 14 +-
> drivers/common/cnxk/roc_cpt.h | 7 +-
> drivers/common/cnxk/roc_cpt_priv.h | 2 +-
> drivers/common/cnxk/roc_idev.c | 44 +
> drivers/common/cnxk/roc_idev.h | 5 +
> drivers/common/cnxk/roc_idev_priv.h | 6 +
> drivers/common/cnxk/roc_ie_ot.c | 14 +-
> drivers/common/cnxk/roc_ie_ot_tls.h | 225 +++++
> drivers/common/cnxk/roc_mbox.h | 2 +
> drivers/common/cnxk/roc_nix.c | 2 +
> drivers/common/cnxk/roc_nix_inl.c | 2 +-
> drivers/common/cnxk/roc_nix_inl_dev.c | 2 +-
> drivers/common/cnxk/roc_se.c | 379 +++-----
> drivers/common/cnxk/roc_se.h | 38 +-
> drivers/common/cnxk/version.map | 5 +
> drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +-
> drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 401 ++++++++-
> drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 11 +
> drivers/crypto/cnxk/cn10k_cryptodev_sec.c | 134 +++
> drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 68 ++
> drivers/crypto/cnxk/cn10k_ipsec.c | 134 +--
> drivers/crypto/cnxk/cn10k_ipsec.h | 38 +-
> drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 19 +-
> drivers/crypto/cnxk/cn10k_tls.c | 830 ++++++++++++++++++
> drivers/crypto/cnxk/cn10k_tls.h | 35 +
> drivers/crypto/cnxk/cn10k_tls_ops.h | 322 +++++++
> drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 68 +-
> drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 62 ++
> drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 16 +-
> drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +
> drivers/crypto/cnxk/cnxk_cryptodev.h | 24 +-
> .../crypto/cnxk/cnxk_cryptodev_capabilities.c | 375 +++++++-
> drivers/crypto/cnxk/cnxk_cryptodev_devargs.c | 31 +
> drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 128 ++-
> drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 7 +
> drivers/crypto/cnxk/cnxk_se.h | 98 +--
> drivers/crypto/cnxk/cnxk_sg.h | 4 +-
> drivers/crypto/cnxk/meson.build | 4 +-
> drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 46 +
> drivers/crypto/cnxk/version.map | 3 +
> 48 files changed, 3018 insertions(+), 706 deletions(-)
> create mode 100644 drivers/common/cnxk/roc_ie_ot_tls.h
> create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.c
> create mode 100644 drivers/crypto/cnxk/cn10k_cryptodev_sec.h
> create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
> create mode 100644 drivers/crypto/cnxk/cn10k_tls.h
> create mode 100644 drivers/crypto/cnxk/cn10k_tls_ops.h
> create mode 100644 drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
>
> --
> 2.25.1
Acked-by: Akhil Goyal <gakhil@marvell.com>
Series applied to dpdk-next-crypto
Fixed documentation compilation issue and updated release notes and patch description/title for some of the patches.
^ permalink raw reply [flat|nested] 78+ messages in thread
end of thread, other threads:[~2024-01-18 17:07 UTC | newest]
Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-21 12:35 [PATCH 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2023-12-21 12:35 ` [PATCH 01/24] common/cnxk: fix memory leak Anoob Joseph
2023-12-21 12:35 ` [PATCH 02/24] crypto/cnxk: use common macro Anoob Joseph
2023-12-21 12:35 ` [PATCH 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
2023-12-21 12:35 ` [PATCH 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
2023-12-21 12:35 ` [PATCH 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
2023-12-21 12:35 ` [PATCH 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
2023-12-21 12:35 ` [PATCH 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
2023-12-21 12:35 ` [PATCH 08/24] common/cnxk: add Rx inject configs Anoob Joseph
2023-12-21 12:35 ` [PATCH 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
2023-12-21 12:35 ` [PATCH 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
2023-12-21 12:35 ` [PATCH 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
2023-12-21 12:35 ` [PATCH 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
2023-12-21 12:35 ` [PATCH 13/24] common/cnxk: add TLS record contexts Anoob Joseph
2023-12-21 12:35 ` [PATCH 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
2023-12-21 12:35 ` [PATCH 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
2023-12-21 12:35 ` [PATCH 16/24] crypto/cnxk: add TLS record datapath handling Anoob Joseph
2023-12-21 12:35 ` [PATCH 17/24] crypto/cnxk: add TLS capability Anoob Joseph
2023-12-21 12:35 ` [PATCH 18/24] crypto/cnxk: add PMD APIs for raw submission to CPT Anoob Joseph
2023-12-21 12:35 ` [PATCH 19/24] crypto/cnxk: replace PDCP with PDCP chain opcode Anoob Joseph
2023-12-21 12:35 ` [PATCH 20/24] crypto/cnxk: validate the combinations supported in TLS Anoob Joseph
2023-12-21 12:35 ` [PATCH 21/24] crypto/cnxk: use a single function for opad ipad Anoob Joseph
2023-12-21 12:35 ` [PATCH 22/24] crypto/cnxk: add support for TLS 1.3 Anoob Joseph
2023-12-21 12:35 ` [PATCH 23/24] crypto/cnxk: add TLS 1.3 capability Anoob Joseph
2023-12-21 12:35 ` [PATCH 24/24] crypto/cnxk: add CPT SG mode debug Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 00/24] Fixes and improvements in crypto cnxk Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 01/24] common/cnxk: fix memory leak Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 02/24] crypto/cnxk: use common macro Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 03/24] crypto/cnxk: fallback to SG if headroom is not available Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 04/24] crypto/cnxk: return microcode completion code Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 05/24] crypto/cnxk: fix ECDH pubkey verify in cn9k Anoob Joseph
2024-01-02 4:53 ` [PATCH v2 06/24] crypto/cnxk: enable digest gen for zero len input Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 07/24] crypto/cnxk: enable Rx inject in security lookaside Anoob Joseph
2024-01-16 8:07 ` Akhil Goyal
2024-01-02 4:54 ` [PATCH v2 08/24] common/cnxk: add Rx inject configs Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 09/24] crypto/cnxk: Rx inject config update Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 10/24] crypto/cnxk: enable Rx inject for 103 Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 11/24] crypto/cnxk: rename security caps as IPsec security caps Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 12/24] common/cnxk: update opad-ipad gen to handle TLS Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 13/24] common/cnxk: add TLS record contexts Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 14/24] crypto/cnxk: separate IPsec from security common code Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 15/24] crypto/cnxk: add TLS record session ops Anoob Joseph
2024-01-02 4:54 ` [PATCH v2 16/24] crypto/cnxk: add