* [PATCH 00/40] fixes and new features to cnxk crypto PMD
@ 2025-05-23 13:50 Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 01/40] crypto/cnxk: update the sg list population Tejasree Kondoj
` (39 more replies)
0 siblings, 40 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Adding CN20K PMD support and improvements to the cnxk crypto PMD.
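For users of the new device, a minimal application-side sketch of enumerating the crypto devices probed by the new crypto_cn20k PMD through the standard cryptodev API (the helper name below is illustrative; the driver name string matches CRYPTODEV_NAME_CN20K_PMD introduced in this series):

#include <stdio.h>
#include <rte_common.h>
#include <rte_cryptodev.h>

/* Illustrative helper: list crypto devices probed by the crypto_cn20k PMD. */
static int
list_cn20k_crypto_devs(void)
{
	struct rte_cryptodev_info info;
	uint8_t devs[RTE_CRYPTO_MAX_DEVS];
	uint8_t count, i;

	count = rte_cryptodev_devices_get("crypto_cn20k", devs, RTE_DIM(devs));
	for (i = 0; i < count; i++) {
		rte_cryptodev_info_get(devs[i], &info);
		printf("dev %u: %s, driver %s, max queue pairs %u\n", devs[i],
		       rte_cryptodev_name_get(devs[i]), info.driver_name,
		       info.max_nb_queue_pairs);
	}

	return count;
}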
Aakash Sasidharan (1):
crypto/cnxk: fail Rx inject configure if not supported
Nithinsen Kaithakadan (4):
common/cnxk: fix salt handling with aes-ctr
common/cnxk: set correct salt value for ctr algos
common/cnxk: update qsize in CPT iq enable
crypto/cnxk: copy 8B iv into sess in aes ctr
Rupesh Chiluka (2):
crypto/cnxk: extend check for max supported gather entries
crypto/cnxk: add asym sessionless handling
Tejasree Kondoj (8):
crypto/cnxk: add lookaside IPsec CPT LF stats
crypto/cnxk: fix qp stats PMD API
crypto/cnxk: enable IV from application support
crypto/cnxk: move debug dumps to common
crypto/cnxk: add Rx inject in security lookaside
crypto/cnxk: include required headers
crypto/cnxk: add struct variable for custom metadata
doc: update CN20K CPT documentation
Vidya Sagar Velumuri (25):
crypto/cnxk: update the sg list population
crypto/cnxk: add check for max supported gather entries
crypto/cnxk: add probe for cn20k crypto device
crypto/cnxk: add ops skeleton for cn20k
crypto/cnxk: add dev info get
crypto/cnxk: add skeleton for enq deq functions
crypto/cnxk: add lmtst routines for cn20k
crypto/cnxk: add enqueue function support
crypto/cnxk: add cryptodev dequeue support for cn20k
crypto/cnxk: add rte security skeleton for cn20k
crypto/cnxk: add security session creation
crypto/cnxk: add security session destroy
crypto/cnxk: move code to common
crypto/cnxk: add rte sec session update
crypto/cnxk: add rte security datapath handling
crypto/cnxk: add skeleton for tls
crypto/cnxk: add tls write session creation
crypto/cnxk: add tls read session creation
crypto/cnxk: add tls session destroy
crypto/cnxk: add enq and dequeue support for TLS
crypto/cnxk: tls post process
crypto/cnxk: add tls session update
crypto/cnxk: support raw API for cn20k
crypto/cnxk: add model check for cn20k
crypto/cnxk: add support for sessionless asym
doc/guides/cryptodevs/cnxk.rst | 26 +-
doc/guides/cryptodevs/features/cn20k.ini | 113 ++
drivers/common/cnxk/cnxk_security.c | 8 +
drivers/common/cnxk/roc_cpt.c | 5 +
drivers/common/cnxk/roc_cpt.h | 7 +-
drivers/common/cnxk/roc_cpt_sg.h | 2 +
drivers/common/cnxk/roc_ie_ow_tls.h | 233 +++
drivers/crypto/cnxk/cn10k_cryptodev.c | 12 +-
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 222 ++-
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 14 -
drivers/crypto/cnxk/cn10k_ipsec.c | 8 +-
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 27 +-
drivers/crypto/cnxk/cn10k_tls.c | 4 +-
drivers/crypto/cnxk/cn10k_tls_ops.h | 28 +-
drivers/crypto/cnxk/cn20k_cryptodev.c | 158 ++
drivers/crypto/cnxk/cn20k_cryptodev.h | 13 +
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 1272 +++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 85 ++
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 137 ++
drivers/crypto/cnxk/cn20k_cryptodev_sec.h | 64 +
drivers/crypto/cnxk/cn20k_ipsec.c | 378 +++++
drivers/crypto/cnxk/cn20k_ipsec.h | 41 +
drivers/crypto/cnxk/cn20k_ipsec_la_ops.h | 210 +++
drivers/crypto/cnxk/cn20k_tls.c | 917 ++++++++++++
drivers/crypto/cnxk/cn20k_tls.h | 40 +
drivers/crypto/cnxk/cn20k_tls_ops.h | 260 ++++
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 77 +-
drivers/crypto/cnxk/cn9k_ipsec.c | 19 +-
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 15 +-
drivers/crypto/cnxk/cnxk_cryptodev.c | 17 +-
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 16 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 127 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 38 +-
drivers/crypto/cnxk/cnxk_ipsec.h | 2 +
drivers/crypto/cnxk/meson.build | 5 +
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 3 +
36 files changed, 4393 insertions(+), 210 deletions(-)
create mode 100644 doc/guides/cryptodevs/features/cn20k.ini
create mode 100644 drivers/common/cnxk/roc_ie_ow_tls.h
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev.h
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_ops.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_ops.h
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_sec.h
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec.c
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec.h
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
create mode 100644 drivers/crypto/cnxk/cn20k_tls.c
create mode 100644 drivers/crypto/cnxk/cn20k_tls.h
create mode 100644 drivers/crypto/cnxk/cn20k_tls_ops.h
--
2.25.1
* [PATCH 01/40] crypto/cnxk: update the sg list population
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 02/40] crypto/cnxk: add lookaside IPsec CPT LF stats Tejasree Kondoj
` (38 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Update the last segment length (maximum extended length plus pad bytes) before the output scatter list is populated, so that the scatter entries account for the full output length.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_tls_ops.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
index e8e2547f68..c5ef3027ac 100644
--- a/drivers/crypto/cnxk/cn10k_tls_ops.h
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -136,6 +136,8 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_sglist_comp);
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len + pad_bytes;
i = 0;
scatter_comp = (struct roc_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes);
@@ -156,8 +158,6 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
w4.s.opcode_minor = pad_len;
- /* Output Scatter List */
- last_seg->data_len += sess->max_extended_len + pad_bytes;
inst->w4.u64 = w4.u64;
} else {
struct roc_sg2list_comp *scatter_comp, *gather_comp;
@@ -189,6 +189,8 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len + pad_bytes;
i = 0;
scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
@@ -209,8 +211,6 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
w4.s.opcode_minor = pad_len;
w4.s.param1 = w4.s.dlen;
w4.s.param2 = cop->param1.tls_record.content_type;
- /* Output Scatter List */
- last_seg->data_len += sess->max_extended_len + pad_bytes;
inst->w4.u64 = w4.u64;
}
--
2.25.1
* [PATCH 02/40] crypto/cnxk: add lookaside IPsec CPT LF stats
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 01/40] crypto/cnxk: update the sg list population Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 03/40] crypto/cnxk: fix qp stats PMD API Tejasree Kondoj
` (37 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Adding global CPT LF stats for lookaside IPsec by enabling the global packet and octet counters in the SA and printing the CPT LF stats in the error dump.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/cn10k_ipsec.c | 4 ++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 1 +
2 files changed, 5 insertions(+)
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 33ffda0a4c..ae0482d0fe 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -117,6 +117,8 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Enable mib counters */
sa_dptr->w0.s.count_mib_bytes = 1;
sa_dptr->w0.s.count_mib_pkts = 1;
+ sa_dptr->w0.s.count_glb_pkts = 1;
+ sa_dptr->w0.s.count_glb_octets = 1;
}
memset(out_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
@@ -221,6 +223,8 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
/* Enable mib counters */
sa_dptr->w0.s.count_mib_bytes = 1;
sa_dptr->w0.s.count_mib_pkts = 1;
+ sa_dptr->w0.s.count_glb_pkts = 1;
+ sa_dptr->w0.s.count_glb_octets = 1;
}
memset(in_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index c3a0a58c8f..613ce11ec1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -953,6 +953,7 @@ cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp)
plt_print("");
roc_cpt_afs_print(qp->lf.roc_cpt);
+ roc_cpt_lfs_print(qp->lf.roc_cpt);
}
int
--
2.25.1
* [PATCH 03/40] crypto/cnxk: fix qp stats PMD API
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 01/40] crypto/cnxk: update the sg list population Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 02/40] crypto/cnxk: add lookaside IPsec CPT LF stats Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 04/40] crypto/cnxk: fail Rx inject configure if not supported Tejasree Kondoj
` (36 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Fixing the qp stats PMD API: populate ctx_dec_pkts from CPT_LF_CTX_DEC_PKT_CNT instead of reading the decrypted byte count twice and leaving the decrypted packet count unset.
Fixes: bf52722b9377 ("crypto/cnxk: add PMD API to get queue stats")
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
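A usage sketch of the PMD API touched here; the counter fields are the ones corrected by this patch, while the helper name, the qptr getter and the exact stats struct name are assumptions based on rte_pmd_cnxk_crypto.h:

#include <inttypes.h>
#include <stdio.h>
#include <rte_pmd_cnxk_crypto.h>

/* Dump the CPT context-processing counters of one queue pair.
 * Assumes rte_pmd_cnxk_crypto_qptr_get() and the qp stats struct
 * as declared in rte_pmd_cnxk_crypto.h.
 */
static void
dump_qp_ctx_stats(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_pmd_cnxk_crypto_qp_stats stats;
	struct rte_pmd_cnxk_crypto_qptr *qptr;

	qptr = rte_pmd_cnxk_crypto_qptr_get(dev_id, qp_id);
	if (qptr == NULL || rte_pmd_cnxk_crypto_qp_stats_get(qptr, &stats) != 0)
		return;

	printf("ctx enc: %" PRIu64 " pkts, %" PRIu64 " bytes; ctx dec: %" PRIu64 " pkts, %" PRIu64 " bytes\n",
	       stats.ctx_enc_pkts, stats.ctx_enc_bytes,
	       stats.ctx_dec_pkts, stats.ctx_dec_bytes);
}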
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 613ce11ec1..61f3e135aa 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -1218,7 +1218,7 @@ rte_pmd_cnxk_crypto_qp_stats_get(struct rte_pmd_cnxk_crypto_qptr *qptr,
stats->ctx_enc_pkts = plt_read64(lf->rbase + CPT_LF_CTX_ENC_PKT_CNT);
stats->ctx_enc_bytes = plt_read64(lf->rbase + CPT_LF_CTX_ENC_BYTE_CNT);
- stats->ctx_dec_bytes = plt_read64(lf->rbase + CPT_LF_CTX_DEC_BYTE_CNT);
+ stats->ctx_dec_pkts = plt_read64(lf->rbase + CPT_LF_CTX_DEC_PKT_CNT);
stats->ctx_dec_bytes = plt_read64(lf->rbase + CPT_LF_CTX_DEC_BYTE_CNT);
return 0;
--
2.25.1
* [PATCH 04/40] crypto/cnxk: fail Rx inject configure if not supported
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (2 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 03/40] crypto/cnxk: fix qp stats PMD API Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 05/40] crypto/cnxk: add check for max supported gather entries Tejasree Kondoj
` (35 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Aakash Sasidharan, Anoob Joseph, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
From: Aakash Sasidharan <asasidharan@marvell.com>
Rx inject is supported only with the CPT05 microcode version.
sg_ver2 indicates that CPT05 is loaded. Fail the Rx inject
configuration with an ENOTSUP error if sg_ver2 is not supported.
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
---
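A small application-side sketch, assuming an illustrative helper name: Rx inject should be gated on the feature flag, since the Rx inject configure call can now also return -ENOTSUP when the CPT05 (sg_ver2) microcode is not loaded:

#include <stdbool.h>
#include <stdint.h>
#include <rte_cryptodev.h>

/* Check whether the crypto device advertises security Rx inject support. */
static bool
rx_inject_supported(uint8_t dev_id)
{
	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(dev_id, &info);
	return (info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) != 0;
}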
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 851e6f0a88..947f50b4c8 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -1981,6 +1981,7 @@ cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool ena
{
struct rte_cryptodev *crypto_dev = device;
struct rte_eth_dev *eth_dev;
+ struct cnxk_cpt_vf *vf;
int ret;
if (!rte_eth_dev_is_valid_port(port_id))
@@ -1989,6 +1990,11 @@ cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool ena
if (!(crypto_dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT))
return -ENOTSUP;
+ /* Rx Inject is supported only with CPT05. sg_ver2 indicates that CPT05 is loaded */
+ vf = crypto_dev->data->dev_private;
+ if (!(vf->cpt.hw_caps[CPT_ENG_TYPE_SE].sg_ver2 && vf->cpt.hw_caps[CPT_ENG_TYPE_IE].sg_ver2))
+ return -ENOTSUP;
+
eth_dev = &rte_eth_devices[port_id];
ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8);
--
2.25.1
* [PATCH 05/40] crypto/cnxk: add check for max supported gather entries
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (3 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 04/40] crypto/cnxk: fail Rx inject configure if not supported Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 06/40] crypto/cnxk: enable IV from application support Tejasree Kondoj
` (34 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add check for max supported gather entries.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
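For reference, ROC_SG2_MAX_PTRS (48) is expressed in pointers: each roc_sg2list_comp carries up to three pointers, so 48 segments correspond to (48 + 2) / 3 = 16 gather components, matching the gather_sz arithmetic already used on the SG2 path. Requests with more mbuf segments are now rejected in the datapath rather than building an oversized gather list.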
drivers/common/cnxk/roc_cpt_sg.h | 1 +
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 10 ++++++++++
drivers/crypto/cnxk/cn10k_tls_ops.h | 10 ++++++++++
3 files changed, 21 insertions(+)
diff --git a/drivers/common/cnxk/roc_cpt_sg.h b/drivers/common/cnxk/roc_cpt_sg.h
index c12187144f..e7e01cd29a 100644
--- a/drivers/common/cnxk/roc_cpt_sg.h
+++ b/drivers/common/cnxk/roc_cpt_sg.h
@@ -14,6 +14,7 @@
#define ROC_SG_ENTRY_SIZE sizeof(struct roc_sglist_comp)
#define ROC_SG_MAX_COMP 25
#define ROC_SG_MAX_DLEN_SIZE (ROC_SG_LIST_HDR_SIZE + (ROC_SG_MAX_COMP * ROC_SG_ENTRY_SIZE))
+#define ROC_SG2_MAX_PTRS 48
struct roc_sglist_comp {
union {
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 2c500afbca..87442c2a1f 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -159,6 +159,11 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -259,6 +264,11 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
void *m_data;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
index c5ef3027ac..427c31425c 100644
--- a/drivers/crypto/cnxk/cn10k_tls_ops.h
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -174,6 +174,11 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -305,6 +310,11 @@ process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
uint32_t g_size_bytes;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
--
2.25.1
* [PATCH 06/40] crypto/cnxk: enable IV from application support
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (4 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 05/40] crypto/cnxk: add check for max supported gather entries Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 07/40] crypto/cnxk: add probe for cn20k crypto device Tejasree Kondoj
` (33 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Enabling IV from the application as the default option instead of restricting it to LA_IPSEC_DEBUG builds.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
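A minimal sketch of the application side, assuming the xform and security session conf are set up elsewhere and the helper name is illustrative: set ipsec.options.iv_gen_disable = 1 in the session conf at creation time, then place the per-packet IV at the IV offset declared in the crypto xform, which is where the PMD reads it from (sess->cipher_iv_off):

#include <string.h>
#include <rte_crypto.h>
#include <rte_security.h>

/* At session creation time the application sets:
 *   conf.ipsec.options.iv_gen_disable = 1;  (IV supplied by the application)
 * Per packet, copy the IV into the op at the xform IV offset.
 */
static void
set_app_iv(struct rte_crypto_op *op, const struct rte_crypto_sym_xform *cipher_xform,
	   const uint8_t *iv)
{
	uint8_t *dst = rte_crypto_op_ctod_offset(op, uint8_t *, cipher_xform->cipher.iv.offset);

	memcpy(dst, iv, cipher_xform->cipher.iv.length);
}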
drivers/crypto/cnxk/cn9k_ipsec.c | 19 +------------------
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 5 +----
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 6 ++----
3 files changed, 4 insertions(+), 26 deletions(-)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index fa00c428e6..62478d2340 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -48,11 +48,8 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
if (ret)
return ret;
- sess->custom_hdr_len =
- sizeof(struct roc_ie_on_outb_hdr) - ROC_IE_ON_MAX_IV_LEN;
+ sess->custom_hdr_len = sizeof(struct roc_ie_on_outb_hdr) - ROC_IE_ON_MAX_IV_LEN;
-#ifdef LA_IPSEC_DEBUG
- /* Use IV from application in debug mode */
if (ipsec->options.iv_gen_disable == 1) {
sess->custom_hdr_len = sizeof(struct roc_ie_on_outb_hdr);
@@ -67,12 +64,6 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
sess->cipher_iv_len = crypto_xform->auth.iv.length;
}
}
-#else
- if (ipsec->options.iv_gen_disable != 0) {
- plt_err("Application provided IV is not supported");
- return -ENOTSUP;
- }
-#endif
ret = cnxk_on_ipsec_outb_sa_create(ipsec, crypto_xform, &sa->out_sa);
@@ -89,16 +80,8 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
param1.u16 = 0;
param1.s.ikev2 = 1;
-#ifdef LA_IPSEC_DEBUG
- /* Use IV from application in debug mode */
if (ipsec->options.iv_gen_disable == 1)
param1.s.per_pkt_iv = ROC_IE_ON_IV_SRC_FROM_DPTR;
-#else
- if (ipsec->options.iv_gen_disable != 0) {
- plt_err("Application provided IV is not supported");
- return -ENOTSUP;
- }
-#endif
w4.s.param1 = param1.u16;
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 3e9f1e7efb..befd5b0c05 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -159,13 +159,10 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
inst->w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
}
-#ifdef LA_IPSEC_DEBUG
if (sess->inst.w4 & ROC_IE_ON_PER_PKT_IV) {
- memcpy(&hdr->iv[0],
- rte_crypto_op_ctod_offset(cop, uint8_t *, sess->cipher_iv_off),
+ memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *, sess->cipher_iv_off),
sess->cipher_iv_len);
}
-#endif
m_src->pkt_len = pkt_len;
esn = ++sess->esn;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index e78bc37c37..63d2eef349 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -2102,11 +2102,9 @@ cn10k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
static void
cn9k_sec_ipsec_caps_update(struct rte_security_capability *sec_cap)
{
- if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
-#ifdef LA_IPSEC_DEBUG
+ if (sec_cap->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
sec_cap->ipsec.options.iv_gen_disable = 1;
-#endif
- }
+
sec_cap->ipsec.replay_win_sz_max = CNXK_ON_AR_WIN_SIZE_MAX;
sec_cap->ipsec.options.esn = 1;
}
--
2.25.1
* [PATCH 07/40] crypto/cnxk: add probe for cn20k crypto device
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (5 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 06/40] crypto/cnxk: enable IV from application support Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 08/40] crypto/cnxk: add ops skeleton for cn20k Tejasree Kondoj
` (32 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add probe for cn20k crypto device
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev.c | 12 +-
drivers/crypto/cnxk/cn20k_cryptodev.c | 152 ++++++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev.h | 13 +++
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 170 insertions(+), 8 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev.h
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 70bef13cda..598def51a5 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -22,14 +22,10 @@
uint8_t cn10k_cryptodev_driver_id;
static struct rte_pci_id pci_id_cpt_table[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_CN10K_RVU_CPT_VF)
- },
- /* sentinel */
- {
- .device_id = 0
- },
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CN10K_RVU_CPT_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CN10K_RVU_CPT_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KB, PCI_DEVID_CN10K_RVU_CPT_VF),
+ {.vendor_id = 0},
};
static int
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.c b/drivers/crypto/cnxk/cn20k_cryptodev.c
new file mode 100644
index 0000000000..e52336c2b7
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#include <bus_pci_driver.h>
+#include <cryptodev_pmd.h>
+#include <dev_driver.h>
+#include <rte_common.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_pci.h>
+
+#include "cn20k_cryptodev.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_capabilities.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_cryptodev_sec.h"
+
+#include "roc_api.h"
+
+uint8_t cn20k_cryptodev_driver_id;
+
+static struct rte_pci_id pci_id_cpt_table[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN20KA, PCI_DEVID_CN20K_RVU_CPT_VF),
+ {.vendor_id = 0},
+};
+
+static int
+cn20k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {.name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size =
+ sizeof(struct cnxk_cpt_vf)};
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *dev;
+ struct roc_cpt *roc_cpt;
+ struct cnxk_cpt_vf *vf;
+ int rc;
+
+ rc = roc_plt_init();
+ if (rc < 0) {
+ plt_err("Failed to initialize platform model");
+ return rc;
+ }
+
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
+ if (dev == NULL) {
+ rc = -ENODEV;
+ goto exit;
+ }
+
+ /* Get private data space allocated */
+ vf = dev->data->dev_private;
+
+ roc_cpt = &vf->cpt;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ roc_cpt->pci_dev = pci_dev;
+
+ rc = cnxk_cpt_parse_devargs(dev->device->devargs, vf);
+ if (rc) {
+ plt_err("Failed to parse devargs rc=%d", rc);
+ goto pmd_destroy;
+ }
+
+ rc = roc_cpt_dev_init(roc_cpt);
+ if (rc) {
+ plt_err("Failed to initialize roc cpt rc=%d", rc);
+ goto pmd_destroy;
+ }
+
+ rc = cnxk_cpt_eng_grp_add(roc_cpt);
+ if (rc) {
+ plt_err("Failed to add engine group rc=%d", rc);
+ goto dev_fini;
+ }
+
+ /* Create security context */
+ rc = cnxk_crypto_sec_ctx_create(dev);
+ if (rc)
+ goto dev_fini;
+ }
+
+ cnxk_cpt_caps_populate(vf);
+
+ dev->feature_flags = cnxk_cpt_default_ff_get();
+
+ dev->qp_depth_used = cnxk_cpt_qp_depth_used;
+
+ rte_cryptodev_pmd_probing_finish(dev);
+
+ return 0;
+
+dev_fini:
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ roc_cpt_dev_fini(roc_cpt);
+pmd_destroy:
+ rte_cryptodev_pmd_destroy(dev);
+exit:
+ plt_err("Could not create device (vendor_id: 0x%x device_id: 0x%x)", pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+ return rc;
+}
+
+static int
+cn20k_cpt_pci_remove(struct rte_pci_device *pci_dev)
+{
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *dev;
+ struct cnxk_cpt_vf *vf;
+ int ret;
+
+ if (pci_dev == NULL)
+ return -EINVAL;
+
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dev = rte_cryptodev_pmd_get_named_dev(name);
+ if (dev == NULL)
+ return -ENODEV;
+
+ /* Destroy security context */
+ cnxk_crypto_sec_ctx_destroy(dev);
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ dev->dev_ops = NULL;
+ vf = dev->data->dev_private;
+ ret = roc_cpt_dev_fini(&vf->cpt);
+ if (ret)
+ return ret;
+ }
+
+ return rte_cryptodev_pmd_destroy(dev);
+}
+
+static struct rte_pci_driver cn20k_cryptodev_pmd = {
+ .id_table = pci_id_cpt_table,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+ .probe = cn20k_cpt_pci_probe,
+ .remove = cn20k_cpt_pci_remove,
+};
+
+static struct cryptodev_driver cn20k_cryptodev_drv;
+
+RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_CN20K_PMD, cn20k_cryptodev_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_CN20K_PMD, pci_id_cpt_table);
+RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_CN20K_PMD, "vfio-pci");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cn20k_cryptodev_drv, cn20k_cryptodev_pmd.driver,
+ cn20k_cryptodev_driver_id);
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.h b/drivers/crypto/cnxk/cn20k_cryptodev.h
new file mode 100644
index 0000000000..d8a84d5464
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef _CN20K_CRYPTODEV_H_
+#define _CN20K_CRYPTODEV_H_
+
+/* Marvell OCTEON CN20K Crypto PMD device name */
+#define CRYPTODEV_NAME_CN20K_PMD crypto_cn20k
+
+extern uint8_t cn20k_cryptodev_driver_id;
+
+#endif /* _CN20K_CRYPTODEV_H_ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index e9b67b4a14..886bb5c428 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files(
'cn10k_cryptodev_sec.c',
'cn10k_ipsec.c',
'cn10k_tls.c',
+ 'cn20k_cryptodev.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
* [PATCH 08/40] crypto/cnxk: add ops skeleton for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (6 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 07/40] crypto/cnxk: add probe for cn20k crypto device Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 09/40] crypto/cnxk: add dev info get Tejasree Kondoj
` (31 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add ops skeleton for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev.c | 3 +
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 92 +++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 23 ++++++
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 119 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_ops.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_ops.h
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.c b/drivers/crypto/cnxk/cn20k_cryptodev.c
index e52336c2b7..980ea7df97 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.c
@@ -11,6 +11,7 @@
#include <rte_pci.h>
#include "cn20k_cryptodev.h"
+#include "cn20k_cryptodev_ops.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_capabilities.h"
#include "cnxk_cryptodev_ops.h"
@@ -86,6 +87,8 @@ cn20k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_
cnxk_cpt_caps_populate(vf);
+ dev->dev_ops = &cn20k_cpt_ops;
+ dev->driver_id = cn20k_cryptodev_driver_id;
dev->feature_flags = cnxk_cpt_default_ff_get();
dev->qp_depth_used = cnxk_cpt_qp_depth_used;
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
new file mode 100644
index 0000000000..64ab285235
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#include <cryptodev_pmd.h>
+#include <rte_cryptodev.h>
+
+#include "roc_cpt.h"
+#include "roc_idev.h"
+
+#include "cn20k_cryptodev.h"
+#include "cn20k_cryptodev_ops.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_se.h"
+
+#include "rte_pmd_cnxk_crypto.h"
+
+static int
+cn20k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, void *sess,
+ enum rte_crypto_op_type op_type,
+ enum rte_crypto_op_sess_type sess_type, void *mdata)
+{
+ (void)dev;
+ (void)sess;
+ (void)op_type;
+ (void)sess_type;
+ (void)mdata;
+
+ return 0;
+}
+
+static void
+cn20k_cpt_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info)
+{
+ (void)dev;
+ (void)info;
+}
+
+static int
+cn20k_sym_get_raw_dp_ctx_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return 0;
+}
+
+static int
+cn20k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
+{
+ (void)dev;
+ (void)qp_id;
+ (void)raw_dp_ctx;
+ (void)sess_type;
+ (void)session_ctx;
+ (void)is_update;
+ return 0;
+}
+
+struct rte_cryptodev_ops cn20k_cpt_ops = {
+ /* Device control ops */
+ .dev_configure = cnxk_cpt_dev_config,
+ .dev_start = cnxk_cpt_dev_start,
+ .dev_stop = cnxk_cpt_dev_stop,
+ .dev_close = cnxk_cpt_dev_close,
+ .dev_infos_get = cn20k_cpt_dev_info_get,
+
+ .stats_get = NULL,
+ .stats_reset = NULL,
+ .queue_pair_setup = cnxk_cpt_queue_pair_setup,
+ .queue_pair_release = cnxk_cpt_queue_pair_release,
+ .queue_pair_reset = cnxk_cpt_queue_pair_reset,
+
+ /* Symmetric crypto ops */
+ .sym_session_get_size = cnxk_cpt_sym_session_get_size,
+ .sym_session_configure = cnxk_cpt_sym_session_configure,
+ .sym_session_clear = cnxk_cpt_sym_session_clear,
+
+ /* Asymmetric crypto ops */
+ .asym_session_get_size = cnxk_ae_session_size_get,
+ .asym_session_configure = cnxk_ae_session_cfg,
+ .asym_session_clear = cnxk_ae_session_clear,
+
+ /* Event crypto ops */
+ .session_ev_mdata_set = cn20k_cpt_crypto_adapter_ev_mdata_set,
+ .queue_pair_event_error_query = cnxk_cpt_queue_pair_event_error_query,
+
+ /* Raw data-path API related operations */
+ .sym_get_raw_dp_ctx_size = cn20k_sym_get_raw_dp_ctx_size,
+ .sym_configure_raw_dp_ctx = cn20k_sym_configure_raw_dp_ctx,
+};
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
new file mode 100644
index 0000000000..d7c3aed22b
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef _CN20K_CRYPTODEV_OPS_H_
+#define _CN20K_CRYPTODEV_OPS_H_
+
+#include <cryptodev_pmd.h>
+#include <rte_compat.h>
+#include <rte_cryptodev.h>
+#include <rte_eventdev.h>
+
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
+
+#include "cnxk_cryptodev.h"
+
+extern struct rte_cryptodev_ops cn20k_cpt_ops;
+
+#endif /* _CN20K_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index 886bb5c428..0b078b4d06 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -18,6 +18,7 @@ sources = files(
'cn10k_ipsec.c',
'cn10k_tls.c',
'cn20k_cryptodev.c',
+ 'cn20k_cryptodev_ops.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
* [PATCH 09/40] crypto/cnxk: add dev info get
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (7 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 08/40] crypto/cnxk: add ops skeleton for cn20k Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 10/40] crypto/cnxk: add skeleton for enq deq functions Tejasree Kondoj
` (30 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add dev info get for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 64ab285235..ac321a2b91 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -33,8 +33,10 @@ cn20k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, vo
static void
cn20k_cpt_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info)
{
- (void)dev;
- (void)info;
+ if (info != NULL) {
+ cnxk_cpt_dev_info_get(dev, info);
+ info->driver_id = cn20k_cryptodev_driver_id;
+ }
}
static int
--
2.25.1
* [PATCH 10/40] crypto/cnxk: add skeleton for enq deq functions
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (8 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 09/40] crypto/cnxk: add dev info get Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 11/40] crypto/cnxk: add lmtst routines for cn20k Tejasree Kondoj
` (29 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add skeleton for cn20k enqueue/dequeue functions
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev.c | 1 +
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 29 +++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 1 +
3 files changed, 31 insertions(+)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.c b/drivers/crypto/cnxk/cn20k_cryptodev.c
index 980ea7df97..0845c1e20d 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.c
@@ -92,6 +92,7 @@ cn20k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_
dev->feature_flags = cnxk_cpt_default_ff_get();
dev->qp_depth_used = cnxk_cpt_qp_depth_used;
+ cn20k_cpt_set_enqdeq_fns(dev, vf);
rte_cryptodev_pmd_probing_finish(dev);
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index ac321a2b91..e3bea9aaf6 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -30,6 +30,35 @@ cn20k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, vo
return 0;
}
+static uint16_t
+cn20k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+ (void)qptr;
+ (void)ops;
+ (void)nb_ops;
+
+ return 0;
+}
+
+static uint16_t
+cn20k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+ (void)qptr;
+ (void)ops;
+ (void)nb_ops;
+
+ return 0;
+}
+
+void
+cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf)
+{
+ dev->enqueue_burst = cn20k_cpt_enqueue_burst;
+ dev->dequeue_burst = cn20k_cpt_dequeue_burst;
+
+ rte_mb();
+}
+
static void
cn20k_cpt_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info)
{
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
index d7c3aed22b..d6f1592a56 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
@@ -20,4 +20,5 @@
extern struct rte_cryptodev_ops cn20k_cpt_ops;
+void cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
#endif /* _CN20K_CRYPTODEV_OPS_H_ */
--
2.25.1
* [PATCH 11/40] crypto/cnxk: add lmtst routines for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (9 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 10/40] crypto/cnxk: add skeleton for enq deq functions Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 12/40] crypto/cnxk: add enqueue function support Tejasree Kondoj
` (28 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add lmtst routines for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
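A worked example of the batching arithmetic in cn20k_cpt_lmtst_dual_submit() (the STEORL argument encodes "number of LMT lines minus one", and each LMT line normally carries two instructions): with *i = 37, flag_odd is 1 and *i drops to 36; since 36 > CN20K_PKTS_PER_STEORL, the first STEORL flushes 16 dual-instruction lines (32 instructions), the second flushes the remaining 2 dual lines (4 instructions), and the leftover odd instruction sitting in the next line is submitted on its own after the I/O address data-width field is temporarily switched to the single-instruction encoding; *i is then restored to 37 so the caller accounts for every submitted instruction.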
drivers/common/cnxk/roc_cpt.h | 7 +--
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 53 +++++++++++++++++++++++
2 files changed, 57 insertions(+), 3 deletions(-)
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 37634793d4..02f49c06b7 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -64,9 +64,10 @@
ROC_CN10K_TWO_CPT_INST_DW_M1 << (19 + 3 * 13) | \
ROC_CN10K_TWO_CPT_INST_DW_M1 << (19 + 3 * 14))
-#define ROC_CN20K_CPT_LMT_ARG ROC_CN10K_CPT_LMT_ARG
-#define ROC_CN20K_DUAL_CPT_LMT_ARG ROC_CN10K_DUAL_CPT_LMT_ARG
-#define ROC_CN20K_CPT_INST_DW_M1 ROC_CN10K_CPT_INST_DW_M1
+#define ROC_CN20K_CPT_LMT_ARG ROC_CN10K_CPT_LMT_ARG
+#define ROC_CN20K_DUAL_CPT_LMT_ARG ROC_CN10K_DUAL_CPT_LMT_ARG
+#define ROC_CN20K_CPT_INST_DW_M1 ROC_CN10K_CPT_INST_DW_M1
+#define ROC_CN20K_TWO_CPT_INST_DW_M1 ROC_CN10K_TWO_CPT_INST_DW_M1
/* CPT helper macros */
#define ROC_CPT_AH_HDR_LEN 12
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
index d6f1592a56..3e2ad1e2df 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
@@ -18,7 +18,60 @@
#include "cnxk_cryptodev.h"
+#define CN20K_PKTS_PER_STEORL 32
+#define CN20K_LMTLINES_PER_STEORL 16
+
extern struct rte_cryptodev_ops cn20k_cpt_ops;
void cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
+
+static __rte_always_inline void __rte_hot
+cn20k_cpt_lmtst_dual_submit(uint64_t *io_addr, const uint16_t lmt_id, int *i)
+{
+ uint64_t lmt_arg;
+
+ /* Check if the total number of instructions is odd or even. */
+ const int flag_odd = *i & 0x1;
+
+ /* Reduce i by 1 when odd number of instructions.*/
+ *i -= flag_odd;
+
+ if (*i > CN20K_PKTS_PER_STEORL) {
+ lmt_arg = ROC_CN20K_DUAL_CPT_LMT_ARG | (CN20K_LMTLINES_PER_STEORL - 1) << 12 |
+ (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, *io_addr);
+ lmt_arg = ROC_CN20K_DUAL_CPT_LMT_ARG |
+ (*i / 2 - CN20K_LMTLINES_PER_STEORL - 1) << 12 |
+ (uint64_t)(lmt_id + CN20K_LMTLINES_PER_STEORL);
+ roc_lmt_submit_steorl(lmt_arg, *io_addr);
+ if (flag_odd) {
+ *io_addr = (*io_addr & ~(uint64_t)(0x7 << 4)) |
+ (ROC_CN20K_CPT_INST_DW_M1 << 4);
+ lmt_arg = (uint64_t)(lmt_id + *i / 2);
+ roc_lmt_submit_steorl(lmt_arg, *io_addr);
+ *io_addr = (*io_addr & ~(uint64_t)(0x7 << 4)) |
+ (ROC_CN20K_TWO_CPT_INST_DW_M1 << 4);
+ *i += 1;
+ }
+ } else {
+ if (*i != 0) {
+ lmt_arg =
+ ROC_CN20K_DUAL_CPT_LMT_ARG | (*i / 2 - 1) << 12 | (uint64_t)lmt_id;
+ roc_lmt_submit_steorl(lmt_arg, *io_addr);
+ }
+
+ if (flag_odd) {
+ *io_addr = (*io_addr & ~(uint64_t)(0x7 << 4)) |
+ (ROC_CN20K_CPT_INST_DW_M1 << 4);
+ lmt_arg = (uint64_t)(lmt_id + *i / 2);
+ roc_lmt_submit_steorl(lmt_arg, *io_addr);
+ *io_addr = (*io_addr & ~(uint64_t)(0x7 << 4)) |
+ (ROC_CN20K_TWO_CPT_INST_DW_M1 << 4);
+ *i += 1;
+ }
+ }
+
+ rte_io_wmb();
+}
+
#endif /* _CN20K_CRYPTODEV_OPS_H_ */
--
2.25.1
* [PATCH 12/40] crypto/cnxk: add enqueue function support
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (10 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 11/40] crypto/cnxk: add lmtst routines for cn20k Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 13/40] crypto/cnxk: add cryptodev dequeue support for cn20k Tejasree Kondoj
` (27 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add cryptodev enqueue function support for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev.c | 2 +-
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 193 +++++++++++++++++++++-
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 2 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 11 +-
4 files changed, 195 insertions(+), 13 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.c b/drivers/crypto/cnxk/cn20k_cryptodev.c
index 0845c1e20d..4c70c15ca9 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.c
@@ -92,7 +92,7 @@ cn20k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_
dev->feature_flags = cnxk_cpt_default_ff_get();
dev->qp_depth_used = cnxk_cpt_qp_depth_used;
- cn20k_cpt_set_enqdeq_fns(dev, vf);
+ cn20k_cpt_set_enqdeq_fns(dev);
rte_cryptodev_pmd_probing_finish(dev);
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index e3bea9aaf6..c59a6dab59 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -10,6 +10,7 @@
#include "cn20k_cryptodev.h"
#include "cn20k_cryptodev_ops.h"
+#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
#include "cnxk_se.h"
@@ -30,14 +31,196 @@ cn20k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, vo
return 0;
}
+static inline struct cnxk_se_sess *
+cn20k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct rte_cryptodev_sym_session *sess;
+ struct cnxk_se_sess *priv;
+ int ret;
+
+ /* Create temporary session */
+ if (rte_mempool_get(qp->sess_mp, (void **)&sess) < 0)
+ return NULL;
+
+ ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, sess, true);
+ if (ret) {
+ rte_mempool_put(qp->sess_mp, (void *)sess);
+ goto sess_put;
+ }
+
+ priv = (void *)sess;
+ sym_op->session = sess;
+
+ return priv;
+
+sess_put:
+ rte_mempool_put(qp->sess_mp, sess);
+ return NULL;
+}
+
+static inline int
+cn20k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[],
+ struct cpt_inflight_req *infl_req)
+{
+ struct rte_crypto_asym_op *asym_op;
+ struct rte_crypto_sym_op *sym_op;
+ struct cnxk_ae_sess *ae_sess;
+ struct cnxk_se_sess *sess;
+ struct rte_crypto_op *op;
+ uint64_t w7;
+ int ret;
+
+ const union cpt_res_s res = {
+ .cn20k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ op = ops[0];
+
+ inst[0].w0.u64 = 0;
+ inst[0].w2.u64 = 0;
+ inst[0].w3.u64 = 0;
+
+ sym_op = op->sym;
+
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ sess = (struct cnxk_se_sess *)(sym_op->session);
+ ret = cpt_sym_inst_fill(qp, op, sess, infl_req, &inst[0], true);
+ if (unlikely(ret))
+ return 0;
+ w7 = sess->cpt_inst_w7;
+ } else {
+ sess = cn20k_cpt_sym_temp_sess_create(qp, op);
+ if (unlikely(sess == NULL)) {
+ plt_dp_err("Could not create temp session");
+ return 0;
+ }
+
+ ret = cpt_sym_inst_fill(qp, op, sess, infl_req, &inst[0], true);
+ if (unlikely(ret)) {
+ sym_session_clear(op->sym->session, true);
+ rte_mempool_put(qp->sess_mp, op->sym->session);
+ return 0;
+ }
+ w7 = sess->cpt_inst_w7;
+ }
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ asym_op = op->asym;
+ ae_sess = (struct cnxk_ae_sess *)asym_op->session;
+ ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0], ae_sess);
+ if (unlikely(ret))
+ return 0;
+ w7 = ae_sess->cpt_inst_w7;
+ } else {
+ plt_dp_err("Not supported Asym op without session");
+ return 0;
+ }
+ } else {
+ plt_dp_err("Unsupported op type");
+ return 0;
+ }
+
+ inst[0].res_addr = (uint64_t)&infl_req->res;
+ rte_atomic_store_explicit(&infl_req->res.u64[0], res.u64[0], rte_memory_order_relaxed);
+ infl_req->cop = op;
+
+ inst[0].w7.u64 = w7;
+
+#ifdef CPT_INST_DEBUG_ENABLE
+ infl_req->dptr = (uint8_t *)inst[0].dptr;
+ infl_req->rptr = (uint8_t *)inst[0].rptr;
+ infl_req->scatter_sz = inst[0].w6.s.scatter_sz;
+ infl_req->opcode_major = inst[0].w4.s.opcode_major;
+
+ rte_hexdump(rte_log_get_stream(), "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
+ plt_err("major opcode:%d", inst[0].w4.s.opcode_major);
+ plt_err("minor opcode:%d", inst[0].w4.s.opcode_minor);
+ plt_err("param1:%d", inst[0].w4.s.param1);
+ plt_err("param2:%d", inst[0].w4.s.param2);
+ plt_err("dlen:%d", inst[0].w4.s.dlen);
+
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
+ cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+#endif
+
+ return 1;
+}
+
static uint16_t
cn20k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
{
- (void)qptr;
- (void)ops;
- (void)nb_ops;
+ struct cpt_inflight_req *infl_req;
+ uint64_t head, lmt_base, io_addr;
+ uint16_t nb_allowed, count = 0;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ struct cpt_inst_s *inst;
+ union cpt_fc_write_s fc;
+ uint64_t *fc_addr;
+ uint16_t lmt_id;
+ int ret, i;
- return 0;
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ head = pend_q->head;
+ nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
+ nb_ops = RTE_MIN(nb_ops, nb_allowed);
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+ fc_addr = qp->lmtline.fc_addr;
+
+ const uint32_t fc_thresh = qp->lmtline.fc_thresh;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+again:
+ fc.u64[0] = rte_atomic_load_explicit(fc_addr, rte_memory_order_relaxed);
+ if (unlikely(fc.s.qsize > fc_thresh)) {
+ i = 0;
+ goto pend_q_commit;
+ }
+
+ for (i = 0; i < RTE_MIN(CN20K_CPT_PKTS_PER_LOOP, nb_ops); i++) {
+ infl_req = &pend_q->req_queue[head];
+ infl_req->op_flags = 0;
+
+ ret = cn20k_cpt_fill_inst(qp, ops + i, &inst[i], infl_req);
+ if (unlikely(ret != 1)) {
+ plt_dp_err("Could not process op: %p", ops + i);
+ if (i == 0)
+ goto pend_q_commit;
+ break;
+ }
+
+ pending_queue_advance(&head, pq_mask);
+ }
+
+ cn20k_cpt_lmtst_dual_submit(&io_addr, lmt_id, &i);
+
+ if (nb_ops - i > 0 && i == CN20K_CPT_PKTS_PER_LOOP) {
+ nb_ops -= CN20K_CPT_PKTS_PER_LOOP;
+ ops += CN20K_CPT_PKTS_PER_LOOP;
+ count += CN20K_CPT_PKTS_PER_LOOP;
+ goto again;
+ }
+
+pend_q_commit:
+ rte_atomic_thread_fence(rte_memory_order_release);
+
+ pend_q->head = head;
+ pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+
+ return count + i;
}
static uint16_t
@@ -51,7 +234,7 @@ cn20k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
}
void
-cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf)
+cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
{
dev->enqueue_burst = cn20k_cpt_enqueue_burst;
dev->dequeue_burst = cn20k_cpt_dequeue_burst;
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
index 3e2ad1e2df..bdd6f71022 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
@@ -23,7 +23,7 @@
extern struct rte_cryptodev_ops cn20k_cpt_ops;
-void cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf);
+void cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
static __rte_always_inline void __rte_hot
cn20k_cpt_lmtst_dual_submit(uint64_t *io_addr, const uint16_t lmt_id, int *i)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 54d32abc9c..6ad52ec13e 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -32,14 +32,13 @@
#define MOD_INC(i, l) ((i) == (l - 1) ? (i) = 0 : (i)++)
-#define CN10K_CPT_PKTS_PER_LOOP 64
+#define CN10K_CPT_PKTS_PER_LOOP 64
+#define CN20K_CPT_PKTS_PER_LOOP 64
/* Macros to form words in CPT instruction */
-#define CNXK_CPT_INST_W2(tag, tt, grp, rvu_pf_func) \
- ((tag) | ((uint64_t)(tt) << 32) | ((uint64_t)(grp) << 34) | \
- ((uint64_t)(rvu_pf_func) << 48))
-#define CNXK_CPT_INST_W3(qord, wqe_ptr) \
- (qord | ((uintptr_t)(wqe_ptr) >> 3) << 3)
+#define CNXK_CPT_INST_W2(tag, tt, grp, rvu_pf_func) \
+ ((tag) | ((uint64_t)(tt) << 32) | ((uint64_t)(grp) << 34) | ((uint64_t)(rvu_pf_func) << 48))
+#define CNXK_CPT_INST_W3(qord, wqe_ptr) (qord | ((uintptr_t)(wqe_ptr) >> 3) << 3)
struct cpt_qp_meta_info {
struct rte_mempool *pool;
--
2.25.1
* [PATCH 13/40] crypto/cnxk: add cryptodev dequeue support for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (11 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 12/40] crypto/cnxk: add enqueue function support Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 14/40] crypto/cnxk: move debug dumps to common Tejasree Kondoj
` (26 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add dequeue support in cryptodev for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
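For context, the standard burst loop these enqueue/dequeue handlers plug into (helper name illustrative; dev_id and qp_id refer to a queue pair configured on the cn20k device, and ops[] are prepared crypto ops):

#include <rte_cryptodev.h>

/* Enqueue a burst of prepared ops and poll until all of them are dequeued. */
static void
process_burst(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op **ops, uint16_t nb_ops)
{
	uint16_t enq, deq = 0;

	enq = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, nb_ops);
	while (deq < enq)
		deq += rte_cryptodev_dequeue_burst(dev_id, qp_id, ops + deq, enq - deq);
}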
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 141 +++++++++++++++++++++-
1 file changed, 137 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index c59a6dab59..dbfaa2322a 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -223,14 +223,147 @@ cn20k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
return count + i;
}
+static inline void
+cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
+ struct cpt_inflight_req *infl_req, struct cpt_cn20k_res_s *res)
+{
+ const uint8_t uc_compcode = res->uc_compcode;
+ const uint8_t compcode = res->compcode;
+
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+ if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ struct cnxk_ae_sess *sess;
+
+ sess = (struct cnxk_ae_sess *)cop->asym->session;
+ if (sess->xfrm_type == RTE_CRYPTO_ASYM_XFORM_ECDH &&
+ cop->asym->ecdh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY) {
+ if (likely(compcode == CPT_COMP_GOOD)) {
+ if (uc_compcode == ROC_AE_ERR_ECC_POINT_NOT_ON_CURVE) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ } else if (uc_compcode == ROC_AE_ERR_ECC_PAI) {
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ return;
+ }
+ }
+ }
+ }
+
+ if (likely(compcode == CPT_COMP_GOOD)) {
+#ifdef CPT_INST_DEBUG_ENABLE
+ cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+#endif
+
+ if (unlikely(uc_compcode)) {
+ if (uc_compcode == ROC_SE_ERR_GC_ICV_MISCOMPARE)
+ cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ plt_dp_info("Request failed with microcode error");
+ plt_dp_info("MC completion code 0x%x", res->uc_compcode);
+ cop->aux_flags = uc_compcode;
+ goto temp_sess_free;
+ }
+
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ /* Verify authentication data if required */
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_AUTH_VERIFY)) {
+ uintptr_t *rsp = infl_req->mdata;
+
+ compl_auth_verify(cop, (uint8_t *)rsp[0], rsp[1]);
+ }
+ } else if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ struct rte_crypto_asym_op *op = cop->asym;
+ uintptr_t *mdata = infl_req->mdata;
+ struct cnxk_ae_sess *sess = (struct cnxk_ae_sess *)op->session;
+
+ cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
+ }
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ plt_dp_info("HW completion code 0x%x", res->compcode);
+
+ switch (compcode) {
+ case CPT_COMP_INSTERR:
+ plt_dp_err("Request failed with instruction error");
+ break;
+ case CPT_COMP_FAULT:
+ plt_dp_err("Request failed with DMA fault");
+ break;
+ case CPT_COMP_HWERR:
+ plt_dp_err("Request failed with hardware error");
+ break;
+ default:
+ plt_dp_err("Request failed with unknown completion code");
+ }
+ }
+
+temp_sess_free:
+ if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ sym_session_clear(cop->sym->session, true);
+ rte_mempool_put(qp->sess_mp, cop->sym->session);
+ cop->sym->session = NULL;
+ }
+ }
+}
+
static uint16_t
cn20k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
{
- (void)qptr;
- (void)ops;
- (void)nb_ops;
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ uint64_t infl_cnt, pq_tail;
+ struct rte_crypto_op *cop;
+ union cpt_res_s res;
+ int i;
- return 0;
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ pq_tail = pend_q->tail;
+ infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask);
+ nb_ops = RTE_MIN(nb_ops, infl_cnt);
+
+ /* Ensure infl_cnt isn't read before data lands */
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+
+ for (i = 0; i < nb_ops; i++) {
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ res.u64[0] =
+ rte_atomic_load_explicit(&infl_req->res.u64[0], rte_memory_order_relaxed);
+
+ if (unlikely(res.cn20k.compcode == CPT_COMP_NOT_DONE)) {
+ if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) {
+ plt_err("Request timed out");
+ cnxk_cpt_dump_on_err(qp);
+ pend_q->time_out = rte_get_timer_cycles() +
+ DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+ }
+ break;
+ }
+
+ pending_queue_advance(&pq_tail, pq_mask);
+
+ cop = infl_req->cop;
+
+ ops[i] = cop;
+
+ cn20k_cpt_dequeue_post_process(qp, cop, infl_req, &res.cn20k);
+
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
+ rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
+ }
+
+ pend_q->tail = pq_tail;
+
+ return i;
}
void
--
2.25.1
* [PATCH 14/40] crypto/cnxk: move debug dumps to common
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (12 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 13/40] crypto/cnxk: add cryptodev dequeue support for cn20k Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 15/40] crypto/cnxk: add rte security skeleton for cn20k Tejasree Kondoj
` (25 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Move the CPT instruction debug dump helpers to common code so that cn10k and cn20k can share them.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 125 +++-------------------
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 7 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 101 +++++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 6 ++
4 files changed, 126 insertions(+), 113 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 947f50b4c8..9ad0629519 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -98,104 +98,6 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
-#ifdef CPT_INST_DEBUG_ENABLE
-static inline void
-cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components)
-{
- struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
- const char *list = glist ? "glist" : "slist";
- struct roc_sg2list_comp *sg_ptr = NULL;
- uint16_t list_cnt = 0;
- char suffix[64];
- int i, j;
-
- sg_ptr = (void *)in_buffer;
- for (i = 0; i < components; i++) {
- for (j = 0; j < sg_ptr->u.s.valid_segs; j++) {
- list_ptr[i * 3 + j].size = sg_ptr->u.s.len[j];
- list_ptr[i * 3 + j].vaddr = (void *)sg_ptr->ptr[j];
- list_ptr[i * 3 + j].vaddr = list_ptr[i * 3 + j].vaddr;
- list_cnt++;
- }
- sg_ptr++;
- }
-
- printf("Current %s: %u\n", list, list_cnt);
-
- for (i = 0; i < list_cnt; i++) {
- snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
- list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
- rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
- }
-}
-
-static inline void
-cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist)
-{
- struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
- const char *list = glist ? "glist" : "slist";
- struct roc_sglist_comp *sg_ptr = NULL;
- uint16_t list_cnt, components;
- char suffix[64];
- int i;
-
- sg_ptr = (void *)(in_buffer + 8);
- list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[2]));
- if (!glist) {
- components = list_cnt / 4;
- if (list_cnt % 4)
- components++;
- sg_ptr += components;
- list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[3]));
- }
-
- printf("Current %s: %u\n", list, list_cnt);
- components = list_cnt / 4;
- for (i = 0; i < components; i++) {
- list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
- list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
- list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
- list_ptr[i * 4 + 3].size = rte_be_to_cpu_16(sg_ptr->u.s.len[3]);
- list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
- list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
- list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
- list_ptr[i * 4 + 3].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[3]);
- list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
- list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
- list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
- list_ptr[i * 4 + 3].vaddr = list_ptr[i * 4 + 3].vaddr;
- sg_ptr++;
- }
-
- components = list_cnt % 4;
- switch (components) {
- case 3:
- list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
- list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
- list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
- /* FALLTHROUGH */
- case 2:
- list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
- list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
- list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
- /* FALLTHROUGH */
- case 1:
- list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
- list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
- list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
- break;
- default:
- break;
- }
-
- for (i = 0; i < list_cnt; i++) {
- snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
- list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
- rte_hexdump(stdout, suffix, list_ptr[i].vaddr, list_ptr[i].size);
- }
-}
-#endif
-
static __rte_always_inline int __rte_hot
cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
@@ -305,20 +207,22 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
infl_req->scatter_sz = inst[0].w6.s.scatter_sz;
infl_req->opcode_major = inst[0].w4.s.opcode_major;
- rte_hexdump(stdout, "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
- printf("major opcode:%d\n", inst[0].w4.s.opcode_major);
- printf("minor opcode:%d\n", inst[0].w4.s.opcode_minor);
- printf("param1:%d\n", inst[0].w4.s.param1);
- printf("param2:%d\n", inst[0].w4.s.param2);
- printf("dlen:%d\n", inst[0].w4.s.dlen);
+ rte_hexdump(rte_log_get_stream(), "cptr", (void *)(uint64_t)inst[0].w7.s.cptr, 128);
+ plt_err("major opcode:%d", inst[0].w4.s.opcode_major);
+ plt_err("minor opcode:%d", inst[0].w4.s.opcode_minor);
+ plt_err("param1:%d", inst[0].w4.s.param1);
+ plt_err("param2:%d", inst[0].w4.s.param2);
+ plt_err("dlen:%d", inst[0].w4.s.dlen);
if (is_sg_ver2) {
- cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
- cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1,
+ inst[0].w5.s.gather_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0,
+ inst[0].w6.s.scatter_sz);
} else {
if (infl_req->opcode_major >> 7) {
- cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 1);
- cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 0);
+ cnxk_cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 1);
+ cnxk_cpt_request_data_sg_mode_dump((void *)inst[0].dptr, 0);
}
}
#endif
@@ -1163,10 +1067,11 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
if (likely(compcode == CPT_COMP_GOOD)) {
#ifdef CPT_INST_DEBUG_ENABLE
if (infl_req->is_sg_ver2)
- cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0,
+ infl_req->scatter_sz);
else {
if (infl_req->opcode_major >> 7)
- cpt_request_data_sg_mode_dump(infl_req->dptr, 0);
+ cnxk_cpt_request_data_sg_mode_dump(infl_req->dptr, 0);
}
#endif
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index dbfaa2322a..7e84f30f8e 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -4,6 +4,7 @@
#include <cryptodev_pmd.h>
#include <rte_cryptodev.h>
+#include <rte_hexdump.h>
#include "roc_cpt.h"
#include "roc_idev.h"
@@ -142,8 +143,8 @@ cn20k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
plt_err("param2:%d", inst[0].w4.s.param2);
plt_err("dlen:%d", inst[0].w4.s.dlen);
- cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
- cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump((void *)inst[0].dptr, 1, inst[0].w5.s.gather_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump((void *)inst[0].rptr, 0, inst[0].w6.s.scatter_sz);
#endif
return 1;
@@ -253,7 +254,7 @@ cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
if (likely(compcode == CPT_COMP_GOOD)) {
#ifdef CPT_INST_DEBUG_ENABLE
- cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
+ cnxk_cpt_request_data_sgv2_mode_dump(infl_req->rptr, 0, infl_req->scatter_sz);
#endif
if (unlikely(uc_compcode)) {
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 61f3e135aa..b4020f96c1 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -7,6 +7,9 @@
#include <rte_cryptodev.h>
#include <cryptodev_pmd.h>
#include <rte_errno.h>
+#ifdef CPT_INST_DEBUG_ENABLE
+#include <rte_hexdump.h>
+#endif
#include <rte_security_driver.h>
#include "roc_ae_fpm_tables.h"
@@ -1223,3 +1226,101 @@ rte_pmd_cnxk_crypto_qp_stats_get(struct rte_pmd_cnxk_crypto_qptr *qptr,
return 0;
}
+
+#ifdef CPT_INST_DEBUG_ENABLE
+void
+cnxk_cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sg2list_comp *sg_ptr = NULL;
+ uint16_t list_cnt = 0;
+ char suffix[64];
+ int i, j;
+
+ sg_ptr = (void *)in_buffer;
+ for (i = 0; i < components; i++) {
+ for (j = 0; j < sg_ptr->u.s.valid_segs; j++) {
+ list_ptr[i * 3 + j].size = sg_ptr->u.s.len[j];
+ list_ptr[i * 3 + j].vaddr = (void *)sg_ptr->ptr[j];
+ list_ptr[i * 3 + j].vaddr = list_ptr[i * 3 + j].vaddr;
+ list_cnt++;
+ }
+ sg_ptr++;
+ }
+
+ plt_err("Current %s: %u", list, list_cnt);
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(rte_log_get_stream(), suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+
+void
+cnxk_cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist)
+{
+ struct roc_se_buf_ptr list_ptr[ROC_MAX_SG_CNT];
+ const char *list = glist ? "glist" : "slist";
+ struct roc_sglist_comp *sg_ptr = NULL;
+ uint16_t list_cnt, components;
+ char suffix[64];
+ int i;
+
+ sg_ptr = (void *)(in_buffer + 8);
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[2]));
+ if (!glist) {
+ components = list_cnt / 4;
+ if (list_cnt % 4)
+ components++;
+ sg_ptr += components;
+ list_cnt = rte_be_to_cpu_16((((uint16_t *)in_buffer)[3]));
+ }
+
+ plt_err("Current %s: %u", list, list_cnt);
+ components = list_cnt / 4;
+ for (i = 0; i < components; i++) {
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 3].size = rte_be_to_cpu_16(sg_ptr->u.s.len[3]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 3].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[3]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ list_ptr[i * 4 + 3].vaddr = list_ptr[i * 4 + 3].vaddr;
+ sg_ptr++;
+ }
+
+ components = list_cnt % 4;
+ switch (components) {
+ case 3:
+ list_ptr[i * 4 + 2].size = rte_be_to_cpu_16(sg_ptr->u.s.len[2]);
+ list_ptr[i * 4 + 2].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[2]);
+ list_ptr[i * 4 + 2].vaddr = list_ptr[i * 4 + 2].vaddr;
+ [[fallthrough]];
+ case 2:
+ list_ptr[i * 4 + 1].size = rte_be_to_cpu_16(sg_ptr->u.s.len[1]);
+ list_ptr[i * 4 + 1].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[1]);
+ list_ptr[i * 4 + 1].vaddr = list_ptr[i * 4 + 1].vaddr;
+ [[fallthrough]];
+ case 1:
+ list_ptr[i * 4 + 0].size = rte_be_to_cpu_16(sg_ptr->u.s.len[0]);
+ list_ptr[i * 4 + 0].vaddr = (void *)rte_be_to_cpu_64(sg_ptr->ptr[0]);
+ list_ptr[i * 4 + 0].vaddr = list_ptr[i * 4 + 0].vaddr;
+ break;
+ default:
+ break;
+ }
+
+ for (i = 0; i < list_cnt; i++) {
+ snprintf(suffix, sizeof(suffix), "%s[%d]: vaddr 0x%" PRIx64 ", vaddr %p len %u",
+ list, i, (uint64_t)list_ptr[i].vaddr, list_ptr[i].vaddr, list_ptr[i].size);
+ rte_hexdump(rte_log_get_stream(), suffix, list_ptr[i].vaddr, list_ptr[i].size);
+ }
+}
+#endif
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 6ad52ec13e..417b869828 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -157,6 +157,12 @@ int cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp
uint32_t cnxk_cpt_qp_depth_used(void *qptr);
+#ifdef CPT_INST_DEBUG_ENABLE
+void cnxk_cpt_request_data_sg_mode_dump(uint8_t *in_buffer, bool glist);
+
+void cnxk_cpt_request_data_sgv2_mode_dump(uint8_t *in_buffer, bool glist, uint16_t components);
+#endif
+
static __rte_always_inline void
pending_queue_advance(uint64_t *index, const uint64_t mask)
{
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
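The dump helpers above are compiled only when CPT_INST_DEBUG_ENABLE is defined. As a small, self-contained illustration of the logging pattern they switch to, directing rte_hexdump() at the DPDK log stream instead of stdout; the buffer, length and title here are arbitrary placeholders.

#include <stdio.h>
#include <rte_hexdump.h>
#include <rte_log.h>

static void
dump_buf(const void *buf, unsigned int len)
{
	/* Send the hexdump to the same stream used by the DPDK loggers,
	 * rather than writing directly to stdout.
	 */
	rte_hexdump(rte_log_get_stream(), "cpt-dbg", buf, len);
}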
* [PATCH 15/40] crypto/cnxk: add rte security skeletion for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (13 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 14/40] crypto/cnxk: move debug dumps to common Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 16/40] crypto/cnxk: add security session creation Tejasree Kondoj
` (24 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add skeleton for rte_security support for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev.c | 2 +
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 106 ++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev_sec.h | 19 ++++
drivers/crypto/cnxk/cn20k_ipsec.c | 68 ++++++++++++++
drivers/crypto/cnxk/cn20k_ipsec.h | 41 +++++++++
drivers/crypto/cnxk/meson.build | 2 +
6 files changed, 238 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_sec.c
create mode 100644 drivers/crypto/cnxk/cn20k_cryptodev_sec.h
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec.c
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec.h
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev.c b/drivers/crypto/cnxk/cn20k_cryptodev.c
index 4c70c15ca9..7b8293cc05 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev.c
@@ -12,6 +12,7 @@
#include "cn20k_cryptodev.h"
#include "cn20k_cryptodev_ops.h"
+#include "cn20k_cryptodev_sec.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_capabilities.h"
#include "cnxk_cryptodev_ops.h"
@@ -93,6 +94,7 @@ cn20k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_
dev->qp_depth_used = cnxk_cpt_qp_depth_used;
cn20k_cpt_set_enqdeq_fns(dev);
+ cn20k_sec_ops_override();
rte_cryptodev_pmd_probing_finish(dev);
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
new file mode 100644
index 0000000000..04c8e8f506
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#include <rte_security.h>
+
+#include "cn20k_cryptodev_ops.h"
+#include "cn20k_cryptodev_sec.h"
+#include "cnxk_cryptodev_ops.h"
+
+static int
+cn20k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
+ struct rte_security_session *sess)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(conf);
+ RTE_SET_USED(sess);
+
+ return -ENOTSUP;
+}
+
+static int
+cn20k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(sec_sess);
+
+ return -EINVAL;
+}
+
+static unsigned int
+cn20k_sec_session_get_size(void *dev __rte_unused)
+{
+ return 0;
+}
+
+static int
+cn20k_sec_session_stats_get(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_stats *stats)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(sec_sess);
+ RTE_SET_USED(stats);
+
+ return -ENOTSUP;
+}
+
+static int
+cn20k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
+ struct rte_security_session_conf *conf)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(sec_sess);
+ RTE_SET_USED(conf);
+
+ return -ENOTSUP;
+}
+
+static int
+cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
+{
+ RTE_SET_USED(device);
+ RTE_SET_USED(port_id);
+ RTE_SET_USED(enable);
+
+ return -ENOTSUP;
+}
+
+#if defined(RTE_ARCH_ARM64)
+static uint16_t
+cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(nb_pkts);
+
+ return 0;
+}
+#else
+uint16_t __rte_hot
+cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(nb_pkts);
+
+ return 0;
+}
+#endif
+
+/* Update platform specific security ops */
+void
+cn20k_sec_ops_override(void)
+{
+ /* Update platform specific ops */
+ cnxk_sec_ops.session_create = cn20k_sec_session_create;
+ cnxk_sec_ops.session_destroy = cn20k_sec_session_destroy;
+ cnxk_sec_ops.session_get_size = cn20k_sec_session_get_size;
+ cnxk_sec_ops.session_stats_get = cn20k_sec_session_stats_get;
+ cnxk_sec_ops.session_update = cn20k_sec_session_update;
+ cnxk_sec_ops.inb_pkt_rx_inject = cn20k_cryptodev_sec_inb_rx_inject;
+ cnxk_sec_ops.rx_inject_configure = cn20k_cryptodev_sec_rx_inject_configure;
+}
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.h b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
new file mode 100644
index 0000000000..5cd0e53017
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __CN20K_CRYPTODEV_SEC_H__
+#define __CN20K_CRYPTODEV_SEC_H__
+
+#include <rte_common.h>
+#include <rte_security.h>
+
+#include "roc_constants.h"
+#include "roc_cpt.h"
+
+#include "cn20k_ipsec.h"
+
+#define SEC_SESS_SIZE sizeof(struct rte_security_session)
+
+void cn20k_sec_ops_override(void);
+#endif /* __CN20K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.c b/drivers/crypto/cnxk/cn20k_ipsec.c
new file mode 100644
index 0000000000..da8f818d87
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_ipsec.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#include <cryptodev_pmd.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_malloc.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+#include <rte_udp.h>
+
+#include "cn20k_cryptodev_ops.h"
+#include "cn20k_cryptodev_sec.h"
+#include "cn20k_ipsec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_ipsec.h"
+#include "cnxk_security.h"
+
+#include "roc_api.h"
+
+int
+cn20k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess)
+{
+ RTE_SET_USED(vf);
+ RTE_SET_USED(qp);
+ RTE_SET_USED(ipsec_xfrm);
+ RTE_SET_USED(crypto_xfrm);
+ RTE_SET_USED(sess);
+
+ return 0;
+}
+
+int
+cn20k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess)
+{
+ RTE_SET_USED(qp);
+ RTE_SET_USED(sess);
+
+ return 0;
+}
+
+int
+cn20k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess,
+ struct rte_security_stats *stats)
+{
+ RTE_SET_USED(qp);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(stats);
+
+ return 0;
+}
+
+int
+cn20k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn20k_sec_session *sess, struct rte_security_session_conf *conf)
+{
+ RTE_SET_USED(vf);
+ RTE_SET_USED(qp);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(conf);
+
+ return 0;
+}
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.h b/drivers/crypto/cnxk/cn20k_ipsec.h
new file mode 100644
index 0000000000..202d52405d
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_ipsec.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __CN20K_IPSEC_H__
+#define __CN20K_IPSEC_H__
+
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include "roc_constants.h"
+#include "roc_ie_ow.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_ipsec.h"
+
+/* Forward declaration */
+struct cn20k_sec_session;
+
+struct __rte_aligned(ROC_ALIGN) cn20k_ipsec_sa
+{
+ union {
+ /** Inbound SA */
+ struct roc_ow_ipsec_inb_sa in_sa;
+ /** Outbound SA */
+ struct roc_ow_ipsec_outb_sa out_sa;
+ };
+};
+
+int cn20k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+int cn20k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess);
+int cn20k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess,
+ struct rte_security_stats *stats);
+int cn20k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn20k_sec_session *sess,
+ struct rte_security_session_conf *conf);
+#endif /* __CN20K_IPSEC_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index 0b078b4d06..f8077e4d4c 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -19,6 +19,8 @@ sources = files(
'cn10k_tls.c',
'cn20k_cryptodev.c',
'cn20k_cryptodev_ops.c',
+ 'cn20k_cryptodev_sec.c',
+ 'cn20k_ipsec.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
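For reference, a minimal sketch of how an application reaches the ops registered above: the security context is fetched from the cryptodev, and the reported private session size can be used when sizing the session mempool. The dev_id parameter and helper name are illustrative.

#include <rte_cryptodev.h>
#include <rte_security.h>

static void *
get_sec_ctx_and_size(uint8_t dev_id, unsigned int *sess_priv_sz)
{
	void *sec_ctx = rte_cryptodev_get_sec_ctx(dev_id);

	if (sec_ctx == NULL)
		return NULL;

	/* Resolves to cn20k_sec_session_get_size() on cn20k devices */
	*sess_priv_sz = rte_security_session_get_size(sec_ctx);

	return sec_ctx;
}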
* [PATCH 16/40] crypto/cnxk: add security session creation
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (14 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 15/40] crypto/cnxk: add rte security skeletion for cn20k Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 17/40] crypto/cnxk: add security session destroy Tejasree Kondoj
` (23 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add rte security session creation for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 22 +-
drivers/crypto/cnxk/cn20k_cryptodev_sec.h | 33 +++
drivers/crypto/cnxk/cn20k_ipsec.c | 250 +++++++++++++++++++++-
3 files changed, 296 insertions(+), 9 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index 04c8e8f506..0bb4b7db63 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -12,9 +12,25 @@ static int
cn20k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
struct rte_security_session *sess)
{
- RTE_SET_USED(dev);
- RTE_SET_USED(conf);
- RTE_SET_USED(sess);
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_vf *vf;
+ struct cnxk_cpt_qp *qp;
+
+ if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL) {
+ plt_err("Setup cryptodev queue pair before creating security session");
+ return -EPERM;
+ }
+
+ vf = crypto_dev->data->dev_private;
+
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC) {
+ ((struct cn20k_sec_session *)sess)->userdata = conf->userdata;
+ return cn20k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
+ }
return -ENOTSUP;
}
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.h b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
index 5cd0e53017..4d6dcc9670 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
@@ -16,4 +16,37 @@
#define SEC_SESS_SIZE sizeof(struct rte_security_session)
void cn20k_sec_ops_override(void);
+
+struct __rte_aligned(ROC_ALIGN) cn20k_sec_session {
+ uint8_t rte_sess[SEC_SESS_SIZE];
+
+ /** PMD private space */
+ alignas(RTE_CACHE_LINE_MIN_SIZE)
+
+ /** Pre-populated CPT inst words */
+ struct cnxk_cpt_inst_tmpl inst;
+ uint16_t max_extended_len;
+ uint16_t iv_offset;
+ uint8_t proto;
+ uint8_t iv_length;
+ union {
+ uint16_t u16;
+ struct {
+ uint8_t ip_csum;
+ uint8_t is_outbound : 1;
+ } ipsec;
+ };
+ /** Queue pair */
+ struct cnxk_cpt_qp *qp;
+ /** Userdata to be set for Rx inject */
+ void *userdata;
+
+ /**
+ * End of SW mutable area
+ */
+ union {
+ struct cn20k_ipsec_sa sa;
+ };
+};
+
#endif /* __CN20K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.c b/drivers/crypto/cnxk/cn20k_ipsec.c
index da8f818d87..b6ecc4fb1a 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec.c
+++ b/drivers/crypto/cnxk/cn20k_ipsec.c
@@ -20,19 +20,257 @@
#include "roc_api.h"
+static int
+cn20k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn20k_sec_session *sec_sess)
+{
+ union roc_ow_ipsec_outb_param1 param1;
+ struct roc_ow_ipsec_outb_sa *sa_dptr;
+ struct cnxk_ipsec_outb_rlens rlens;
+ struct cn20k_ipsec_sa *sa;
+ union cpt_inst_w4 inst_w4;
+ void *out_sa;
+ int ret = 0;
+
+ sa = &sec_sess->sa;
+ out_sa = &sa->out_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ow_ipsec_outb_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = cnxk_ow_ipsec_outb_sa_fill(sa_dptr, ipsec_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill outbound session parameters");
+ goto sa_dptr_free;
+ }
+
+ RTE_SET_USED(roc_cpt);
+
+#ifdef LA_IPSEC_DEBUG
+ /* Use IV from application in debug mode */
+ if (ipsec_xfrm->options.iv_gen_disable == 1) {
+ sa_dptr->w2.s.iv_src = ROC_IE_OW_SA_IV_SRC_FROM_SA;
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+ }
+ }
+#else
+ if (ipsec_xfrm->options.iv_gen_disable != 0) {
+ plt_err("Application provided IV not supported");
+ ret = -ENOTSUP;
+ goto sa_dptr_free;
+ }
+#endif
+
+ sec_sess->ipsec.is_outbound = 1;
+
+ /* Get Rlen calculation data */
+ ret = cnxk_ipsec_outb_rlens_get(&rlens, ipsec_xfrm, crypto_xfrm);
+ if (ret)
+ goto sa_dptr_free;
+
+ sec_sess->max_extended_len = rlens.max_extended_len;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OW_MAJOR_OP_PROCESS_OUTBOUND_IPSEC | ROC_IE_OW_INPLACE_BIT;
+
+ param1.u16 = 0;
+
+ param1.s.ttl_or_hop_limit = ipsec_xfrm->options.dec_ttl;
+
+ /* Disable IP checksum computation by default */
+ param1.s.ip_csum_disable = ROC_IE_OW_SA_INNER_PKT_IP_CSUM_DISABLE;
+
+ if (ipsec_xfrm->options.ip_csum_enable)
+ param1.s.ip_csum_disable = ROC_IE_OW_SA_INNER_PKT_IP_CSUM_ENABLE;
+
+ /* Disable L4 checksum computation by default */
+ param1.s.l4_csum_disable = ROC_IE_OW_SA_INNER_PKT_L4_CSUM_DISABLE;
+
+ if (ipsec_xfrm->options.l4_csum_enable)
+ param1.s.l4_csum_disable = ROC_IE_OW_SA_INNER_PKT_L4_CSUM_ENABLE;
+
+ inst_w4.s.param1 = param1.u16;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+
+ if (ipsec_xfrm->options.stats == 1) {
+ /* Enable mib counters */
+ sa_dptr->w0.s.count_mib_bytes = 1;
+ sa_dptr->w0.s.count_mib_pkts = 1;
+ sa_dptr->w0.s.count_glb_pkts = 1;
+ sa_dptr->w0.s.count_glb_octets = 1;
+ }
+
+ memset(out_sa, 0, sizeof(struct roc_ow_ipsec_outb_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(out_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, out_sa, sizeof(struct roc_ow_ipsec_outb_sa));
+ if (ret) {
+ plt_err("Could not write outbound session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ ret = roc_cpt_lf_ctx_flush(lf, out_sa, false);
+ if (ret == -EFAULT) {
+ plt_err("Could not flush outbound session");
+ goto sa_dptr_free;
+ }
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
+static int
+cn20k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_ipsec_xform *ipsec_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn20k_sec_session *sec_sess)
+{
+ union roc_ow_ipsec_inb_param1 param1;
+ struct roc_ow_ipsec_inb_sa *sa_dptr;
+ struct cn20k_ipsec_sa *sa;
+ union cpt_inst_w4 inst_w4;
+ void *in_sa;
+ int ret = 0;
+
+ sa = &sec_sess->sa;
+ in_sa = &sa->in_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ow_ipsec_inb_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = cnxk_ow_ipsec_inb_sa_fill(sa_dptr, ipsec_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill inbound session parameters");
+ goto sa_dptr_free;
+ }
+
+ sec_sess->ipsec.is_outbound = 0;
+ RTE_SET_USED(roc_cpt);
+
+ /* Save index/SPI in cookie, requirement for Rx Inject */
+ sa_dptr->w1.s.cookie = 0xFFFFFFFF;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ inst_w4.s.opcode_major = ROC_IE_OW_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_OW_INPLACE_BIT;
+
+ param1.u16 = 0;
+
+ /* Disable IP checksum verification by default */
+ param1.s.ip_csum_disable = ROC_IE_OW_SA_INNER_PKT_IP_CSUM_DISABLE;
+
+ /* Set the ip chksum flag in mbuf before enqueue.
+ * Reset the flag in post process in case of errors
+ */
+ if (ipsec_xfrm->options.ip_csum_enable) {
+ param1.s.ip_csum_disable = ROC_IE_OW_SA_INNER_PKT_IP_CSUM_ENABLE;
+ sec_sess->ipsec.ip_csum = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ }
+
+ /* Disable L4 checksum verification by default */
+ param1.s.l4_csum_disable = ROC_IE_OW_SA_INNER_PKT_L4_CSUM_DISABLE;
+
+ if (ipsec_xfrm->options.l4_csum_enable)
+ param1.s.l4_csum_disable = ROC_IE_OW_SA_INNER_PKT_L4_CSUM_ENABLE;
+
+ param1.s.esp_trailer_disable = 1;
+
+ inst_w4.s.param1 = param1.u16;
+
+ sec_sess->inst.w4 = inst_w4.u64;
+
+ if (ipsec_xfrm->options.stats == 1) {
+ /* Enable mib counters */
+ sa_dptr->w0.s.count_mib_bytes = 1;
+ sa_dptr->w0.s.count_mib_pkts = 1;
+ sa_dptr->w0.s.count_glb_pkts = 1;
+ sa_dptr->w0.s.count_glb_octets = 1;
+ }
+
+ memset(in_sa, 0, sizeof(struct roc_ow_ipsec_inb_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(in_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, in_sa, sizeof(struct roc_ow_ipsec_inb_sa));
+ if (ret) {
+ plt_err("Could not write inbound session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ ret = roc_cpt_lf_ctx_flush(lf, in_sa, true);
+ if (ret == -EFAULT) {
+ plt_err("Could not flush inbound session");
+ goto sa_dptr_free;
+ }
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_IPSEC;
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
int
cn20k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct rte_security_ipsec_xform *ipsec_xfrm,
struct rte_crypto_sym_xform *crypto_xfrm,
struct rte_security_session *sess)
{
- RTE_SET_USED(vf);
- RTE_SET_USED(qp);
- RTE_SET_USED(ipsec_xfrm);
- RTE_SET_USED(crypto_xfrm);
- RTE_SET_USED(sess);
+ struct roc_cpt *roc_cpt;
+ int ret;
- return 0;
+ ret = cnxk_ipsec_xform_verify(ipsec_xfrm, crypto_xfrm);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ return cn20k_ipsec_inb_sa_create(roc_cpt, &qp->lf, ipsec_xfrm, crypto_xfrm,
+ (struct cn20k_sec_session *)sess);
+ else
+ return cn20k_ipsec_outb_sa_create(roc_cpt, &qp->lf, ipsec_xfrm, crypto_xfrm,
+ (struct cn20k_sec_session *)sess);
}
int
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
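As a usage sketch (not part of the patch), an application creates a lookaside-protocol IPsec session through the generic rte_security API, which ends up in cn20k_sec_session_create() above. The helper name, SPI value and transform pointer are illustrative; a real SA would also fill in options, lifetime and, for tunnel mode, tunnel endpoints.

#include <rte_mempool.h>
#include <rte_crypto_sym.h>
#include <rte_security.h>

/* sec_ctx: from rte_cryptodev_get_sec_ctx(); sess_mp: session mempool;
 * crypto_xform: AEAD or cipher+auth transform carrying the SA keys.
 */
static void *
create_ipsec_outb_session(void *sec_ctx, struct rte_mempool *sess_mp,
			  struct rte_crypto_sym_xform *crypto_xform)
{
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = 0x100,	/* illustrative value */
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
			/* options, lifetime, replay window, etc. omitted */
		},
		.crypto_xform = crypto_xform,
	};

	return rte_security_session_create(sec_ctx, &conf, sess_mp);
}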
* [PATCH 17/40] crypto/cnxk: add security session destroy
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (15 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 16/40] crypto/cnxk: add security session creation Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 18/40] crypto/cnxk: move code to common Tejasree Kondoj
` (22 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for rte security session destroy for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 17 +++++++-
drivers/crypto/cnxk/cn20k_ipsec.c | 51 ++++++++++++++++++++++-
2 files changed, 64 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index 0bb4b7db63..1b18398250 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -38,8 +38,21 @@ cn20k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
static int
cn20k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
{
- RTE_SET_USED(dev);
- RTE_SET_USED(sec_sess);
+ struct cn20k_sec_session *cn20k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn20k_sec_sess = (struct cn20k_sec_session *)sec_sess;
+
+ if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn20k_sec_ipsec_session_destroy(qp, cn20k_sec_sess);
return -EINVAL;
}
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.c b/drivers/crypto/cnxk/cn20k_ipsec.c
index b6ecc4fb1a..f898461523 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec.c
+++ b/drivers/crypto/cnxk/cn20k_ipsec.c
@@ -276,8 +276,55 @@ cn20k_ipsec_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
int
cn20k_sec_ipsec_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess)
{
- RTE_SET_USED(qp);
- RTE_SET_USED(sess);
+ union roc_ow_ipsec_sa_word2 *w2;
+ struct cn20k_ipsec_sa *sa;
+ struct roc_cpt_lf *lf;
+ void *sa_dptr = NULL;
+ int ret;
+
+ lf = &qp->lf;
+
+ sa = &sess->sa;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, &sa->in_sa, false);
+
+ ret = -1;
+
+ if (sess->ipsec.is_outbound) {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ow_ipsec_outb_sa), 8);
+ if (sa_dptr != NULL) {
+ roc_ow_ipsec_outb_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &sa->out_sa,
+ sizeof(struct roc_ow_ipsec_outb_sa));
+ }
+ } else {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ow_ipsec_inb_sa), 8);
+ if (sa_dptr != NULL) {
+ roc_ow_ipsec_inb_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &sa->in_sa,
+ sizeof(struct roc_ow_ipsec_inb_sa));
+ }
+ }
+
+ plt_free(sa_dptr);
+
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ w2 = (union roc_ow_ipsec_sa_word2 *)&sa->in_sa.w2;
+ w2->s.valid = 0;
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &sa->in_sa);
+ }
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
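A matching teardown sketch from the application side; rte_security_session_destroy() lands in cn20k_sec_session_destroy(), which dispatches to the IPsec path above. Parameter names are illustrative.

#include <rte_security.h>

static int
destroy_sec_session(void *sec_ctx, void *sec_sess)
{
	/* For cn20k IPsec sessions this rewrites and flushes the SA
	 * context in hardware before the session can be reused.
	 */
	return rte_security_session_destroy(sec_ctx, sec_sess);
}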
* [PATCH 18/40] crypto/cnxk: move code to common
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (16 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 17/40] crypto/cnxk: add security session destroy Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 19/40] crypto/cnxk: add rte sec session update Tejasree Kondoj
` (21 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Move code shared between cn10k and cn20k into common files
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn10k_cryptodev_sec.h | 14 --------------
drivers/crypto/cnxk/cn10k_ipsec.c | 4 ++--
drivers/crypto/cnxk/cn10k_tls.c | 4 ++--
drivers/crypto/cnxk/cn20k_ipsec.c | 4 ++--
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 17 +++++++++++++++++
drivers/crypto/cnxk/cnxk_ipsec.h | 1 +
6 files changed, 24 insertions(+), 20 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 77faaa0fe6..b07fbaf5ee 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -59,20 +59,6 @@ struct __rte_aligned(ROC_ALIGN) cn10k_sec_session {
};
};
-static inline uint64_t
-cpt_inst_w7_get(struct roc_cpt *roc_cpt, void *cptr)
-{
- union cpt_inst_w7 w7;
-
- w7.u64 = 0;
- w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
- w7.s.ctx_val = 1;
- w7.s.cptr = (uint64_t)cptr;
- rte_mb();
-
- return w7.u64;
-}
-
void cn10k_sec_ops_override(void);
#endif /* __CN10K_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ae0482d0fe..5cd4f5257a 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -51,7 +51,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, out_sa);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, out_sa);
#ifdef LA_IPSEC_DEBUG
/* Use IV from application in debug mode */
@@ -183,7 +183,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
}
sec_sess->ipsec.is_outbound = 0;
- sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, in_sa);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, in_sa);
/* Save index/SPI in cookie, specific required for Rx Inject */
sa_dptr->w1.s.cookie = 0xFFFFFFFF;
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index 4bd2654499..49edac8cd6 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -690,7 +690,7 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
sec_sess->tls_opt.tls_ver = tls_ver;
sec_sess->inst.w4 = inst_w4.u64;
- sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, read_sa);
memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
@@ -783,7 +783,7 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
}
sec_sess->inst.w4 = inst_w4.u64;
- sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, write_sa);
memset(write_sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.c b/drivers/crypto/cnxk/cn20k_ipsec.c
index f898461523..049007803d 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec.c
+++ b/drivers/crypto/cnxk/cn20k_ipsec.c
@@ -51,7 +51,7 @@ cn20k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
goto sa_dptr_free;
}
- RTE_SET_USED(roc_cpt);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, out_sa);
#ifdef LA_IPSEC_DEBUG
/* Use IV from application in debug mode */
@@ -178,7 +178,7 @@ cn20k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
}
sec_sess->ipsec.is_outbound = 0;
- RTE_SET_USED(roc_cpt);
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, in_sa);
/* Save index/SPI in cookie, requirement for Rx Inject */
sa_dptr->w1.s.cookie = 0xFFFFFFFF;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 417b869828..df8d08b7c5 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -218,4 +218,21 @@ hw_ctx_cache_enable(void)
return roc_errata_cpt_hang_on_mixed_ctx_val() || roc_model_is_cn10ka_b0() ||
roc_model_is_cn10kb_a0();
}
+
+static inline uint64_t
+cnxk_cpt_sec_inst_w7_get(struct roc_cpt *roc_cpt, void *cptr)
+{
+ union cpt_inst_w7 w7;
+
+ w7.u64 = 0;
+ if (roc_model_is_cn20k())
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE];
+ else
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
+ w7.s.ctx_val = 1;
+ w7.s.cptr = (uint64_t)cptr;
+ rte_mb();
+
+ return w7.u64;
+}
#endif /* _CNXK_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/cnxk/cnxk_ipsec.h b/drivers/crypto/cnxk/cnxk_ipsec.h
index 4d3ee23f61..42f8e64009 100644
--- a/drivers/crypto/cnxk/cnxk_ipsec.h
+++ b/drivers/crypto/cnxk/cnxk_ipsec.h
@@ -10,6 +10,7 @@
#include "roc_cpt.h"
#include "roc_ie_on.h"
#include "roc_ie_ot.h"
+#include "roc_model.h"
extern struct rte_security_ops cnxk_sec_ops;
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 19/40] crypto/cnxk: add rte sec session update
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (17 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 18/40] crypto/cnxk: move code to common Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 20/40] crypto/cnxk: add rte security datapath handling Tejasree Kondoj
` (20 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for IPsec session update and IPsec stats get for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 41 +++++++++++++++++++----
drivers/crypto/cnxk/cn20k_ipsec.c | 39 +++++++++++++++++----
2 files changed, 66 insertions(+), 14 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index 1b18398250..ba7f1baf86 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -60,16 +60,28 @@ cn20k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
static unsigned int
cn20k_sec_session_get_size(void *dev __rte_unused)
{
- return 0;
+ return sizeof(struct cn20k_sec_session) - sizeof(struct rte_security_session);
}
static int
cn20k_sec_session_stats_get(void *dev, struct rte_security_session *sec_sess,
struct rte_security_stats *stats)
{
- RTE_SET_USED(dev);
- RTE_SET_USED(sec_sess);
- RTE_SET_USED(stats);
+ struct cn20k_sec_session *cn20k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+
+ if (unlikely(sec_sess == NULL))
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (unlikely(qp == NULL))
+ return -ENOTSUP;
+
+ cn20k_sec_sess = (struct cn20k_sec_session *)sec_sess;
+
+ if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn20k_ipsec_stats_get(qp, cn20k_sec_sess, stats);
return -ENOTSUP;
}
@@ -78,9 +90,24 @@ static int
cn20k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
struct rte_security_session_conf *conf)
{
- RTE_SET_USED(dev);
- RTE_SET_USED(sec_sess);
- RTE_SET_USED(conf);
+ struct cn20k_sec_session *cn20k_sec_sess;
+ struct rte_cryptodev *crypto_dev = dev;
+ struct cnxk_cpt_qp *qp;
+ struct cnxk_cpt_vf *vf;
+
+ if (sec_sess == NULL)
+ return -EINVAL;
+
+ qp = crypto_dev->data->queue_pairs[0];
+ if (qp == NULL)
+ return -EINVAL;
+
+ vf = crypto_dev->data->dev_private;
+
+ cn20k_sec_sess = (struct cn20k_sec_session *)sec_sess;
+
+ if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cn20k_ipsec_session_update(vf, qp, cn20k_sec_sess, conf);
return -ENOTSUP;
}
diff --git a/drivers/crypto/cnxk/cn20k_ipsec.c b/drivers/crypto/cnxk/cn20k_ipsec.c
index 049007803d..77f7411486 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec.c
+++ b/drivers/crypto/cnxk/cn20k_ipsec.c
@@ -333,9 +333,24 @@ int
cn20k_ipsec_stats_get(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess,
struct rte_security_stats *stats)
{
- RTE_SET_USED(qp);
- RTE_SET_USED(sess);
- RTE_SET_USED(stats);
+ struct roc_ow_ipsec_outb_sa *out_sa;
+ struct roc_ow_ipsec_inb_sa *in_sa;
+ struct cn20k_ipsec_sa *sa;
+
+ stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sa = &sess->sa;
+
+ if (sess->ipsec.is_outbound) {
+ out_sa = &sa->out_sa;
+ roc_cpt_lf_ctx_flush(&qp->lf, out_sa, false);
+ stats->ipsec.opackets = out_sa->ctx.mib_pkts;
+ stats->ipsec.obytes = out_sa->ctx.mib_octs;
+ } else {
+ in_sa = &sa->in_sa;
+ roc_cpt_lf_ctx_flush(&qp->lf, in_sa, false);
+ stats->ipsec.ipackets = in_sa->ctx.mib_pkts;
+ stats->ipsec.ibytes = in_sa->ctx.mib_octs;
+ }
return 0;
}
@@ -344,10 +359,20 @@ int
cn20k_ipsec_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct cn20k_sec_session *sess, struct rte_security_session_conf *conf)
{
- RTE_SET_USED(vf);
- RTE_SET_USED(qp);
- RTE_SET_USED(sess);
- RTE_SET_USED(conf);
+ struct roc_cpt *roc_cpt;
+ int ret;
+
+ if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ return -ENOTSUP;
+
+ ret = cnxk_ipsec_xform_verify(&conf->ipsec, conf->crypto_xform);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ return cn20k_ipsec_outb_sa_create(roc_cpt, &qp->lf, &conf->ipsec, conf->crypto_xform,
+ (struct cn20k_sec_session *)sess);
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
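A short sketch of the corresponding application calls, assuming sec_ctx and sec_sess were obtained as in the earlier patches; per the handlers above, only egress (outbound) IPsec sessions can be updated on cn20k, and the statistics come from the SA MIB counters after a context flush. Names are illustrative.

#include <inttypes.h>
#include <stdio.h>
#include <rte_security.h>

static void
refresh_sa_and_read_stats(void *sec_ctx, void *sec_sess,
			  struct rte_security_session_conf *new_conf)
{
	struct rte_security_stats stats;

	/* Rejected with -ENOTSUP for ingress SAs on cn20k */
	if (rte_security_session_update(sec_ctx, sec_sess, new_conf) != 0)
		printf("session update failed\n");

	if (rte_security_session_stats_get(sec_ctx, sec_sess, &stats) == 0)
		printf("opackets %" PRIu64 " obytes %" PRIu64 "\n",
		       stats.ipsec.opackets, stats.ipsec.obytes);
}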
* [PATCH 20/40] crypto/cnxk: add rte security datapath handling
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (18 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 19/40] crypto/cnxk: add rte sec session update Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 21/40] crypto/cnxk: add Rx inject in security lookaside Tejasree Kondoj
` (19 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add enqueue and dequeue support for rte_security operations on cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 108 +++++++++++-
drivers/crypto/cnxk/cn20k_ipsec_la_ops.h | 199 ++++++++++++++++++++++
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +
drivers/crypto/cnxk/cnxk_ipsec.h | 1 +
4 files changed, 307 insertions(+), 3 deletions(-)
create mode 100644 drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 7e84f30f8e..28f88704b7 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -11,6 +11,8 @@
#include "cn20k_cryptodev.h"
#include "cn20k_cryptodev_ops.h"
+#include "cn20k_cryptodev_sec.h"
+#include "cn20k_ipsec_la_ops.h"
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -60,10 +62,43 @@ cn20k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
return NULL;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn20k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req)
+{
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ int ret;
+
+ if (unlikely(sym_op->m_dst && sym_op->m_dst != sym_op->m_src)) {
+ plt_dp_err("Out of place is not supported");
+ return -ENOTSUP;
+ }
+
+ if (sess->ipsec.is_outbound)
+ ret = process_outb_sa(&qp->lf, op, sess, &qp->meta_info, infl_req, inst);
+ else
+ ret = process_inb_sa(op, sess, inst, &qp->meta_info, infl_req);
+
+ return ret;
+}
+
+static __rte_always_inline int __rte_hot
+cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn20k_sec_session *sess,
+ struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req)
+{
+
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req);
+
+ return 0;
+}
+
static inline int
cn20k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[],
struct cpt_inflight_req *infl_req)
{
+ struct cn20k_sec_session *sec_sess;
struct rte_crypto_asym_op *asym_op;
struct rte_crypto_sym_op *sym_op;
struct cnxk_ae_sess *ae_sess;
@@ -85,7 +120,13 @@ cn20k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
sym_op = op->sym;
if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+ sec_sess = (struct cn20k_sec_session *)sym_op->session;
+ ret = cpt_sec_inst_fill(qp, op, sec_sess, &inst[0], infl_req);
+ if (unlikely(ret))
+ return 0;
+ w7 = sec_sess->inst.w7;
+ } else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
sess = (struct cnxk_se_sess *)(sym_op->session);
ret = cpt_sym_inst_fill(qp, op, sess, infl_req, &inst[0], true);
if (unlikely(ret))
@@ -224,6 +265,52 @@ cn20k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
return count + i;
}
+static inline void
+cn20k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res)
+{
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ const uint16_t m_len = res->rlen;
+
+ switch (res->uc_compcode) {
+ case ROC_IE_OW_UCC_SUCCESS_PKT_IP_BADCSUM:
+ mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ break;
+ case ROC_IE_OW_UCC_SUCCESS_PKT_L4_GOODCSUM:
+ mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ break;
+ case ROC_IE_OW_UCC_SUCCESS_PKT_L4_BADCSUM:
+ mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ break;
+ case ROC_IE_OW_UCC_SUCCESS_PKT_IP_GOODCSUM:
+ break;
+ case ROC_IE_OW_UCC_SUCCESS_SA_SOFTEXP_FIRST:
+ case ROC_IE_OW_UCC_SUCCESS_SA_SOFTEXP_AGAIN:
+ cop->aux_flags = RTE_CRYPTO_OP_AUX_FLAGS_IPSEC_SOFT_EXPIRY;
+ break;
+ default:
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ return;
+ }
+
+ if (mbuf->next == NULL)
+ mbuf->data_len = m_len;
+
+ mbuf->pkt_len = m_len;
+}
+
+static inline void
+cn20k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct cn20k_sec_session *sess;
+
+ sess = sym_op->session;
+ if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
+ cn20k_cpt_ipsec_post_process(cop, res);
+}
+
static inline void
cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
struct cpt_inflight_req *infl_req, struct cpt_cn20k_res_s *res)
@@ -233,8 +320,23 @@ cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
- cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+ if (likely(compcode == CPT_COMP_GOOD || compcode == CPT_COMP_WARN)) {
+ /* Success with additional info */
+ cn20k_cpt_sec_post_process(cop, res);
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ plt_dp_info("HW completion code 0x%x", res->compcode);
+ if (compcode == CPT_COMP_GOOD) {
+ plt_dp_info("Request failed with microcode error");
+ plt_dp_info("MC completion code 0x%x", uc_compcode);
+ }
+ }
+
+ return;
+ } else if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC &&
+ cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
struct cnxk_ae_sess *sess;
sess = (struct cnxk_ae_sess *)cop->asym->session;
diff --git a/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
new file mode 100644
index 0000000000..eff51bd794
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __CN20K_IPSEC_LA_OPS_H__
+#define __CN20K_IPSEC_LA_OPS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie.h"
+
+#include "cn20k_cryptodev.h"
+#include "cn20k_ipsec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
+
+static inline void
+ipsec_po_sa_iv_set(struct cn20k_sec_session *sess, struct rte_crypto_op *cop)
+{
+ uint64_t *iv = &sess->sa.out_sa.iv.u64[0];
+ uint64_t *tmp_iv;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 16);
+ tmp_iv = (uint64_t *)iv;
+ *tmp_iv = rte_be_to_cpu_64(*tmp_iv);
+
+ tmp_iv = (uint64_t *)(iv + 1);
+ *tmp_iv = rte_be_to_cpu_64(*tmp_iv);
+}
+
+static inline void
+ipsec_po_sa_aes_gcm_iv_set(struct cn20k_sec_session *sess, struct rte_crypto_op *cop)
+{
+ uint8_t *iv = &sess->sa.out_sa.iv.s.iv_dbg1[0];
+ uint32_t *tmp_iv;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 4);
+ tmp_iv = (uint32_t *)iv;
+ *tmp_iv = rte_be_to_cpu_32(*tmp_iv);
+
+ iv = &sess->sa.out_sa.iv.s.iv_dbg2[0];
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset + 4), 4);
+ tmp_iv = (uint32_t *)iv;
+ *tmp_iv = rte_be_to_cpu_32(*tmp_iv);
+}
+
+static __rte_always_inline int
+process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn20k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ uint64_t inst_w4_u64 = sess->inst.w4;
+ uint64_t dptr;
+
+ RTE_SET_USED(lf);
+
+#ifdef LA_IPSEC_DEBUG
+ if (sess->sa.out_sa.w2.s.iv_src == ROC_IE_OW_SA_IV_SRC_FROM_SA) {
+ if (sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_GCM ||
+ sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_CCM ||
+ sess->sa.out_sa.w2.s.auth_type == ROC_IE_SA_AUTH_AES_GMAC)
+ ipsec_po_sa_aes_gcm_iv_set(sess, cop);
+ else
+ ipsec_po_sa_iv_set(sess, cop);
+ }
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &sess->sa.out_sa);
+ rte_delay_ms(1);
+#endif
+ const uint64_t ol_flags = m_src->ol_flags;
+
+ inst_w4_u64 &= ~(((uint64_t)(!!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM)) << 33) |
+ ((uint64_t)(!!(ol_flags & RTE_MBUF_F_TX_L4_MASK)) << 32));
+
+ if (likely(m_src->next == NULL)) {
+ if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room");
+ return -ENOMEM;
+ }
+
+ /* Prepare CPT instruction */
+ inst->w4.u64 = inst_w4_u64 | rte_pktmbuf_pkt_len(m_src);
+ dptr = rte_pktmbuf_mtod(m_src, uint64_t);
+ inst->dptr = dptr;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ struct rte_mbuf *last_seg;
+ uint32_t g_size_bytes;
+ void *m_data;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
+
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len;
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+ inst->w4.s.opcode_major &= (~(ROC_IE_OW_INPLACE_BIT));
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+process_inb_sa(struct rte_crypto_op *cop, struct cn20k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ uint64_t dptr;
+
+ if (likely(m_src->next == NULL)) {
+ /* Prepare CPT instruction */
+ inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+ dptr = rte_pktmbuf_mtod(m_src, uint64_t);
+ inst->dptr = dptr;
+ m_src->ol_flags |= (uint64_t)sess->ipsec.ip_csum;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ void *m_data;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)m_data);
+
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ /* Output Scatter List */
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_src);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
+ inst->w4.s.opcode_major &= (~(ROC_IE_OW_INPLACE_BIT));
+ }
+ return 0;
+}
+
+#endif /* __CN20K_IPSEC_LA_OPS_H__ */
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index b4020f96c1..982fbe991f 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -31,6 +31,8 @@
#include "cn10k_cryptodev_ops.h"
#include "cn10k_cryptodev_sec.h"
+#include "cn20k_cryptodev_ops.h"
+#include "cn20k_cryptodev_sec.h"
#include "cn9k_cryptodev_ops.h"
#include "cn9k_ipsec.h"
diff --git a/drivers/crypto/cnxk/cnxk_ipsec.h b/drivers/crypto/cnxk/cnxk_ipsec.h
index 42f8e64009..5f65c34380 100644
--- a/drivers/crypto/cnxk/cnxk_ipsec.h
+++ b/drivers/crypto/cnxk/cnxk_ipsec.h
@@ -10,6 +10,7 @@
#include "roc_cpt.h"
#include "roc_ie_on.h"
#include "roc_ie_ot.h"
+#include "roc_ie_ow.h"
#include "roc_model.h"
extern struct rte_security_ops cnxk_sec_ops;
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
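A note on the gather/scatter setup in process_outb_sa()/process_inb_sa() above: each SG2 list component carries up to three pointer/length pairs, so the (i + 2) / 3 expression is simply a round-up from filled entries to component count, and the same rounded count sizes the offset of the scatter list. A standalone sketch of that arithmetic follows; COMP_SIZE is an assumed stand-in for sizeof(struct roc_sg2list_comp), not the real value.

/* Round-up from SG entries to SG2 components, mirroring (i + 2) / 3 above. */
#include <stdio.h>

#define PTRS_PER_SG2_COMP 3
#define COMP_SIZE 32u /* assumption: size of one SG2 component descriptor */

static unsigned int
sg2_comp_count(unsigned int nb_entries)
{
	return (nb_entries + PTRS_PER_SG2_COMP - 1) / PTRS_PER_SG2_COMP;
}

int
main(void)
{
	unsigned int entries = 5; /* e.g. a 5-segment mbuf chain */
	unsigned int comps = sg2_comp_count(entries);

	printf("%u entries -> %u components, %u gather-list bytes\n",
	       entries, comps, comps * COMP_SIZE);
	return 0;
}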
* [PATCH 21/40] crypto/cnxk: add Rx inject in security lookaside
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (19 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 20/40] crypto/cnxk: add rte security datapath handling Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 22/40] crypto/cnxk: add skeleton for tls Tejasree Kondoj
` (18 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Add Rx inject fastpath API for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 186 ++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_cryptodev_ops.h | 8 +
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 35 ----
3 files changed, 194 insertions(+), 35 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 28f88704b7..97dfa5865f 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -3,8 +3,11 @@
*/
#include <cryptodev_pmd.h>
+#include <eal_export.h>
+#include <ethdev_driver.h>
#include <rte_cryptodev.h>
#include <rte_hexdump.h>
+#include <rte_vect.h>
#include "roc_cpt.h"
#include "roc_idev.h"
@@ -508,6 +511,189 @@ cn20k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
return 0;
}
+#if defined(RTE_ARCH_ARM64)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+uint16_t __rte_hot
+cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ uint64_t lmt_base, io_addr, u64_0, u64_1, l2_len, pf_func;
+ uint64x2_t inst_01, inst_23, inst_45, inst_67;
+ struct cn20k_sec_session *sec_sess;
+ struct rte_cryptodev *cdev = dev;
+ union cpt_res_s *hw_res = NULL;
+ uint16_t lmt_id, count = 0;
+ struct cpt_inst_s *inst;
+ union cpt_fc_write_s fc;
+ struct cnxk_cpt_vf *vf;
+ struct rte_mbuf *m;
+ uint64_t u64_dptr;
+ uint64_t *fc_addr;
+ int i;
+
+ vf = cdev->data->dev_private;
+
+ lmt_base = vf->rx_inj_lmtline.lmt_base;
+ io_addr = vf->rx_inj_lmtline.io_addr;
+ fc_addr = vf->rx_inj_lmtline.fc_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ pf_func = vf->rx_inj_sso_pf_func;
+
+ const uint32_t fc_thresh = vf->rx_inj_lmtline.fc_thresh;
+
+again:
+ fc.u64[0] =
+ rte_atomic_load_explicit((RTE_ATOMIC(uint64_t) *)fc_addr, rte_memory_order_relaxed);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+ i = 0;
+
+ if (unlikely(fc.s.qsize > fc_thresh))
+ goto exit;
+
+ for (; i < RTE_MIN(CN20K_CPT_PKTS_PER_LOOP, nb_pkts); i++) {
+
+ m = pkts[i];
+ sec_sess = (struct cn20k_sec_session *)sess[i];
+
+ if (unlikely(rte_pktmbuf_headroom(m) < 32)) {
+ plt_dp_err("No space for CPT res_s");
+ break;
+ }
+
+ l2_len = m->l2_len;
+
+ *rte_security_dynfield(m) = (uint64_t)sec_sess->userdata;
+
+ hw_res = rte_pktmbuf_mtod(m, void *);
+ hw_res = RTE_PTR_SUB(hw_res, 32);
+ hw_res = RTE_PTR_ALIGN_CEIL(hw_res, 16);
+
+ /* Prepare CPT instruction */
+ if (m->nb_segs > 1) {
+ struct rte_mbuf *last = rte_pktmbuf_lastseg(m);
+ uintptr_t dptr, rxphdr, wqe_hdr;
+ uint16_t i;
+
+ if ((m->nb_segs > CNXK_CPT_MAX_SG_SEGS) ||
+ (rte_pktmbuf_tailroom(m) < CNXK_CPT_MIN_TAILROOM_REQ))
+ goto exit;
+
+ wqe_hdr = rte_pktmbuf_mtod_offset(last, uintptr_t, last->data_len);
+ wqe_hdr += BIT_ULL(7);
+ wqe_hdr = (wqe_hdr - 1) & ~(BIT_ULL(7) - 1);
+
+ /* Pointer to WQE header */
+ *(uint64_t *)(m + 1) = wqe_hdr;
+
+ /* Reserve SG list after end of last mbuf data location. */
+ rxphdr = wqe_hdr + 8;
+ dptr = rxphdr + 7 * 8;
+
+ /* Prepare Multiseg SG list */
+ i = fill_sg2_comp_from_pkt((struct roc_sg2list_comp *)dptr, 0, m);
+ u64_dptr = dptr | ((uint64_t)(i) << 60);
+ } else {
+ struct roc_sg2list_comp *sg2;
+ uintptr_t dptr, wqe_hdr;
+
+ /* Reserve space for WQE, NIX_RX_PARSE_S and SG_S.
+ * Populate SG_S with num segs and seg length
+ */
+ wqe_hdr = (uintptr_t)(m + 1);
+ *(uint64_t *)(m + 1) = wqe_hdr;
+
+ sg2 = (struct roc_sg2list_comp *)(wqe_hdr + 8 * 8);
+ sg2->u.s.len[0] = rte_pktmbuf_pkt_len(m);
+ sg2->u.s.valid_segs = 1;
+
+ dptr = (uint64_t)rte_pktmbuf_iova(m);
+ u64_dptr = dptr;
+ }
+
+ /* Word 0 and 1 */
+ inst_01 = vdupq_n_u64(0);
+ u64_0 = pf_func << 48 | *(vf->rx_chan_base + m->port) << 4 | (l2_len - 2) << 24 |
+ l2_len << 16;
+ inst_01 = vsetq_lane_u64(u64_0, inst_01, 0);
+ inst_01 = vsetq_lane_u64((uint64_t)hw_res, inst_01, 1);
+ vst1q_u64(&inst->w0.u64, inst_01);
+
+ /* Word 2 and 3 */
+ inst_23 = vdupq_n_u64(0);
+ u64_1 = (((uint64_t)m + sizeof(struct rte_mbuf)) >> 3) << 3 | 1;
+ inst_23 = vsetq_lane_u64(u64_1, inst_23, 1);
+ vst1q_u64(&inst->w2.u64, inst_23);
+
+ /* Word 4 and 5 */
+ inst_45 = vdupq_n_u64(0);
+ u64_0 = sec_sess->inst.w4 | (rte_pktmbuf_pkt_len(m));
+ inst_45 = vsetq_lane_u64(u64_0, inst_45, 0);
+ inst_45 = vsetq_lane_u64(u64_dptr, inst_45, 1);
+ vst1q_u64(&inst->w4.u64, inst_45);
+
+ /* Word 6 and 7 */
+ inst_67 = vdupq_n_u64(0);
+ u64_1 = sec_sess->inst.w7;
+ inst_67 = vsetq_lane_u64(u64_1, inst_67, 1);
+ vst1q_u64(&inst->w6.u64, inst_67);
+
+ inst++;
+ }
+
+ cn20k_cpt_lmtst_dual_submit(&io_addr, lmt_id, &i);
+
+ if (nb_pkts - i > 0 && i == CN20K_CPT_PKTS_PER_LOOP) {
+ nb_pkts -= CN20K_CPT_PKTS_PER_LOOP;
+ pkts += CN20K_CPT_PKTS_PER_LOOP;
+ count += CN20K_CPT_PKTS_PER_LOOP;
+ sess += CN20K_CPT_PKTS_PER_LOOP;
+ goto again;
+ }
+
+exit:
+ return count + i;
+}
+#else
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+uint16_t __rte_hot
+cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess, uint16_t nb_pkts)
+{
+ RTE_SET_USED(dev);
+ RTE_SET_USED(pkts);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(nb_pkts);
+ return 0;
+}
+#endif
+
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_rx_inject_configure)
+int
+cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
+{
+ struct rte_cryptodev *crypto_dev = device;
+ struct rte_eth_dev *eth_dev;
+ int ret;
+
+ if (!rte_eth_dev_is_valid_port(port_id))
+ return -EINVAL;
+
+ if (!(crypto_dev->feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT))
+ return -ENOTSUP;
+
+ eth_dev = &rte_eth_devices[port_id];
+
+ ret = strncmp(eth_dev->device->driver->name, "net_cn20k", 8);
+ if (ret)
+ return -ENOTSUP;
+
+ roc_idev_nix_rx_inject_set(port_id, enable);
+
+ return 0;
+}
+
struct rte_cryptodev_ops cn20k_cpt_ops = {
/* Device control ops */
.dev_configure = cnxk_cpt_dev_config,
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
index bdd6f71022..752ca588e0 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.h
@@ -25,6 +25,14 @@ extern struct rte_cryptodev_ops cn20k_cpt_ops;
void cn20k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
+__rte_internal
+uint16_t __rte_hot cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
+ struct rte_security_session **sess,
+ uint16_t nb_pkts);
+
+__rte_internal
+int cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable);
+
static __rte_always_inline void __rte_hot
cn20k_cpt_lmtst_dual_submit(uint64_t *io_addr, const uint16_t lmt_id, int *i)
{
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index ba7f1baf86..7374a83795 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -112,41 +112,6 @@ cn20k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
return -ENOTSUP;
}
-static int
-cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
-{
- RTE_SET_USED(device);
- RTE_SET_USED(port_id);
- RTE_SET_USED(enable);
-
- return -ENOTSUP;
-}
-
-#if defined(RTE_ARCH_ARM64)
-static uint16_t
-cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
- struct rte_security_session **sess, uint16_t nb_pkts)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(pkts);
- RTE_SET_USED(sess);
- RTE_SET_USED(nb_pkts);
-
- return 0;
-}
-#else
-uint16_t __rte_hot
-cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
- struct rte_security_session **sess, uint16_t nb_pkts)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(sess);
- RTE_SET_USED(nb_pkts);
-
- return 0;
-}
-#endif
-
/* Update platform specific security ops */
void
cn20k_sec_ops_override(void)
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
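One detail of the multi-segment Rx inject path above that is easy to misread: the two-step adjustment of wqe_hdr (add BIT_ULL(7), then mask with ~(BIT_ULL(7) - 1) after subtracting one) is the usual align-up to the next 128-byte boundary. A dependency-free sketch checking that equivalence:

#include <assert.h>
#include <stdint.h>

/* Same arithmetic as the driver: p += 128; p = (p - 1) & ~127. */
static uintptr_t
align_up_128(uintptr_t p)
{
	p += 128;
	return (p - 1) & ~(uintptr_t)127;
}

int
main(void)
{
	assert(align_up_128(0) == 0);     /* already aligned: unchanged */
	assert(align_up_128(1) == 128);
	assert(align_up_128(127) == 128);
	assert(align_up_128(128) == 128);
	assert(align_up_128(129) == 256);
	return 0;
}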
* [PATCH 22/40] crypto/cnxk: add skeleton for tls
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (20 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 21/40] crypto/cnxk: add Rx inject in security lookaside Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 23/40] crypto/cnxk: add tls write session creation Tejasree Kondoj
` (17 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add skeleton for tls support for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_ie_ow_tls.h | 233 ++++++++++++++++++++++++++++
drivers/crypto/cnxk/cn20k_tls.c | 56 +++++++
drivers/crypto/cnxk/cn20k_tls.h | 40 +++++
drivers/crypto/cnxk/meson.build | 1 +
4 files changed, 330 insertions(+)
create mode 100644 drivers/common/cnxk/roc_ie_ow_tls.h
create mode 100644 drivers/crypto/cnxk/cn20k_tls.c
create mode 100644 drivers/crypto/cnxk/cn20k_tls.h
diff --git a/drivers/common/cnxk/roc_ie_ow_tls.h b/drivers/common/cnxk/roc_ie_ow_tls.h
new file mode 100644
index 0000000000..d2338926cc
--- /dev/null
+++ b/drivers/common/cnxk/roc_ie_ow_tls.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __ROC_IE_OW_TLS_H__
+#define __ROC_IE_OW_TLS_H__
+
+#include "roc_platform.h"
+
+#define ROC_IE_OW_TLS_CTX_ILEN 1
+#define ROC_IE_OW_TLS_CTX_HDR_SIZE 1
+#define ROC_IE_OW_TLS_AR_WIN_SIZE_MAX 4096
+#define ROC_IE_OW_TLS_LOG_MIN_AR_WIN_SIZE_M1 5
+
+/* u64 array size to fit anti replay window bits */
+#define ROC_IE_OW_TLS_AR_WINBITS_SZ \
+ (PLT_ALIGN_CEIL(ROC_IE_OW_TLS_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) / BITS_PER_LONG_LONG)
+
+/* CN20K TLS opcodes */
+#define ROC_IE_OW_TLS_MAJOR_OP_RECORD_ENC 0x16UL
+#define ROC_IE_OW_TLS_MAJOR_OP_RECORD_DEC 0x17UL
+#define ROC_IE_OW_TLS13_MAJOR_OP_RECORD_ENC 0x18UL
+#define ROC_IE_OW_TLS13_MAJOR_OP_RECORD_DEC 0x19UL
+
+#define ROC_IE_OW_TLS_CTX_MAX_OPAD_IPAD_LEN 128
+#define ROC_IE_OW_TLS_CTX_MAX_KEY_IV_LEN 48
+#define ROC_IE_OW_TLS_CTX_MAX_IV_LEN 16
+
+enum roc_ie_ow_tls_mac_type {
+ ROC_IE_OW_TLS_MAC_MD5 = 1,
+ ROC_IE_OW_TLS_MAC_SHA1 = 2,
+ ROC_IE_OW_TLS_MAC_SHA2_256 = 4,
+ ROC_IE_OW_TLS_MAC_SHA2_384 = 5,
+ ROC_IE_OW_TLS_MAC_SHA2_512 = 6,
+};
+
+enum roc_ie_ow_tls_cipher_type {
+ ROC_IE_OW_TLS_CIPHER_3DES = 1,
+ ROC_IE_OW_TLS_CIPHER_AES_CBC = 3,
+ ROC_IE_OW_TLS_CIPHER_AES_GCM = 7,
+ ROC_IE_OW_TLS_CIPHER_AES_CCM = 10,
+ ROC_IE_OW_TLS_CIPHER_CHACHA_POLY = 9,
+};
+
+enum roc_ie_ow_tls_ver {
+ ROC_IE_OW_TLS_VERSION_TLS_12 = 1,
+ ROC_IE_OW_TLS_VERSION_DTLS_12 = 2,
+ ROC_IE_OW_TLS_VERSION_TLS_13 = 3,
+};
+
+enum roc_ie_ow_tls_aes_key_len {
+ ROC_IE_OW_TLS_AES_KEY_LEN_128 = 1,
+ ROC_IE_OW_TLS_AES_KEY_LEN_256 = 3,
+};
+
+enum {
+ ROC_IE_OW_TLS_IV_SRC_DEFAULT = 0,
+ ROC_IE_OW_TLS_IV_SRC_FROM_SA = 1,
+};
+
+struct roc_ie_ow_tls_read_ctx_update_reg {
+ uint64_t ar_base;
+ uint64_t ar_valid_mask;
+ uint64_t hard_life;
+ uint64_t soft_life;
+ uint64_t mib_octs;
+ uint64_t mib_pkts;
+ uint64_t ar_winbits[ROC_IE_OW_TLS_AR_WINBITS_SZ];
+};
+
+struct roc_ie_ow_tls_1_3_read_ctx_update_reg {
+ uint64_t rsvd0;
+ uint64_t ar_valid_mask;
+ uint64_t hard_life;
+ uint64_t soft_life;
+ uint64_t mib_octs;
+ uint64_t mib_pkts;
+ uint64_t rsvd1;
+};
+
+union roc_ie_ow_tls_param2 {
+ uint16_t u16;
+ struct {
+ uint8_t msg_type;
+ uint8_t rsvd;
+ } s;
+};
+
+struct roc_ie_ow_tls_read_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t ar_win : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t ctx_id : 16;
+
+ uint64_t orig_pkt_fabs : 1;
+ uint64_t orig_pkt_free : 1;
+ uint64_t pkind : 6;
+
+ uint64_t rsvd0 : 1;
+ uint64_t et_ovrwr : 1;
+ uint64_t pkt_output : 2;
+ uint64_t pkt_format : 1;
+ uint64_t defrag_opt : 2;
+ uint64_t x2p_dst : 1;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd1 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd2 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd3;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t rsvd4 : 50;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd5;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OW_TLS_CTX_MAX_KEY_IV_LEN];
+
+ union {
+ struct {
+ /* Word10 - Word16 */
+ struct roc_ie_ow_tls_1_3_read_ctx_update_reg ctx;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OW_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 - Word95 */
+ struct roc_ie_ow_tls_read_ctx_update_reg ctx;
+ } tls_12;
+ };
+};
+
+struct roc_ie_ow_tls_write_sa {
+ /* Word0 */
+ union {
+ struct {
+ uint64_t rsvd0 : 3;
+ uint64_t hard_life_dec : 1;
+ uint64_t soft_life_dec : 1;
+ uint64_t count_glb_octets : 1;
+ uint64_t count_glb_pkts : 1;
+ uint64_t count_mib_bytes : 1;
+
+ uint64_t count_mib_pkts : 1;
+ uint64_t hw_ctx_off : 7;
+
+ uint64_t rsvd1 : 32;
+
+ uint64_t ctx_push_size : 7;
+ uint64_t rsvd2 : 1;
+
+ uint64_t ctx_hdr_size : 2;
+ uint64_t aop_valid : 1;
+ uint64_t rsvd3 : 1;
+ uint64_t ctx_size : 4;
+ } s;
+ uint64_t u64;
+ } w0;
+
+ /* Word1 */
+ uint64_t w1_rsvd4;
+
+ /* Word2 */
+ union {
+ struct {
+ uint64_t version_select : 4;
+ uint64_t aes_key_len : 2;
+ uint64_t cipher_select : 4;
+ uint64_t mac_select : 4;
+ uint64_t iv_at_cptr : 1;
+ uint64_t rsvd5 : 49;
+ } s;
+ uint64_t u64;
+ } w2;
+
+ /* Word3 */
+ uint64_t w3_rsvd6;
+
+ /* Word4 - Word9 */
+ uint8_t cipher_key[ROC_IE_OW_TLS_CTX_MAX_KEY_IV_LEN];
+
+ union {
+ struct {
+ /* Word10 */
+ uint64_t w10_rsvd7;
+
+ uint64_t seq_num;
+ } tls_13;
+
+ struct {
+ /* Word10 - Word25 */
+ uint8_t opad_ipad[ROC_IE_OW_TLS_CTX_MAX_OPAD_IPAD_LEN];
+
+ /* Word26 */
+ uint64_t w26_rsvd7;
+
+ /* Word27 */
+ uint64_t seq_num;
+ } tls_12;
+ };
+};
+#endif /* __ROC_IE_OW_TLS_H__ */
diff --git a/drivers/crypto/cnxk/cn20k_tls.c b/drivers/crypto/cnxk/cn20k_tls.c
new file mode 100644
index 0000000000..cef13a68a4
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_tls.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#include <cryptodev_pmd.h>
+
+#include "roc_cpt.h"
+#include "roc_se.h"
+
+#include "cn20k_cryptodev_sec.h"
+#include "cn20k_tls.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_security.h"
+
+int
+cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn20k_sec_session *sess,
+ struct rte_security_session_conf *conf)
+{
+ RTE_SET_USED(vf);
+ RTE_SET_USED(qp);
+ RTE_SET_USED(sess);
+ RTE_SET_USED(conf);
+
+ return 0;
+}
+
+int
+cn20k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess)
+{
+ RTE_SET_USED(vf);
+ RTE_SET_USED(qp);
+ RTE_SET_USED(tls_xfrm);
+ RTE_SET_USED(crypto_xfrm);
+ RTE_SET_USED(sess);
+
+ return 0;
+}
+
+int
+cn20k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess)
+{
+
+ RTE_SET_USED(qp);
+ RTE_SET_USED(sess);
+
+ return 0;
+}
diff --git a/drivers/crypto/cnxk/cn20k_tls.h b/drivers/crypto/cnxk/cn20k_tls.h
new file mode 100644
index 0000000000..27124602a0
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_tls.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __CN20K_TLS_H__
+#define __CN20K_TLS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie_ow_tls.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+
+/* Forward declaration */
+struct cn20k_sec_session;
+
+struct __rte_aligned(ROC_ALIGN) cn20k_tls_record
+{
+ union {
+ /** Read SA */
+ struct roc_ie_ow_tls_read_sa read_sa;
+ /** Write SA */
+ struct roc_ie_ow_tls_write_sa write_sa;
+ };
+};
+
+int cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct cn20k_sec_session *sess,
+ struct rte_security_session_conf *conf);
+
+int cn20k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct rte_security_session *sess);
+
+int cn20k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess);
+
+#endif /* __CN20K_TLS_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index f8077e4d4c..912c4a0851 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -21,6 +21,7 @@ sources = files(
'cn20k_cryptodev_ops.c',
'cn20k_cryptodev_sec.c',
'cn20k_ipsec.c',
+ 'cn20k_tls.c',
'cnxk_cryptodev.c',
'cnxk_cryptodev_capabilities.c',
'cnxk_cryptodev_devargs.c',
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
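For readers skimming the new roc_ie_ow_tls.h above: ROC_IE_OW_TLS_AR_WINBITS_SZ is just the 4096-bit maximum anti-replay window expressed as a count of 64-bit words. A standalone sketch of the same arithmetic, with the constants repeated locally rather than pulled from roc_platform.h:

#include <stdint.h>
#include <stdio.h>

#define AR_WIN_SIZE_MAX 4096u
#define BITS_PER_U64    64u
/* Round up to whole 64-bit words, as PLT_ALIGN_CEIL()/BITS_PER_LONG_LONG does. */
#define AR_WINBITS_WORDS ((AR_WIN_SIZE_MAX + BITS_PER_U64 - 1) / BITS_PER_U64)

int
main(void)
{
	printf("max AR window: %u bits -> %u x uint64_t (%u bytes)\n",
	       AR_WIN_SIZE_MAX, AR_WINBITS_WORDS,
	       AR_WINBITS_WORDS * (unsigned int)sizeof(uint64_t));
	return 0;
}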
* [PATCH 23/40] crypto/cnxk: add tls write session creation
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (21 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 22/40] crypto/cnxk: add skeleton for tls Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 24/40] crypto/cnxk: add tls read " Tejasree Kondoj
` (16 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for tls read session creation for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 4 +
drivers/crypto/cnxk/cn20k_cryptodev_sec.h | 12 +
drivers/crypto/cnxk/cn20k_tls.c | 463 +++++++++++++++++++++-
3 files changed, 473 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index 7374a83795..e5158af595 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -32,6 +32,10 @@ cn20k_sec_session_create(void *dev, struct rte_security_session_conf *conf,
return cn20k_ipsec_session_create(vf, qp, &conf->ipsec, conf->crypto_xform, sess);
}
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn20k_tls_record_session_create(vf, qp, &conf->tls_record,
+ conf->crypto_xform, sess);
+
return -ENOTSUP;
}
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.h b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
index 4d6dcc9670..42f588e4ac 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.h
@@ -12,9 +12,19 @@
#include "roc_cpt.h"
#include "cn20k_ipsec.h"
+#include "cn20k_tls.h"
#define SEC_SESS_SIZE sizeof(struct rte_security_session)
+struct cn20k_tls_opt {
+ uint16_t pad_shift : 3;
+ uint16_t enable_padding : 1;
+ uint16_t tail_fetch_len : 2;
+ uint16_t tls_ver : 2;
+ uint16_t is_write : 1;
+ uint16_t mac_len : 7;
+};
+
void cn20k_sec_ops_override(void);
struct __rte_aligned(ROC_ALIGN) cn20k_sec_session {
@@ -31,6 +41,7 @@ struct __rte_aligned(ROC_ALIGN) cn20k_sec_session {
uint8_t iv_length;
union {
uint16_t u16;
+ struct cn20k_tls_opt tls_opt;
struct {
uint8_t ip_csum;
uint8_t is_outbound : 1;
@@ -46,6 +57,7 @@ struct __rte_aligned(ROC_ALIGN) cn20k_sec_session {
*/
union {
struct cn20k_ipsec_sa sa;
+ struct cn20k_tls_record tls_rec;
};
};
diff --git a/drivers/crypto/cnxk/cn20k_tls.c b/drivers/crypto/cnxk/cn20k_tls.c
index cef13a68a4..40fe48ae69 100644
--- a/drivers/crypto/cnxk/cn20k_tls.c
+++ b/drivers/crypto/cnxk/cn20k_tls.c
@@ -15,8 +15,452 @@
#include "cn20k_tls.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
+#include "cnxk_ipsec.h"
#include "cnxk_security.h"
+static int
+tls_xform_cipher_auth_verify(struct rte_crypto_sym_xform *cipher_xform,
+ struct rte_crypto_sym_xform *auth_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = cipher_xform->cipher.algo;
+ enum rte_crypto_auth_algorithm a_algo = auth_xform->auth.algo;
+ int ret = -ENOTSUP;
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ if ((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) || (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA384_HMAC))
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ if (a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ ret = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ if ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) ||
+ (a_algo == RTE_CRYPTO_AUTH_SHA384_HMAC))
+ ret = 0;
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int
+tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_cipher_algorithm c_algo = crypto_xform->cipher.algo;
+ uint16_t keylen = crypto_xform->cipher.key.length;
+
+ if (((c_algo == RTE_CRYPTO_CIPHER_NULL) && (keylen == 0)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_3DES_CBC) && (keylen == 24)) ||
+ ((c_algo == RTE_CRYPTO_CIPHER_AES_CBC) && ((keylen == 16) || (keylen == 32))))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+ enum rte_crypto_auth_algorithm a_algo = crypto_xform->auth.algo;
+ uint16_t keylen = crypto_xform->auth.key.length;
+
+ if (((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) && (keylen == 16)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) && (keylen == 20)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) && (keylen == 32)) ||
+ ((a_algo == RTE_CRYPTO_AUTH_SHA384_HMAC) && (keylen == 48)))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+tls_xform_aead_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ uint16_t keylen = crypto_xform->aead.key.length;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
+ return -EINVAL;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ &&
+ crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
+ return -EINVAL;
+
+ if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ if ((keylen == 16) || (keylen == 32))
+ return 0;
+ }
+
+ if ((crypto_xform->aead.algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305) && (keylen == 32))
+ return 0;
+
+ return -EINVAL;
+}
+
+static int
+cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
+ struct rte_crypto_sym_xform *crypto_xform)
+{
+ struct rte_crypto_sym_xform *auth_xform, *cipher_xform = NULL;
+ int ret = 0;
+
+ if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_3))
+ return -EINVAL;
+
+ if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
+ return -EINVAL;
+
+ if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ /* optional padding is not allowed in TLS-1.2 for AEAD */
+ if ((tls_xform->options.extra_padding_enable == 1) &&
+ (tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_3))
+ return -EINVAL;
+
+ return tls_xform_aead_verify(tls_xform, crypto_xform);
+ }
+
+	/* TLS-1.3 only supports AEAD.
+	 * Control should not reach here for TLS-1.3.
+	 */
+ if (tls_xform->ver == RTE_SECURITY_VERSION_TLS_1_3)
+ return -EINVAL;
+
+ if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
+ /* Egress */
+
+ /* First should be for auth in Egress */
+ if (crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return -EINVAL;
+
+ /* Next if present, should be for cipher in Egress */
+ if ((crypto_xform->next != NULL) &&
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER))
+ return -EINVAL;
+
+ auth_xform = crypto_xform;
+ cipher_xform = crypto_xform->next;
+ } else {
+ /* Ingress */
+
+ /* First can be for auth only when next is NULL in Ingress. */
+ if ((crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) &&
+ (crypto_xform->next != NULL))
+ return -EINVAL;
+ else if ((crypto_xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+ (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH))
+ return -EINVAL;
+
+ cipher_xform = crypto_xform;
+ auth_xform = crypto_xform->next;
+ }
+
+ if (cipher_xform) {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+ !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
+ auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ } else {
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+ (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE))
+ return -EINVAL;
+
+ if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+		    (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY))
+ return -EINVAL;
+ }
+
+ if (cipher_xform)
+ ret = tls_xform_cipher_verify(cipher_xform);
+
+ if (!ret)
+ ret = tls_xform_auth_verify(auth_xform);
+
+ if (cipher_xform && !ret)
+ return tls_xform_cipher_auth_verify(cipher_xform, auth_xform);
+
+ return ret;
+}
+
+static size_t
+tls_read_ctx_size(struct roc_ie_ow_tls_read_sa *sa, enum rte_security_tls_version tls_ver)
+{
+ size_t size;
+
+ /* Variable based on Anti-replay Window */
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ size = offsetof(struct roc_ie_ow_tls_read_sa, tls_13.ctx) +
+ sizeof(struct roc_ie_ow_tls_1_3_read_ctx_update_reg);
+ } else {
+ size = offsetof(struct roc_ie_ow_tls_read_sa, tls_12.ctx) +
+ offsetof(struct roc_ie_ow_tls_read_ctx_update_reg, ar_winbits);
+ }
+
+ if (sa->w0.s.ar_win)
+ size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
+
+ return size;
+}
+
+static int
+tls_read_sa_fill(struct roc_ie_ow_tls_read_sa *read_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm, struct cn20k_tls_opt *tls_opt)
+{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint64_t *tmp, *tmp_key;
+ uint32_t replay_win_sz;
+ uint8_t *cipher_key;
+ int i, length = 0;
+ size_t offset;
+
+ /* Initialize the SA */
+ memset(read_sa, 0, sizeof(struct roc_ie_ow_tls_read_sa));
+
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_TLS_12;
+ read_sa->tls_12.ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ read_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_DTLS_12;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ read_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_TLS_13;
+ read_sa->tls_13.ctx.ar_valid_mask = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
+ cipher_key = read_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ length = crypto_xfrm->aead.key.length;
+ if (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ read_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_AES_GCM;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_128;
+ else
+ read_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ }
+
+ if (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
+ read_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_CHACHA_POLY;
+ read_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ }
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ read_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ read_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ read_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ memcpy(cipher_key, key, length);
+ }
+
+ switch (auth_xfrm->auth.algo) {
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ read_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_MD5;
+ tls_opt->mac_len = 0;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ read_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA1;
+ tls_opt->mac_len = 20;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ read_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA2_256;
+ tls_opt->mac_len = 32;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ read_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA2_384;
+ tls_opt->mac_len = 48;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, read_sa->tls_12.opad_ipad,
+ ROC_SE_TLS);
+
+ tmp = (uint64_t *)read_sa->tls_12.opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp[i] = rte_be_to_cpu_64(tmp[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OW_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+		/* Only power-of-two window sizes are supported */
+ replay_win_sz = tls_xfrm->dtls_1_2.ar_win_sz;
+ if (replay_win_sz) {
+ if (!rte_is_power_of_2(replay_win_sz) ||
+ replay_win_sz > ROC_IE_OW_TLS_AR_WIN_SIZE_MAX)
+ return -ENOTSUP;
+
+ read_sa->w0.s.ar_win = rte_log2_u32(replay_win_sz) - 5;
+ }
+ }
+
+ read_sa->w0.s.ctx_hdr_size = ROC_IE_OW_TLS_CTX_HDR_SIZE;
+ read_sa->w0.s.aop_valid = 1;
+
+ offset = offsetof(struct roc_ie_ow_tls_read_sa, tls_12.ctx);
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ offset = offsetof(struct roc_ie_ow_tls_read_sa, tls_13.ctx);
+
+ /* Entire context size in 128B units */
+ read_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa, tls_ver), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+
+ /* Word offset for HW managed CTX field */
+ read_sa->w0.s.hw_ctx_off = offset / 8;
+ read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+cn20k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn20k_sec_session *sec_sess)
+{
+ struct roc_ie_ow_tls_read_sa *sa_dptr;
+ uint8_t tls_ver = tls_xfrm->ver;
+ struct cn20k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *read_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ read_sa = &tls->read_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ow_tls_read_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm, &sec_sess->tls_opt);
+ if (ret) {
+ plt_err("Could not fill read session parameters");
+ goto sa_dptr_free;
+ }
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+ }
+
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ if ((tls_ver == RTE_SECURITY_VERSION_TLS_1_2) ||
+ (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)) {
+ inst_w4.s.opcode_major = ROC_IE_OW_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OW_INPLACE_BIT;
+ sec_sess->tls_opt.tail_fetch_len = 0;
+ if (sa_dptr->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_3DES)
+ sec_sess->tls_opt.tail_fetch_len = 1;
+ else if (sa_dptr->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_AES_CBC)
+ sec_sess->tls_opt.tail_fetch_len = 2;
+ } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OW_TLS13_MAJOR_OP_RECORD_DEC | ROC_IE_OW_INPLACE_BIT;
+ }
+
+ sec_sess->tls_opt.tls_ver = tls_ver;
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, read_sa);
+
+ memset(read_sa, 0, sizeof(struct roc_ie_ow_tls_read_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(read_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, read_sa, sizeof(struct roc_ie_ow_tls_read_sa));
+ if (ret) {
+ plt_err("Could not write read session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ ret = roc_cpt_lf_ctx_flush(lf, read_sa, true);
+ if (ret == -EFAULT) {
+ plt_err("Could not flush TLS read session to hardware");
+ goto sa_dptr_free;
+ }
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
int
cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct cn20k_sec_session *sess,
@@ -36,13 +480,20 @@ cn20k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct rte_crypto_sym_xform *crypto_xfrm,
struct rte_security_session *sess)
{
- RTE_SET_USED(vf);
- RTE_SET_USED(qp);
- RTE_SET_USED(tls_xfrm);
- RTE_SET_USED(crypto_xfrm);
- RTE_SET_USED(sess);
+ struct roc_cpt *roc_cpt;
+ int ret;
- return 0;
+ ret = cnxk_tls_xform_verify(tls_xfrm, crypto_xfrm);
+ if (ret)
+ return ret;
+
+ roc_cpt = &vf->cpt;
+
+ if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+ return cn20k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn20k_sec_session *)sess);
+
+ return -ENOTSUP;
}
int
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
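To see how the read-session path above is reached from an application, here is a hedged sketch of the rte_security_session_conf an application might build for a TLS 1.2 read session with AES-128-GCM. Only the tls_record and crypto_xform fields are taken from the patch; the action type, IV offset, key, salt and digest sizes are illustrative assumptions, and the mempool/session-create plumbing is omitted.

#include <stdint.h>
#include <string.h>

#include <rte_crypto.h>
#include <rte_security.h>

static void
build_tls12_read_conf(struct rte_security_session_conf *conf,
		      struct rte_crypto_sym_xform *aead,
		      const uint8_t key[16], const uint8_t salt[4])
{
	memset(conf, 0, sizeof(*conf));
	memset(aead, 0, sizeof(*aead));

	/* AEAD transform consumed by tls_read_sa_fill() */
	aead->type = RTE_CRYPTO_SYM_XFORM_AEAD;
	aead->aead.op = RTE_CRYPTO_AEAD_OP_DECRYPT;
	aead->aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
	aead->aead.key.data = (uint8_t *)(uintptr_t)key;
	aead->aead.key.length = 16;
	aead->aead.iv.offset = sizeof(struct rte_crypto_op) +
			       sizeof(struct rte_crypto_sym_op);
	aead->aead.iv.length = 8;     /* explicit per-record IV */
	aead->aead.digest_length = 16;

	conf->action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
	conf->protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD;
	conf->crypto_xform = aead;

	conf->tls_record.ver = RTE_SECURITY_VERSION_TLS_1_2;
	conf->tls_record.type = RTE_SECURITY_TLS_SESS_TYPE_READ;
	conf->tls_record.tls_1_2.seq_no = 1;
	memcpy(&conf->tls_record.tls_1_2.imp_nonce, salt,
	       sizeof(conf->tls_record.tls_1_2.imp_nonce));

	/* Hand conf to rte_security_session_create() on the cryptodev's
	 * security context to end up in cn20k_tls_read_sa_create().
	 */
}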
* [PATCH 24/40] crypto/cnxk: add tls read session creation
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (22 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 23/40] crypto/cnxk: add tls write session creation Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 25/40] crypto/cnxk: add tls session destroy Tejasree Kondoj
` (15 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add session creation for tls write for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_tls.c | 329 +++++++++++++++++++++++++++++++-
1 file changed, 327 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_tls.c b/drivers/crypto/cnxk/cn20k_tls.c
index 40fe48ae69..4a68edf731 100644
--- a/drivers/crypto/cnxk/cn20k_tls.c
+++ b/drivers/crypto/cnxk/cn20k_tls.c
@@ -461,6 +461,330 @@ cn20k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
return ret;
}
+static int
+tls_write_rlens_get(struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ enum rte_crypto_cipher_algorithm c_algo = RTE_CRYPTO_CIPHER_NULL;
+ enum rte_crypto_auth_algorithm a_algo = RTE_CRYPTO_AUTH_NULL;
+ uint8_t roundup_byte, tls_hdr_len;
+ uint8_t mac_len, iv_len;
+
+ switch (tls_xfrm->ver) {
+ case RTE_SECURITY_VERSION_TLS_1_2:
+ case RTE_SECURITY_VERSION_TLS_1_3:
+ tls_hdr_len = 5;
+ break;
+ case RTE_SECURITY_VERSION_DTLS_1_2:
+ tls_hdr_len = 13;
+ break;
+ default:
+ tls_hdr_len = 0;
+ break;
+ }
+
+ /* Get Cipher and Auth algo */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ return tls_hdr_len + ROC_CPT_AES_GCM_IV_LEN + ROC_CPT_AES_GCM_MAC_LEN;
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ c_algo = crypto_xfrm->cipher.algo;
+ if (crypto_xfrm->next)
+ a_algo = crypto_xfrm->next->auth.algo;
+ } else {
+ a_algo = crypto_xfrm->auth.algo;
+ if (crypto_xfrm->next)
+ c_algo = crypto_xfrm->next->cipher.algo;
+ }
+
+ switch (c_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ roundup_byte = 4;
+ iv_len = 0;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ roundup_byte = ROC_CPT_DES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_DES_IV_LEN;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ roundup_byte = ROC_CPT_AES_BLOCK_LENGTH;
+ iv_len = ROC_CPT_AES_CBC_IV_LEN;
+ break;
+ default:
+ roundup_byte = 0;
+ iv_len = 0;
+ break;
+ }
+
+ switch (a_algo) {
+ case RTE_CRYPTO_AUTH_NULL:
+ mac_len = 0;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ mac_len = 16;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ mac_len = 20;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ mac_len = 32;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		mac_len = 48;
+ break;
+ default:
+ mac_len = 0;
+ break;
+ }
+
+ return tls_hdr_len + iv_len + mac_len + roundup_byte;
+}
+
+static int
+tls_write_sa_fill(struct roc_ie_ow_tls_write_sa *write_sa,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm)
+{
+ enum rte_security_tls_version tls_ver = tls_xfrm->ver;
+ struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+ const uint8_t *key = NULL;
+ uint8_t *cipher_key;
+ uint64_t *tmp_key;
+ int i, length = 0;
+ size_t offset;
+
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_TLS_12;
+ write_sa->tls_12.seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+ write_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_DTLS_12;
+ write_sa->tls_12.seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+ (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+ write_sa->tls_12.seq_num -= 1;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ write_sa->w2.s.version_select = ROC_IE_OW_TLS_VERSION_TLS_13;
+ write_sa->tls_13.seq_num = tls_xfrm->tls_1_3.seq_no - 1;
+ }
+
+ cipher_key = write_sa->cipher_key;
+
+ /* Set encryption algorithm */
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ length = crypto_xfrm->aead.key.length;
+ if (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ write_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_AES_GCM;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_128;
+ else
+ write_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ }
+ if (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
+ write_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_CHACHA_POLY;
+ write_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ }
+
+ key = crypto_xfrm->aead.key.data;
+ memcpy(cipher_key, key, length);
+
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+ else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3)
+ memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_3.imp_nonce, 12);
+
+ goto key_swap;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ auth_xfrm = crypto_xfrm;
+ cipher_xfrm = crypto_xfrm->next;
+ } else {
+ cipher_xfrm = crypto_xfrm;
+ auth_xfrm = crypto_xfrm->next;
+ }
+
+ if (cipher_xfrm != NULL) {
+ if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_3DES;
+ length = cipher_xfrm->cipher.key.length;
+ } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+ write_sa->w2.s.cipher_select = ROC_IE_OW_TLS_CIPHER_AES_CBC;
+ length = cipher_xfrm->cipher.key.length;
+ if (length == 16)
+ write_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_128;
+ else if (length == 32)
+ write_sa->w2.s.aes_key_len = ROC_IE_OW_TLS_AES_KEY_LEN_256;
+ else
+ return -EINVAL;
+ } else {
+ return -EINVAL;
+ }
+
+ key = cipher_xfrm->cipher.key.data;
+ if (key != NULL && length != 0) {
+ /* Copy encryption key */
+ memcpy(cipher_key, key, length);
+ }
+ }
+
+ if (auth_xfrm != NULL) {
+ if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_MD5;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA1;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA2_256;
+ else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA384_HMAC)
+ write_sa->w2.s.mac_select = ROC_IE_OW_TLS_MAC_SHA2_384;
+ else
+ return -EINVAL;
+
+ roc_se_hmac_opad_ipad_gen(write_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
+ auth_xfrm->auth.key.length, write_sa->tls_12.opad_ipad,
+ ROC_SE_TLS);
+ }
+
+ tmp_key = (uint64_t *)write_sa->tls_12.opad_ipad;
+ for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+key_swap:
+ tmp_key = (uint64_t *)cipher_key;
+ for (i = 0; i < (int)(ROC_IE_OW_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+ tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+ write_sa->w0.s.ctx_hdr_size = ROC_IE_OW_TLS_CTX_HDR_SIZE;
+ /* Entire context size in 128B units */
+ write_sa->w0.s.ctx_size =
+ (PLT_ALIGN_CEIL(sizeof(struct roc_ie_ow_tls_write_sa), ROC_CTX_UNIT_128B) /
+ ROC_CTX_UNIT_128B) -
+ 1;
+ offset = offsetof(struct roc_ie_ow_tls_write_sa, tls_12.w26_rsvd7);
+
+ if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ offset = offsetof(struct roc_ie_ow_tls_write_sa, tls_13.w10_rsvd7);
+ write_sa->w0.s.ctx_size -= 1;
+ }
+
+ /* Word offset for HW managed CTX field */
+ write_sa->w0.s.hw_ctx_off = offset / 8;
+ write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+ write_sa->w0.s.aop_valid = 1;
+
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OW_TLS_IV_SRC_DEFAULT;
+
+ if (write_sa->w2.s.version_select != ROC_IE_OW_TLS_VERSION_TLS_13) {
+#ifdef LA_IPSEC_DEBUG
+ if (tls_xfrm->options.iv_gen_disable == 1)
+ write_sa->w2.s.iv_at_cptr = ROC_IE_OW_TLS_IV_SRC_FROM_SA;
+#else
+ if (tls_xfrm->options.iv_gen_disable) {
+ plt_err("Application provided IV is not supported");
+ return -ENOTSUP;
+ }
+#endif
+ }
+
+ rte_wmb();
+
+ return 0;
+}
+
+static int
+cn20k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+ struct rte_security_tls_record_xform *tls_xfrm,
+ struct rte_crypto_sym_xform *crypto_xfrm,
+ struct cn20k_sec_session *sec_sess)
+{
+ struct roc_ie_ow_tls_write_sa *sa_dptr;
+ uint8_t tls_ver = tls_xfrm->ver;
+ struct cn20k_tls_record *tls;
+ union cpt_inst_w4 inst_w4;
+ void *write_sa;
+ int ret = 0;
+
+ tls = &sec_sess->tls_rec;
+ write_sa = &tls->write_sa;
+
+ /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ow_tls_write_sa), 8);
+ if (sa_dptr == NULL) {
+ plt_err("Couldn't allocate memory for SA dptr");
+ return -ENOMEM;
+ }
+
+ /* Translate security parameters to SA */
+ ret = tls_write_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+ if (ret) {
+ plt_err("Could not fill write session parameters");
+ goto sa_dptr_free;
+ }
+
+ if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+ } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+ } else {
+ sec_sess->iv_offset = crypto_xfrm->next->cipher.iv.offset;
+ sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
+ }
+
+ sec_sess->tls_opt.is_write = 1;
+ sec_sess->tls_opt.pad_shift = 0;
+ sec_sess->tls_opt.tls_ver = tls_ver;
+ sec_sess->tls_opt.enable_padding = tls_xfrm->options.extra_padding_enable;
+ sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
+ sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+ /* pre-populate CPT INST word 4 */
+ inst_w4.u64 = 0;
+ if ((tls_ver == RTE_SECURITY_VERSION_TLS_1_2) ||
+ (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)) {
+ inst_w4.s.opcode_major = ROC_IE_OW_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OW_INPLACE_BIT;
+ if (sa_dptr->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_3DES)
+ sec_sess->tls_opt.pad_shift = 3;
+ else
+ sec_sess->tls_opt.pad_shift = 4;
+ } else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
+ inst_w4.s.opcode_major =
+ ROC_IE_OW_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OW_INPLACE_BIT;
+ }
+ sec_sess->inst.w4 = inst_w4.u64;
+ sec_sess->inst.w7 = cnxk_cpt_sec_inst_w7_get(roc_cpt, write_sa);
+
+ memset(write_sa, 0, sizeof(struct roc_ie_ow_tls_write_sa));
+
+ /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+ memcpy(write_sa, sa_dptr, 8);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Write session using microcode opcode */
+ ret = roc_cpt_ctx_write(lf, sa_dptr, write_sa, sizeof(struct roc_ie_ow_tls_write_sa));
+ if (ret) {
+ plt_err("Could not write tls write session to hardware");
+ goto sa_dptr_free;
+ }
+
+ /* Trigger CTX flush so that data is written back to DRAM */
+ ret = roc_cpt_lf_ctx_flush(lf, write_sa, false);
+ if (ret == -EFAULT) {
+ plt_err("Could not flush TLS write session to hardware");
+ goto sa_dptr_free;
+ }
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+ plt_free(sa_dptr);
+
+ return ret;
+}
+
int
cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct cn20k_sec_session *sess,
@@ -492,8 +816,9 @@ cn20k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
return cn20k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
(struct cn20k_sec_session *)sess);
-
- return -ENOTSUP;
+ else
+ return cn20k_tls_write_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+ (struct cn20k_sec_session *)sess);
}
int
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
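A small clarification of the DTLS 1.2 sequence handling in tls_write_sa_fill() above: the 16-bit epoch occupies the top bits of the 64-bit counter, the 48-bit record sequence number the low bits, and the SA is written with that value minus one, presumably so that the first record goes out with the configured number. A standalone sketch with illustrative values:

#include <assert.h>
#include <stdint.h>

/* Pack epoch/seq exactly as the patch does and store "value - 1" in the SA. */
static uint64_t
dtls12_sa_seq(uint16_t epoch, uint64_t next_seq_no)
{
	uint64_t v = ((uint64_t)epoch << 48) |
		     (next_seq_no & 0x0000ffffffffffffULL);

	return v - 1;
}

int
main(void)
{
	/* epoch 1, first record (seq 0) */
	assert(dtls12_sa_seq(1, 0) == (1ULL << 48) - 1);
	/* epoch 2, next record 5 */
	assert(dtls12_sa_seq(2, 5) == ((2ULL << 48) | 5) - 1);
	return 0;
}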
* [PATCH 25/40] crypto/cnxk: add tls session destroy
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (23 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 24/40] crypto/cnxk: add tls read " Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 26/40] crypto/cnxk: add enq and dequeue support for TLS Tejasree Kondoj
` (14 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add tls session destroy for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 3 +
drivers/crypto/cnxk/cn20k_tls.c | 84 ++++++++++++++++++++++-
2 files changed, 85 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index e5158af595..ab676cc6cf 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -58,6 +58,9 @@ cn20k_sec_session_destroy(void *dev, struct rte_security_session *sec_sess)
if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cn20k_sec_ipsec_session_destroy(qp, cn20k_sec_sess);
+ if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn20k_sec_tls_session_destroy(qp, cn20k_sec_sess);
+
return -EINVAL;
}
diff --git a/drivers/crypto/cnxk/cn20k_tls.c b/drivers/crypto/cnxk/cn20k_tls.c
index 4a68edf731..e0cd1b1b34 100644
--- a/drivers/crypto/cnxk/cn20k_tls.c
+++ b/drivers/crypto/cnxk/cn20k_tls.c
@@ -785,6 +785,36 @@ cn20k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
return ret;
}
+static void
+tls_write_sa_init(struct roc_ie_ow_tls_write_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ow_tls_write_sa));
+
+ offset = offsetof(struct roc_ie_ow_tls_write_sa, tls_12.w26_rsvd7);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OW_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OW_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
+static void
+tls_read_sa_init(struct roc_ie_ow_tls_read_sa *sa)
+{
+ size_t offset;
+
+ memset(sa, 0, sizeof(struct roc_ie_ow_tls_read_sa));
+
+ offset = offsetof(struct roc_ie_ow_tls_read_sa, tls_12.ctx);
+ sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+ sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+ sa->w0.s.ctx_size = ROC_IE_OW_TLS_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OW_TLS_CTX_HDR_SIZE;
+ sa->w0.s.aop_valid = 1;
+}
+
int
cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct cn20k_sec_session *sess,
@@ -824,9 +854,59 @@ cn20k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
int
cn20k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn20k_sec_session *sess)
{
+ struct cn20k_tls_record *tls;
+ struct roc_cpt_lf *lf;
+ void *sa_dptr = NULL;
+ int ret = -ENOMEM;
- RTE_SET_USED(qp);
- RTE_SET_USED(sess);
+ lf = &qp->lf;
+
+ tls = &sess->tls_rec;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ roc_cpt_lf_ctx_flush(lf, &tls->read_sa, false);
+
+ if (sess->tls_opt.is_write) {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ow_tls_write_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_write_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->write_sa,
+ sizeof(struct roc_ie_ow_tls_write_sa));
+ plt_free(sa_dptr);
+ }
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &tls->write_sa);
+ }
+ } else {
+ sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ow_tls_read_sa), 8);
+ if (sa_dptr != NULL) {
+ tls_read_sa_init(sa_dptr);
+
+ ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->read_sa,
+ sizeof(struct roc_ie_ow_tls_read_sa));
+ plt_free(sa_dptr);
+ }
+ if (ret) {
+ /* MC write_ctx failed. Attempt reload of CTX */
+
+ /* Wait for 1 ms so that flush is complete */
+ rte_delay_ms(1);
+
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, &tls->read_sa);
+ }
+ }
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 41+ messages in thread
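The tls_read_sa_init()/tls_write_sa_init() helpers above express hw_ctx_off in 8-byte words, i.e. the byte offset of the first hardware-managed field divided by eight, and reuse that value as ctx_push_size. A standalone sketch of the computation over a reduced mock structure (not the real SA layout):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mock_sa {
	uint64_t w0;
	uint64_t w1;
	uint64_t w2;
	uint64_t w3;
	uint8_t cipher_key[48];
	uint8_t opad_ipad[128];
	uint64_t hw_managed_ctx; /* first hardware-managed word */
};

int
main(void)
{
	size_t off = offsetof(struct mock_sa, hw_managed_ctx);

	/* Offset in 8-byte units, as stored in w0.s.hw_ctx_off */
	printf("byte offset %zu -> hw_ctx_off %zu\n", off, off / 8);
	return 0;
}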
* [PATCH 26/40] crypto/cnxk: add enq and dequeue support for TLS
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (24 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 25/40] crypto/cnxk: add tls session destroy Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 27/40] crypto/cnxk: tls post process Tejasree Kondoj
` (13 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add enqueue and dequeue support for TLS for cn20k
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 14 ++
drivers/crypto/cnxk/cn20k_tls_ops.h | 250 ++++++++++++++++++++++
2 files changed, 264 insertions(+)
create mode 100644 drivers/crypto/cnxk/cn20k_tls_ops.h
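One practical note before the diff: the write enqueue path below rejects packets whose mbuf tail room is smaller than sess->max_extended_len, which tls_write_rlens_get() (earlier in this series) computed as record header + IV + MAC + padding round-up. A standalone sketch of that worst-case budget for DTLS 1.2 with AES-CBC and HMAC-SHA1; the constants are the usual sizes, assumed here rather than taken from roc_cpt.h:

#include <stdio.h>

int
main(void)
{
	unsigned int dtls_hdr = 13; /* DTLS 1.2 record header */
	unsigned int iv = 16;       /* explicit AES-CBC IV */
	unsigned int mac = 20;      /* HMAC-SHA1 digest */
	unsigned int roundup = 16;  /* CBC padding, up to one block */

	printf("tail room to reserve per record: %u bytes\n",
	       dtls_hdr + iv + mac + roundup);
	return 0;
}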
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 97dfa5865f..cdca1f4a24 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -16,6 +16,7 @@
#include "cn20k_cryptodev_ops.h"
#include "cn20k_cryptodev_sec.h"
#include "cn20k_ipsec_la_ops.h"
+#include "cn20k_tls_ops.h"
#include "cnxk_ae.h"
#include "cnxk_cryptodev.h"
#include "cnxk_cryptodev_ops.h"
@@ -86,6 +87,17 @@ cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline int __rte_hot
+cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
+ struct cn20k_sec_session *sess, struct cpt_inst_s *inst,
+ struct cpt_inflight_req *infl_req)
+{
+ if (sess->tls_opt.is_write)
+ return process_tls_write(&qp->lf, op, sess, &qp->meta_info, infl_req, inst);
+ else
+ return process_tls_read(op, sess, &qp->meta_info, infl_req, inst);
+}
+
static __rte_always_inline int __rte_hot
cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn20k_sec_session *sess,
struct cpt_inst_s *inst, struct cpt_inflight_req *infl_req)
@@ -93,6 +105,8 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cn20k
if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cpt_sec_ipsec_inst_fill(qp, op, sess, &inst[0], infl_req);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cpt_sec_tls_inst_fill(qp, op, sess, &inst[0], infl_req);
return 0;
}
diff --git a/drivers/crypto/cnxk/cn20k_tls_ops.h b/drivers/crypto/cnxk/cn20k_tls_ops.h
new file mode 100644
index 0000000000..14f879f2a9
--- /dev/null
+++ b/drivers/crypto/cnxk/cn20k_tls_ops.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell.
+ */
+
+#ifndef __CN20K_TLS_OPS_H__
+#define __CN20K_TLS_OPS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie.h"
+
+#include "cn20k_cryptodev.h"
+#include "cn20k_cryptodev_sec.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_sg.h"
+
+static __rte_always_inline int
+process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn20k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst)
+{
+ struct cn20k_tls_opt tls_opt = sess->tls_opt;
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+#ifdef LA_IPSEC_DEBUG
+ struct roc_ie_ow_tls_write_sa *write_sa;
+#endif
+ struct rte_mbuf *m_src = sym_op->m_src;
+ struct rte_mbuf *m_dst = sym_op->m_dst;
+ uint32_t pad_len, pad_bytes;
+ struct rte_mbuf *last_seg;
+ union cpt_inst_w4 w4;
+ void *m_data = NULL;
+ uint8_t *in_buffer;
+
+ pad_bytes = (cop->aux_flags * 8) > 0xff ? 0xff : (cop->aux_flags * 8);
+ pad_len = (pad_bytes >> tls_opt.pad_shift) * tls_opt.enable_padding;
+
+#ifdef LA_IPSEC_DEBUG
+ write_sa = &sess->tls_rec.write_sa;
+ if (write_sa->w2.s.iv_at_cptr == ROC_IE_OW_TLS_IV_SRC_FROM_SA) {
+
+ uint8_t *iv = PLT_PTR_ADD(write_sa->cipher_key, 32);
+
+ if (write_sa->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_AES_GCM) {
+ uint32_t *tmp;
+
+ /* For GCM, the IV and salt format will be like below:
+ * iv[0-3]: lower bytes of IV in BE format.
+ * iv[4-7]: salt / nonce.
+ * iv[12-15]: upper bytes of IV in BE format.
+ */
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 4);
+ tmp = (uint32_t *)iv;
+ *tmp = rte_be_to_cpu_32(*tmp);
+
+ memcpy(iv + 12,
+ rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset + 4), 4);
+ tmp = (uint32_t *)(iv + 12);
+ *tmp = rte_be_to_cpu_32(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_AES_CBC) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 16);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ tmp = (uint64_t *)(iv + 8);
+ *tmp = rte_be_to_cpu_64(*tmp);
+ } else if (write_sa->w2.s.cipher_select == ROC_IE_OW_TLS_CIPHER_3DES) {
+ uint64_t *tmp;
+
+ memcpy(iv, rte_crypto_op_ctod_offset(cop, uint8_t *, sess->iv_offset), 8);
+ tmp = (uint64_t *)iv;
+ *tmp = rte_be_to_cpu_64(*tmp);
+ }
+
+ /* Trigger CTX reload to fetch new data from DRAM */
+ roc_cpt_lf_ctx_reload(lf, write_sa);
+ rte_delay_ms(1);
+ }
+#else
+ RTE_SET_USED(lf);
+#endif
+ /* Single buffer direct mode */
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ if (unlikely(rte_pktmbuf_tailroom(m_src) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room");
+ return -ENOMEM;
+ }
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.param1 = m_src->data_len;
+ w4.s.dlen = m_src->data_len;
+
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ w4.s.opcode_minor = pad_len;
+
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ last_seg = rte_pktmbuf_lastseg(m_src);
+
+ if (unlikely(rte_pktmbuf_tailroom(last_seg) < sess->max_extended_len)) {
+ plt_dp_err("Not enough tail room (required: %d, available: %d)",
+ sess->max_extended_len, rte_pktmbuf_tailroom(last_seg));
+ return -ENOMEM;
+ }
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)in_buffer);
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ /* Output Scatter List */
+ last_seg->data_len += sess->max_extended_len + pad_bytes;
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ if (m_dst == NULL)
+ m_dst = m_src;
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_dst);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
+ w4.s.opcode_major &= (~(ROC_IE_OW_INPLACE_BIT));
+ w4.s.opcode_minor = pad_len;
+ w4.s.param1 = w4.s.dlen;
+ w4.s.param2 = cop->param1.tls_record.content_type;
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+process_tls_read(struct rte_crypto_op *cop, struct cn20k_sec_session *sess,
+ struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
+ struct cpt_inst_s *inst)
+{
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ struct rte_mbuf *m_dst = sym_op->m_dst;
+ union cpt_inst_w4 w4;
+ uint8_t *in_buffer;
+ void *m_data;
+
+ if (likely(m_src->next == NULL)) {
+ void *vaddr;
+
+ vaddr = rte_pktmbuf_mtod(m_src, void *);
+
+ inst->dptr = (uint64_t)vaddr;
+ inst->rptr = (uint64_t)vaddr;
+
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = m_src->data_len;
+ w4.s.param1 = m_src->data_len;
+ inst->w4.u64 = w4.u64;
+ } else {
+ struct roc_sg2list_comp *scatter_comp, *gather_comp;
+ int tail_len = sess->tls_opt.tail_fetch_len * 16;
+ int pkt_len = rte_pktmbuf_pkt_len(m_src);
+ union cpt_inst_w5 cpt_inst_w5;
+ union cpt_inst_w6 cpt_inst_w6;
+ uint32_t g_size_bytes;
+ int i;
+
+ m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
+ if (unlikely(m_data == NULL)) {
+ plt_dp_err("Error allocating meta buffer for request");
+ return -ENOMEM;
+ }
+
+ in_buffer = (uint8_t *)m_data;
+ /* Input Gather List */
+ i = 0;
+
+ /* First 32 bytes in m_data are rsvd for tail fetch.
+ * SG list start from 32 byte onwards.
+ */
+ gather_comp = (struct roc_sg2list_comp *)((uint8_t *)(in_buffer + 32));
+
+ /* Add the last blocks as first gather component for tail fetch. */
+ if (tail_len) {
+ const uint8_t *output;
+
+ output = rte_pktmbuf_read(m_src, pkt_len - tail_len, tail_len, in_buffer);
+ if (output != in_buffer)
+ rte_memcpy(in_buffer, output, tail_len);
+ i = fill_sg2_comp(gather_comp, i, (uint64_t)in_buffer, tail_len);
+ }
+
+ i = fill_sg2_comp_from_pkt(gather_comp, i, m_src);
+
+ cpt_inst_w5.s.gather_sz = ((i + 2) / 3);
+ g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_sg2list_comp);
+
+ i = 0;
+ scatter_comp = (struct roc_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes);
+
+ if (m_dst == NULL)
+ m_dst = m_src;
+ i = fill_sg2_comp_from_pkt(scatter_comp, i, m_dst);
+
+ cpt_inst_w6.s.scatter_sz = ((i + 2) / 3);
+
+ cpt_inst_w5.s.dptr = (uint64_t)gather_comp;
+ cpt_inst_w6.s.rptr = (uint64_t)scatter_comp;
+
+ inst->w5.u64 = cpt_inst_w5.u64;
+ inst->w6.u64 = cpt_inst_w6.u64;
+ w4.u64 = sess->inst.w4;
+ w4.s.dlen = pkt_len + tail_len;
+ w4.s.param1 = w4.s.dlen;
+ w4.s.opcode_major &= (~(ROC_IE_OW_INPLACE_BIT));
+ inst->w4.u64 = w4.u64;
+ }
+
+ return 0;
+}
+#endif /* __CN20K_TLS_OPS_H__ */
--
2.25.1
* [PATCH 27/40] crypto/cnxk: tls post process
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (25 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 26/40] crypto/cnxk: add enq and dequeue support for TLS Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:50 ` [PATCH 28/40] crypto/cnxk: add tls session update Tejasree Kondoj
` (12 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add TLS post-processing for cn20k, covering TLS 1.2 MAC/padding trimming and TLS 1.3 content type extraction.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
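For reference, the TLS 1.2 path strips pad_val + 1 padding bytes plus the MAC
from the record tail and verifies that every padding byte equals the pad
value, accumulating the comparison with XOR instead of exiting early. A
single-buffer sketch of that check (illustrative only; the driver also walks
chained mbufs as shown in the diff):

#include <stdint.h>

/* rec_end points one byte past the decrypted record; pad_val is the
 * pad-length byte reported by the microcode (res->spi bits 16-23 in the
 * patch). */
static int
tls12_check_padding(const uint8_t *rec_end, uint8_t pad_val)
{
        uint32_t pad_len = (uint32_t)pad_val + 1;
        const uint8_t *pad = rec_end - pad_len;
        uint8_t pad_res = 0;
        uint32_t i;

        for (i = 0; i < pad_len; i++)
                pad_res |= pad[i] ^ pad_val;

        return pad_res ? -1 : 0; /* non-zero means malformed padding */
}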
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 160 ++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index cdca1f4a24..92e4bce32e 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -317,6 +317,164 @@ cn20k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *
mbuf->pkt_len = m_len;
}
+static inline void
+cn20k_cpt_tls12_trim_mac(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res, uint8_t mac_len)
+{
+ struct rte_mbuf *mac_prev_seg = NULL, *mac_seg = NULL, *seg;
+ uint32_t pad_len, trim_len, mac_offset, pad_offset;
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ uint16_t m_len = res->rlen;
+ uint32_t i, nb_segs = 1;
+ uint8_t pad_res = 0;
+ uint8_t pad_val;
+
+ pad_val = ((res->spi >> 16) & 0xff);
+ pad_len = pad_val + 1;
+ trim_len = pad_len + mac_len;
+ mac_offset = m_len - trim_len;
+ pad_offset = mac_offset + mac_len;
+
+ /* Handle Direct Mode */
+ if (mbuf->next == NULL) {
+ uint8_t *ptr = rte_pktmbuf_mtod_offset(mbuf, uint8_t *, pad_offset);
+
+ for (i = 0; i < pad_len; i++)
+ pad_res |= ptr[i] ^ pad_val;
+
+ if (pad_res) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ }
+ mbuf->pkt_len = m_len - trim_len;
+ mbuf->data_len = m_len - trim_len;
+
+ return;
+ }
+
+ /* Handle SG mode */
+ seg = mbuf;
+ while (mac_offset >= seg->data_len) {
+ mac_offset -= seg->data_len;
+ mac_prev_seg = seg;
+ seg = seg->next;
+ nb_segs++;
+ }
+ mac_seg = seg;
+
+ pad_offset = mac_offset + mac_len;
+ while (pad_offset >= seg->data_len) {
+ pad_offset -= seg->data_len;
+ seg = seg->next;
+ }
+
+ while (pad_len != 0) {
+ uint8_t *ptr = rte_pktmbuf_mtod_offset(seg, uint8_t *, pad_offset);
+ uint8_t len = RTE_MIN(seg->data_len - pad_offset, pad_len);
+
+ for (i = 0; i < len; i++)
+ pad_res |= ptr[i] ^ pad_val;
+
+ pad_offset = 0;
+ pad_len -= len;
+ seg = seg->next;
+ }
+
+ if (pad_res) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ }
+
+ mbuf->pkt_len = m_len - trim_len;
+ if (mac_offset) {
+ rte_pktmbuf_free(mac_seg->next);
+ mac_seg->next = NULL;
+ mac_seg->data_len = mac_offset;
+ mbuf->nb_segs = nb_segs;
+ } else {
+ rte_pktmbuf_free(mac_seg);
+ mac_prev_seg->next = NULL;
+ mbuf->nb_segs = nb_segs - 1;
+ }
+}
+
+/* TLS-1.3:
+ * Read from last until a non-zero value is encountered.
+ * Return the non zero value as the content type.
+ * Remove the MAC and content type and padding bytes.
+ */
+static inline void
+cn20k_cpt_tls13_trim_mac(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res)
+{
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ struct rte_mbuf *seg = mbuf;
+ uint16_t m_len = res->rlen;
+ uint8_t *ptr, type = 0x0;
+ int len, i, nb_segs = 1;
+
+ while (m_len && !type) {
+ len = m_len;
+ seg = mbuf;
+
+ /* get the last seg */
+ while (len > seg->data_len) {
+ len -= seg->data_len;
+ seg = seg->next;
+ nb_segs++;
+ }
+
+ /* walkthrough from last until a non zero value is found */
+ ptr = rte_pktmbuf_mtod(seg, uint8_t *);
+ i = len;
+ while (i && (ptr[--i] == 0))
+ ;
+
+ type = ptr[i];
+ m_len -= len;
+ }
+
+ if (type) {
+ cop->param1.tls_record.content_type = type;
+ mbuf->pkt_len = m_len + i;
+ mbuf->nb_segs = nb_segs;
+ seg->data_len = i;
+ rte_pktmbuf_free(seg->next);
+ seg->next = NULL;
+ } else {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ }
+}
+
+static inline void
+cn20k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res,
+ struct cn20k_sec_session *sess)
+{
+ struct cn20k_tls_opt tls_opt = sess->tls_opt;
+ struct rte_mbuf *mbuf = cop->sym->m_src;
+ uint16_t m_len = res->rlen;
+
+ if (!res->uc_compcode) {
+ if (mbuf->next == NULL)
+ mbuf->data_len = m_len;
+ mbuf->pkt_len = m_len;
+ cop->param1.tls_record.content_type = (res->spi >> 24) & 0xff;
+ return;
+ }
+
+ /* Any error other than post process */
+ if (res->uc_compcode != ROC_SE_ERR_SSL_POST_PROCESS) {
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ cop->aux_flags = res->uc_compcode;
+ plt_err("crypto op failed with UC compcode: 0x%x", res->uc_compcode);
+ return;
+ }
+
+ /* Extra padding scenario: Verify padding. Remove padding and MAC */
+ if (tls_opt.tls_ver != RTE_SECURITY_VERSION_TLS_1_3)
+ cn20k_cpt_tls12_trim_mac(cop, res, (uint8_t)tls_opt.mac_len);
+ else
+ cn20k_cpt_tls13_trim_mac(cop, res);
+}
+
static inline void
cn20k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *res)
{
@@ -326,6 +484,8 @@ cn20k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn20k_res_s *re
sess = sym_op->session;
if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
cn20k_cpt_ipsec_post_process(cop, res);
+ else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ cn20k_cpt_tls_post_process(cop, res, sess);
}
static inline void
--
2.25.1
* [PATCH 28/40] crypto/cnxk: add tls session update
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (26 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 27/40] crypto/cnxk: tls post process Tejasree Kondoj
@ 2025-05-23 13:50 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 29/40] crypto/cnxk: include required headers Tejasree Kondoj
` (11 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:50 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for TLS record session update on cn20k.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
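A rough application-side sketch of the only update this patch accepts (the
write direction; read-direction updates return -ENOTSUP). The helper below is
hypothetical and assumes the generic rte_security_session_update() entry
point with opaque instance/session handles:

#include <errno.h>
#include <rte_security.h>

static int
tls_write_session_rekey(void *sec_ctx, void *sec_sess,
                        struct rte_security_session_conf *conf)
{
        /* Mirrors the PMD behaviour added in this patch */
        if (conf->tls_record.type == RTE_SECURITY_TLS_SESS_TYPE_READ)
                return -ENOTSUP;

        return rte_security_session_update(sec_ctx, sec_sess, conf);
}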
drivers/crypto/cnxk/cn20k_cryptodev_sec.c | 3 +++
drivers/crypto/cnxk/cn20k_tls.c | 15 ++++++++++-----
2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
index ab676cc6cf..ae1e31e7e1 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_sec.c
@@ -116,6 +116,9 @@ cn20k_sec_session_update(void *dev, struct rte_security_session *sec_sess,
if (cn20k_sec_sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
return cn20k_ipsec_session_update(vf, qp, cn20k_sec_sess, conf);
+ if (conf->protocol == RTE_SECURITY_PROTOCOL_TLS_RECORD)
+ return cn20k_tls_record_session_update(vf, qp, cn20k_sec_sess, conf);
+
return -ENOTSUP;
}
diff --git a/drivers/crypto/cnxk/cn20k_tls.c b/drivers/crypto/cnxk/cn20k_tls.c
index e0cd1b1b34..cdf885b997 100644
--- a/drivers/crypto/cnxk/cn20k_tls.c
+++ b/drivers/crypto/cnxk/cn20k_tls.c
@@ -820,12 +820,17 @@ cn20k_tls_record_session_update(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
struct cn20k_sec_session *sess,
struct rte_security_session_conf *conf)
{
- RTE_SET_USED(vf);
- RTE_SET_USED(qp);
- RTE_SET_USED(sess);
- RTE_SET_USED(conf);
+ struct roc_cpt *roc_cpt;
+ int ret;
- return 0;
+ if (conf->tls_record.type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+ return -ENOTSUP;
+
+ roc_cpt = &vf->cpt;
+ ret = cn20k_tls_write_sa_create(roc_cpt, &qp->lf, &conf->tls_record, conf->crypto_xform,
+ (struct cn20k_sec_session *)sess);
+
+ return ret;
}
int
--
2.25.1
* [PATCH 29/40] crypto/cnxk: include required headers
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (27 preceding siblings ...)
2025-05-23 13:50 ` [PATCH 28/40] crypto/cnxk: add tls session update Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 30/40] crypto/cnxk: support raw API for cn20k Tejasree Kondoj
` (10 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Include the rte_crypto and rte_security headers required by rte_pmd_cnxk_crypto.h.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
index 02278605a2..46861ab2cf 100644
--- a/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
+++ b/drivers/crypto/cnxk/rte_pmd_cnxk_crypto.h
@@ -13,6 +13,9 @@
#include <stdint.h>
+#include <rte_crypto.h>
+#include <rte_security.h>
+
/* Forward declarations */
/**
--
2.25.1
* [PATCH 30/40] crypto/cnxk: support raw API for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (28 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 29/40] crypto/cnxk: include required headers Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 31/40] crypto/cnxk: add model check " Tejasree Kondoj
` (9 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add raw datapath API support for cn20k.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
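For orientation, a rough application-side sketch of the raw datapath setup
that this patch wires up for cn20k (assumes the generic cryptodev raw DP API
declared in rte_cryptodev.h; vector construction, enqueue/dequeue and error
handling are omitted):

#include <stdlib.h>
#include <rte_cryptodev.h>

/* Illustrative only: sess must be a symmetric session the raw path supports;
 * the PMD rejects PDCP/KASUMI/SM and SHA1 auth-only contexts, as shown in
 * cn20k_sym_configure_raw_dp_ctx() below. */
static struct rte_crypto_raw_dp_ctx *
raw_dp_ctx_setup(uint8_t dev_id, uint16_t qp_id, void *sess)
{
        union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };
        struct rte_crypto_raw_dp_ctx *ctx;
        int drv_size;

        drv_size = rte_cryptodev_get_raw_dp_ctx_size(dev_id);
        if (drv_size < 0)
                return NULL;

        ctx = calloc(1, sizeof(*ctx) + drv_size);
        if (ctx == NULL)
                return NULL;

        if (rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, ctx,
                                               RTE_CRYPTO_OP_WITH_SESSION,
                                               sess_ctx, 0) < 0) {
                free(ctx);
                return NULL;
        }

        return ctx;
}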
drivers/crypto/cnxk/cn20k_cryptodev_ops.c | 384 +++++++++++++++++++++-
1 file changed, 377 insertions(+), 7 deletions(-)
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 92e4bce32e..9859950b80 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -664,10 +664,352 @@ cn20k_cpt_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *inf
}
}
+static inline int
+cn20k_cpt_raw_fill_inst(struct cnxk_iov *iov, struct cnxk_cpt_qp *qp,
+ struct cnxk_sym_dp_ctx *dp_ctx, struct cpt_inst_s inst[],
+ struct cpt_inflight_req *infl_req, void *opaque)
+{
+ struct cnxk_se_sess *sess;
+ int ret;
+
+ const union cpt_res_s res = {
+ .cn20k.compcode = CPT_COMP_NOT_DONE,
+ };
+
+ inst[0].w0.u64 = 0;
+ inst[0].w2.u64 = 0;
+ inst[0].w3.u64 = 0;
+
+ sess = dp_ctx->sess;
+
+ switch (sess->dp_thr_type) {
+ case CPT_DP_THREAD_TYPE_PT:
+ ret = fill_raw_passthrough_params(iov, inst);
+ break;
+ case CPT_DP_THREAD_TYPE_FC_CHAIN:
+ ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false,
+ false, true);
+ break;
+ case CPT_DP_THREAD_TYPE_FC_AEAD:
+ ret = fill_raw_fc_params(iov, sess, &qp->meta_info, infl_req, &inst[0], false, true,
+ true);
+ break;
+ case CPT_DP_THREAD_AUTH_ONLY:
+ ret = fill_raw_digest_params(iov, sess, &qp->meta_info, infl_req, &inst[0], true);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ if (unlikely(ret))
+ return 0;
+
+ inst[0].res_addr = (uint64_t)&infl_req->res;
+ rte_atomic_store_explicit(&infl_req->res.u64[0], res.u64[0], rte_memory_order_relaxed);
+ infl_req->opaque = opaque;
+
+ inst[0].w7.u64 = sess->cpt_inst_w7;
+
+ return 1;
+}
+
+static uint32_t
+cn20k_cpt_raw_enqueue_burst(void *qpair, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void *user_data[], int *enqueue_status)
+{
+ uint16_t lmt_id, nb_allowed, nb_ops = vec->num;
+ struct cpt_inflight_req *infl_req;
+ uint64_t lmt_base, io_addr, head;
+ struct cnxk_cpt_qp *qp = qpair;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+ struct pending_queue *pend_q;
+ uint32_t count = 0, index;
+ union cpt_fc_write_s fc;
+ struct cpt_inst_s *inst;
+ uint64_t *fc_addr;
+ int ret, i;
+
+ pend_q = &qp->pend_q;
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ head = pend_q->head;
+ nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
+ nb_ops = RTE_MIN(nb_ops, nb_allowed);
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+ fc_addr = qp->lmtline.fc_addr;
+
+ const uint32_t fc_thresh = qp->lmtline.fc_thresh;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx;
+again:
+ fc.u64[0] = rte_atomic_load_explicit(fc_addr, rte_memory_order_relaxed);
+ if (unlikely(fc.s.qsize > fc_thresh)) {
+ i = 0;
+ goto pend_q_commit;
+ }
+
+ for (i = 0; i < RTE_MIN(CN20K_CPT_PKTS_PER_LOOP, nb_ops); i++) {
+ struct cnxk_iov iov;
+
+ index = count + i;
+ infl_req = &pend_q->req_queue[head];
+ infl_req->op_flags = 0;
+
+ cnxk_raw_burst_to_iov(vec, &ofs, index, &iov);
+ ret = cn20k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[i], infl_req,
+ user_data[index]);
+ if (unlikely(ret != 1)) {
+ plt_dp_err("Could not process vec: %d", index);
+ if (i == 0 && count == 0)
+ return -1;
+ else if (i == 0)
+ goto pend_q_commit;
+ else
+ break;
+ }
+ pending_queue_advance(&head, pq_mask);
+ }
+
+ cn20k_cpt_lmtst_dual_submit(&io_addr, lmt_id, &i);
+
+ if (nb_ops - i > 0 && i == CN20K_CPT_PKTS_PER_LOOP) {
+ nb_ops -= i;
+ count += i;
+ goto again;
+ }
+
+pend_q_commit:
+ rte_atomic_thread_fence(rte_memory_order_release);
+
+ pend_q->head = head;
+ pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+
+ *enqueue_status = 1;
+ return count + i;
+}
+
+static int
+cn20k_cpt_raw_enqueue(void *qpair, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv, struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv, void *user_data)
+{
+ struct cpt_inflight_req *infl_req;
+ uint64_t lmt_base, io_addr, head;
+ struct cnxk_cpt_qp *qp = qpair;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+ uint16_t lmt_id, nb_allowed;
+ struct cpt_inst_s *inst;
+ union cpt_fc_write_s fc;
+ struct cnxk_iov iov;
+ uint64_t *fc_addr;
+ int ret, i = 1;
+
+ struct pending_queue *pend_q = &qp->pend_q;
+ const uint64_t pq_mask = pend_q->pq_mask;
+ const uint32_t fc_thresh = qp->lmtline.fc_thresh;
+
+ head = pend_q->head;
+ nb_allowed = pending_queue_free_cnt(head, pend_q->tail, pq_mask);
+
+ if (unlikely(nb_allowed == 0))
+ return -1;
+
+ cnxk_raw_to_iov(data_vec, n_data_vecs, &ofs, iv, digest, aad_or_auth_iv, &iov);
+
+ lmt_base = qp->lmtline.lmt_base;
+ io_addr = qp->lmtline.io_addr;
+ fc_addr = qp->lmtline.fc_addr;
+
+ ROC_LMT_BASE_ID_GET(lmt_base, lmt_id);
+ inst = (struct cpt_inst_s *)lmt_base;
+
+ fc.u64[0] = rte_atomic_load_explicit(fc_addr, rte_memory_order_relaxed);
+ if (unlikely(fc.s.qsize > fc_thresh))
+ return -1;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)drv_ctx;
+ infl_req = &pend_q->req_queue[head];
+ infl_req->op_flags = 0;
+
+ ret = cn20k_cpt_raw_fill_inst(&iov, qp, dp_ctx, &inst[0], infl_req, user_data);
+ if (unlikely(ret != 1)) {
+ plt_dp_err("Could not process vec");
+ return -1;
+ }
+
+ pending_queue_advance(&head, pq_mask);
+
+ cn20k_cpt_lmtst_dual_submit(&io_addr, lmt_id, &i);
+
+ pend_q->head = head;
+ pend_q->time_out = rte_get_timer_cycles() + DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+
+ return 1;
+}
+
+static inline int
+cn20k_cpt_raw_dequeue_post_process(struct cpt_cn20k_res_s *res)
+{
+ const uint8_t uc_compcode = res->uc_compcode;
+ const uint8_t compcode = res->compcode;
+ int ret = 1;
+
+ if (likely(compcode == CPT_COMP_GOOD)) {
+ if (unlikely(uc_compcode))
+ plt_dp_info("Request failed with microcode error: 0x%x", res->uc_compcode);
+ else
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static uint32_t
+cn20k_cpt_sym_raw_dequeue_burst(void *qptr, uint8_t *drv_ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ uint32_t max_nb_to_dequeue,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue, void **out_user_data,
+ uint8_t is_user_data_array, uint32_t *n_success,
+ int *dequeue_status)
+{
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ uint64_t infl_cnt, pq_tail;
+ union cpt_res_s res;
+ int is_op_success;
+ uint16_t nb_ops;
+ void *opaque;
+ int i = 0;
+
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ RTE_SET_USED(drv_ctx);
+ pq_tail = pend_q->tail;
+ infl_cnt = pending_queue_infl_cnt(pend_q->head, pq_tail, pq_mask);
+
+ /* Ensure infl_cnt isn't read before data lands */
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ opaque = infl_req->opaque;
+ if (get_dequeue_count)
+ nb_ops = get_dequeue_count(opaque);
+ else
+ nb_ops = max_nb_to_dequeue;
+ nb_ops = RTE_MIN(nb_ops, infl_cnt);
+
+ for (i = 0; i < nb_ops; i++) {
+ is_op_success = 0;
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ res.u64[0] =
+ rte_atomic_load_explicit(&infl_req->res.u64[0], rte_memory_order_relaxed);
+
+ if (unlikely(res.cn20k.compcode == CPT_COMP_NOT_DONE)) {
+ if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) {
+ plt_err("Request timed out");
+ cnxk_cpt_dump_on_err(qp);
+ pend_q->time_out = rte_get_timer_cycles() +
+ DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+ }
+ break;
+ }
+
+ pending_queue_advance(&pq_tail, pq_mask);
+
+ if (!cn20k_cpt_raw_dequeue_post_process(&res.cn20k)) {
+ is_op_success = 1;
+ *n_success += 1;
+ }
+
+ if (is_user_data_array) {
+ out_user_data[i] = infl_req->opaque;
+ post_dequeue(out_user_data[i], i, is_op_success);
+ } else {
+ if (i == 0)
+ out_user_data[0] = opaque;
+ post_dequeue(out_user_data[0], i, is_op_success);
+ }
+
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
+ rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
+ }
+
+ pend_q->tail = pq_tail;
+ *dequeue_status = 1;
+
+ return i;
+}
+
+static void *
+cn20k_cpt_sym_raw_dequeue(void *qptr, uint8_t *drv_ctx, int *dequeue_status,
+ enum rte_crypto_op_status *op_status)
+{
+ struct cpt_inflight_req *infl_req;
+ struct cnxk_cpt_qp *qp = qptr;
+ struct pending_queue *pend_q;
+ uint64_t pq_tail;
+ union cpt_res_s res;
+ void *opaque = NULL;
+
+ pend_q = &qp->pend_q;
+
+ const uint64_t pq_mask = pend_q->pq_mask;
+
+ RTE_SET_USED(drv_ctx);
+
+ pq_tail = pend_q->tail;
+
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+
+ infl_req = &pend_q->req_queue[pq_tail];
+
+ res.u64[0] = rte_atomic_load_explicit(&infl_req->res.u64[0], rte_memory_order_relaxed);
+
+ if (unlikely(res.cn20k.compcode == CPT_COMP_NOT_DONE)) {
+ if (unlikely(rte_get_timer_cycles() > pend_q->time_out)) {
+ plt_err("Request timed out");
+ cnxk_cpt_dump_on_err(qp);
+ pend_q->time_out = rte_get_timer_cycles() +
+ DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
+ }
+ goto exit;
+ }
+
+ pending_queue_advance(&pq_tail, pq_mask);
+
+ opaque = infl_req->opaque;
+
+ if (!cn20k_cpt_raw_dequeue_post_process(&res.cn20k))
+ *op_status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else
+ *op_status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
+ rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
+
+ *dequeue_status = 1;
+exit:
+ return opaque;
+}
+
static int
cn20k_sym_get_raw_dp_ctx_size(struct rte_cryptodev *dev __rte_unused)
{
- return 0;
+ return sizeof(struct cnxk_sym_dp_ctx);
}
static int
@@ -676,12 +1018,40 @@ cn20k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
enum rte_crypto_op_sess_type sess_type,
union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
{
- (void)dev;
- (void)qp_id;
- (void)raw_dp_ctx;
- (void)sess_type;
- (void)session_ctx;
- (void)is_update;
+ struct cnxk_se_sess *sess = (struct cnxk_se_sess *)session_ctx.crypto_sess;
+ struct cnxk_sym_dp_ctx *dp_ctx;
+
+ if (sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -ENOTSUP;
+
+ if (sess == NULL)
+ return -EINVAL;
+
+ if ((sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP) ||
+ (sess->dp_thr_type == CPT_DP_THREAD_TYPE_PDCP_CHAIN) ||
+ (sess->dp_thr_type == CPT_DP_THREAD_TYPE_KASUMI) ||
+ (sess->dp_thr_type == CPT_DP_THREAD_TYPE_SM))
+ return -ENOTSUP;
+
+ if ((sess->dp_thr_type == CPT_DP_THREAD_AUTH_ONLY) &&
+ ((sess->roc_se_ctx.fc_type == ROC_SE_KASUMI) ||
+ (sess->roc_se_ctx.fc_type == ROC_SE_PDCP)))
+ return -ENOTSUP;
+
+ if (sess->roc_se_ctx.hash_type == ROC_SE_SHA1_TYPE)
+ return -ENOTSUP;
+
+ dp_ctx = (struct cnxk_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+ dp_ctx->sess = sess;
+
+ if (!is_update) {
+ raw_dp_ctx->qp_data = (struct cnxk_cpt_qp *)dev->data->queue_pairs[qp_id];
+ raw_dp_ctx->dequeue = cn20k_cpt_sym_raw_dequeue;
+ raw_dp_ctx->dequeue_burst = cn20k_cpt_sym_raw_dequeue_burst;
+ raw_dp_ctx->enqueue = cn20k_cpt_raw_enqueue;
+ raw_dp_ctx->enqueue_burst = cn20k_cpt_raw_enqueue_burst;
+ }
+
return 0;
}
--
2.25.1
* [PATCH 31/40] crypto/cnxk: add model check for cn20k
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (29 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 30/40] crypto/cnxk: support raw API for cn20k Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 32/40] common/cnxk: fix salt handling with aes-ctr Tejasree Kondoj
` (8 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add model checks for cn20k and enable crypto and security capabilities on cn20k.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
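Applications can discover the capabilities enabled here through the standard
feature-flag query; a brief illustrative check (flag names are the ones used
in the diff):

#include <stdbool.h>
#include <rte_cryptodev.h>

static bool
cpt_has_rx_inject(uint8_t dev_id)
{
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        return !!(info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT);
}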
drivers/crypto/cnxk/cnxk_cryptodev.c | 14 ++++++++------
.../crypto/cnxk/cnxk_cryptodev_capabilities.c | 10 +++++-----
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 16 ++++++++++++----
3 files changed, 25 insertions(+), 15 deletions(-)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 1eede2e59c..96b5121097 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -21,10 +21,10 @@ cnxk_cpt_default_ff_get(void)
RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | RTE_CRYPTODEV_FF_SECURITY;
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
- if (roc_model_is_cn10ka_b0() || roc_model_is_cn10kb())
+ if (roc_model_is_cn10ka_b0() || roc_model_is_cn10kb() || roc_model_is_cn20k())
ff |= RTE_CRYPTODEV_FF_SECURITY_RX_INJECT;
return ff;
@@ -41,10 +41,12 @@ cnxk_cpt_eng_grp_add(struct roc_cpt *roc_cpt)
return -ENOTSUP;
}
- ret = roc_cpt_eng_grp_add(roc_cpt, CPT_ENG_TYPE_IE);
- if (ret < 0) {
- plt_err("Could not add CPT IE engines");
- return -ENOTSUP;
+ if (!roc_model_is_cn20k()) {
+ ret = roc_cpt_eng_grp_add(roc_cpt, CPT_ENG_TYPE_IE);
+ if (ret < 0) {
+ plt_err("Could not add CPT IE engines");
+ return -ENOTSUP;
+ }
}
ret = roc_cpt_eng_grp_add(roc_cpt, CPT_ENG_TYPE_AE);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 63d2eef349..d2747878d3 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -1976,16 +1976,16 @@ crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
CPT_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, kasumi);
CPT_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
- if (!roc_model_is_cn10k())
+ if (roc_model_is_cn9k())
cn9k_crypto_caps_add(cnxk_caps, &cur_pos);
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
cn10k_crypto_caps_add(cnxk_caps, hw_caps, &cur_pos);
cpt_caps_add(cnxk_caps, &cur_pos, caps_null, RTE_DIM(caps_null));
cpt_caps_add(cnxk_caps, &cur_pos, caps_end, RTE_DIM(caps_end));
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
cn10k_crypto_caps_update(cnxk_caps);
}
@@ -2060,7 +2060,7 @@ sec_ipsec_crypto_caps_populate(struct rte_cryptodev_capabilities cnxk_caps[],
SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, des);
SEC_IPSEC_CAPS_ADD(cnxk_caps, &cur_pos, hw_caps, sha1_sha2);
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
cn10k_sec_ipsec_crypto_caps_update(cnxk_caps, &cur_pos);
else
cn9k_sec_ipsec_crypto_caps_update(cnxk_caps);
@@ -2189,7 +2189,7 @@ cnxk_cpt_caps_populate(struct cnxk_cpt_vf *vf)
cnxk_sec_ipsec_caps_update(&vf->sec_caps[i]);
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
cn10k_sec_ipsec_caps_update(&vf->sec_caps[i]);
if (roc_model_is_cn9k())
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 982fbe991f..e5ca082e10 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -741,8 +741,10 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
inst_w7.s.cptr += 8;
/* Set the engine group */
- if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 ||
- sess->passthrough || sess->is_sm4)
+ if (roc_model_is_cn20k())
+ inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE];
+ else if (sess->zsk_flag || sess->aes_ctr_eea2 || sess->is_sha3 || sess->is_sm3 ||
+ sess->passthrough || sess->is_sm4)
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_SE];
else
inst_w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_IE];
@@ -1043,7 +1045,7 @@ RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_submit, 24.03)
void
rte_pmd_cnxk_crypto_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, uint16_t nb_inst)
{
- if (roc_model_is_cn10k())
+ if (roc_model_is_cn10k() || roc_model_is_cn20k())
return cnxk_crypto_cn10k_submit(qptr, inst, nb_inst);
else if (roc_model_is_cn9k())
return cnxk_crypto_cn9k_submit(qptr, inst, nb_inst);
@@ -1068,7 +1070,7 @@ rte_pmd_cnxk_crypto_cptr_flush(struct rte_pmd_cnxk_crypto_qptr *qptr,
return -EINVAL;
}
- if (unlikely(!roc_model_is_cn10k())) {
+ if (unlikely(roc_model_is_cn9k())) {
plt_err("Invalid cnxk model");
return -EINVAL;
}
@@ -1106,6 +1108,12 @@ rte_pmd_cnxk_crypto_cptr_get(struct rte_pmd_cnxk_crypto_sess *rte_sess)
}
if (rte_sess->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+ if (roc_model_is_cn20k()) {
+ struct cn20k_sec_session *sec_sess = PLT_PTR_CAST(rte_sess->sec_sess);
+
+ return PLT_PTR_CAST(&sec_sess->sa);
+ }
+
if (roc_model_is_cn10k()) {
struct cn10k_sec_session *sec_sess = PLT_PTR_CAST(rte_sess->sec_sess);
return PLT_PTR_CAST(&sec_sess->sa);
--
2.25.1
* [PATCH 32/40] common/cnxk: fix salt handling with aes-ctr
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (30 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 31/40] crypto/cnxk: add model check " Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 33/40] common/cnxk: set correct salt value for ctr algos Tejasree Kondoj
` (7 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Nithinsen Kaithakadan, Anoob Joseph, Aakash Sasidharan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
From: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
This patch fixes the salt value handling for the AES-CTR algorithm.
Fixes: 78d03027f2cc ("common/cnxk: add IPsec common code")
Signed-off-by: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
---
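A standalone restatement of the conversion added below, for clarity (assumes,
as the patch does, that the xform carries the 4-byte salt in big-endian form
and that the SA expects it in CPU byte order):

#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>

/* salt_key points at the SA salt field; illustrative only. */
static void
ctr_salt_set(uint8_t *salt_key, uint32_t xfrm_salt)
{
        uint32_t *tmp;

        memcpy(salt_key, &xfrm_salt, 4);
        tmp = (uint32_t *)salt_key;
        *tmp = rte_be_to_cpu_32(*tmp);
}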
drivers/common/cnxk/cnxk_security.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index ea3b87e65c..62ae7b9b2e 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -96,6 +96,9 @@ ot_ipsec_sa_common_param_fill(union roc_ot_ipsec_sa_word2 *w2, uint8_t *cipher_k
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
w2->s.enc_type = ROC_IE_SA_ENC_AES_CTR;
+ memcpy(salt_key, &ipsec_xfrm->salt, 4);
+ tmp_salt = (uint32_t *)salt_key;
+ *tmp_salt = rte_be_to_cpu_32(*tmp_salt);
break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
w2->s.enc_type = ROC_IE_SA_ENC_3DES_CBC;
--
2.25.1
* [PATCH 33/40] common/cnxk: set correct salt value for ctr algos
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (31 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 32/40] common/cnxk: fix salt handling with aes-ctr Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 34/40] crypto/cnxk: extend check for max supported gather entries Tejasree Kondoj
` (6 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Nithinsen Kaithakadan, Anoob Joseph, Aakash Sasidharan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
From: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
This patch sets the correct salt value for CTR algorithms in the remaining IPsec SA fill paths.
Fixes: 532963b8070 ("crypto/cnxk: move IPsec SA creation to common")
Signed-off-by: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 62ae7b9b2e..0e6777e6ca 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -965,6 +965,8 @@ on_fill_ipsec_common_sa(struct rte_security_ipsec_xform *ipsec,
cipher_key_len = crypto_xform->aead.key.length;
} else {
if (cipher_xform) {
+ if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CTR)
+ memcpy(common_sa->iv.gcm.nonce, &ipsec->salt, 4);
cipher_key = cipher_xform->cipher.key.data;
cipher_key_len = cipher_xform->cipher.key.length;
}
@@ -1285,6 +1287,9 @@ ow_ipsec_sa_common_param_fill(union roc_ow_ipsec_sa_word2 *w2, uint8_t *cipher_k
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
w2->s.enc_type = ROC_IE_SA_ENC_AES_CTR;
+ memcpy(salt_key, &ipsec_xfrm->salt, 4);
+ tmp_salt = (uint32_t *)salt_key;
+ *tmp_salt = rte_be_to_cpu_32(*tmp_salt);
break;
case RTE_CRYPTO_CIPHER_3DES_CBC:
w2->s.enc_type = ROC_IE_SA_ENC_3DES_CBC;
--
2.25.1
* [PATCH 34/40] crypto/cnxk: extend check for max supported gather entries
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (32 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 33/40] common/cnxk: set correct salt value for ctr algos Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 35/40] crypto/cnxk: add struct variable for custom metadata Tejasree Kondoj
` (5 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rupesh Chiluka, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Vidya Sagar Velumuri, dev
From: Rupesh Chiluka <rchiluka@marvell.com>
Extend the check for the maximum supported gather entries in the cnxk CPT PMD.
Signed-off-by: Rupesh Chiluka <rchiluka@marvell.com>
---
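Applications that build long mbuf chains can stay under these limits by
keeping nb_segs within the advertised maximum or by collapsing the chain
before submission. A hedged sketch using rte_pktmbuf_linearize(), which needs
enough tailroom in the first segment (MAX_SG_PTRS is a placeholder for the
applicable limit: 32 for the SG1 format, 48 for SG2, per this patch):

#include <rte_mbuf.h>

#define MAX_SG_PTRS 32

static int
prepare_pkt_for_cpt(struct rte_mbuf *m)
{
        if (m->nb_segs <= MAX_SG_PTRS)
                return 0;

        /* Collapse the chain into the first segment (requires tailroom) */
        return rte_pktmbuf_linearize(m);
}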
drivers/common/cnxk/roc_cpt_sg.h | 1 +
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 10 ++++++++++
drivers/crypto/cnxk/cn10k_tls_ops.h | 10 ++++++++++
drivers/crypto/cnxk/cn20k_ipsec_la_ops.h | 10 ++++++++++
drivers/crypto/cnxk/cn20k_tls_ops.h | 10 ++++++++++
drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 10 ++++++++++
6 files changed, 51 insertions(+)
diff --git a/drivers/common/cnxk/roc_cpt_sg.h b/drivers/common/cnxk/roc_cpt_sg.h
index e7e01cd29a..7c3caf94d7 100644
--- a/drivers/common/cnxk/roc_cpt_sg.h
+++ b/drivers/common/cnxk/roc_cpt_sg.h
@@ -15,6 +15,7 @@
#define ROC_SG_MAX_COMP 25
#define ROC_SG_MAX_DLEN_SIZE (ROC_SG_LIST_HDR_SIZE + (ROC_SG_MAX_COMP * ROC_SG_ENTRY_SIZE))
#define ROC_SG2_MAX_PTRS 48
+#define ROC_SG1_MAX_PTRS 32
struct roc_sglist_comp {
union {
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 87442c2a1f..0cc6283c7e 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -105,6 +105,11 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -224,6 +229,11 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
void *m_data;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
index 427c31425c..90600bd850 100644
--- a/drivers/crypto/cnxk/cn10k_tls_ops.h
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -117,6 +117,11 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -255,6 +260,11 @@ process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
uint32_t dlen;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
diff --git a/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
index eff51bd794..505fddb517 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
@@ -104,6 +104,11 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn20k_s
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -163,6 +168,11 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn20k_sec_session *sess, struct
void *m_data;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
diff --git a/drivers/crypto/cnxk/cn20k_tls_ops.h b/drivers/crypto/cnxk/cn20k_tls_ops.h
index 14f879f2a9..9f70a1d42d 100644
--- a/drivers/crypto/cnxk/cn20k_tls_ops.h
+++ b/drivers/crypto/cnxk/cn20k_tls_ops.h
@@ -118,6 +118,11 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn20k
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -194,6 +199,11 @@ process_tls_read(struct rte_crypto_op *cop, struct cn20k_sec_session *sess,
uint32_t g_size_bytes;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG2_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index befd5b0c05..79e00e3c57 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -111,6 +111,11 @@ process_outb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
return -ENOMEM;
}
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
@@ -206,6 +211,11 @@ process_inb_sa(struct cpt_qp_meta_info *m_info, struct rte_crypto_op *cop,
void *m_data;
int i;
+ if (unlikely(m_src->nb_segs > ROC_SG1_MAX_PTRS)) {
+ plt_dp_err("Exceeds max supported components. Reduce segments");
+ return -1;
+ }
+
m_data = alloc_op_meta(NULL, m_info->mlen, m_info->pool, infl_req);
if (unlikely(m_data == NULL)) {
plt_dp_err("Error allocating meta buffer for request");
--
2.25.1
* [PATCH 35/40] crypto/cnxk: add struct variable for custom metadata
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (33 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 34/40] crypto/cnxk: extend check for max supported gather entries Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 36/40] crypto/cnxk: add asym sessionless handling Tejasree Kondoj
` (4 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Add a struct member for passing custom metadata to the microcode.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
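The new member is pinned at a fixed offset inside the cache-line sized
inflight request so that the microcode can locate it. A simplified,
illustrative layout guard showing the intent of the PLT_STATIC_ASSERT added
below (the struct here is a stand-in, not the real cpt_inflight_req):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct example_inflight_req {
        uint8_t res_and_bookkeeping[32]; /* res + op/vec + mdata in the real struct */
        uint8_t meta[64];                /* custom metadata for microcode */
};

static_assert(offsetof(struct example_inflight_req, meta) == 32,
              "custom metadata must start at byte 32");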
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index df8d08b7c5..17d39aa34f 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -30,6 +30,8 @@
/* Default command timeout in seconds */
#define DEFAULT_COMMAND_TIMEOUT 4
+#define META_LEN 64
+
#define MOD_INC(i, l) ((i) == (l - 1) ? (i) = 0 : (i)++)
#define CN10K_CPT_PKTS_PER_LOOP 64
@@ -58,6 +60,7 @@ struct __rte_aligned(ROC_ALIGN) cpt_inflight_req {
struct rte_event_vector *vec;
};
void *mdata;
+ uint8_t meta[META_LEN];
uint8_t op_flags;
#ifdef CPT_INST_DEBUG_ENABLE
uint8_t scatter_sz;
@@ -70,6 +73,7 @@ struct __rte_aligned(ROC_ALIGN) cpt_inflight_req {
};
PLT_STATIC_ASSERT(sizeof(struct cpt_inflight_req) == ROC_CACHE_LINE_SZ);
+PLT_STATIC_ASSERT(offsetof(struct cpt_inflight_req, meta) == 32);
struct pending_queue {
/** Array of pending requests */
--
2.25.1
* [PATCH 36/40] crypto/cnxk: add asym sessionless handling
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (34 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 35/40] crypto/cnxk: add struct variable for custom metadata Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 37/40] crypto/cnxk: add support for sessionless asym Tejasree Kondoj
` (3 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Rupesh Chiluka, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Vidya Sagar Velumuri, dev
From: Rupesh Chiluka <rchiluka@marvell.com>
Add asymmetric sessionless handling in the event crypto adapter metadata extraction for cnxk.
Signed-off-by: Rupesh Chiluka <rchiluka@marvell.com>
---
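For a sessionless asymmetric operation submitted through the event crypto
adapter, the application places the adapter metadata in the op's private data
area; the extraction added here reads it back. A rough application-side
sketch (the private_data_offset computation is an assumption and must match
how the op mempool was sized; field names mirror what the PMD reads):

#include <stdint.h>
#include <rte_crypto.h>
#include <rte_event_crypto_adapter.h>

static void
asym_op_set_ca_meta(struct rte_crypto_op *op, uint8_t cdev_id, uint16_t qp_id,
                    uint8_t ev_queue_id, uint32_t flow_id)
{
        union rte_event_crypto_metadata *ec_mdata;

        op->private_data_offset = sizeof(struct rte_crypto_op) +
                                  sizeof(struct rte_crypto_asym_op);
        ec_mdata = (union rte_event_crypto_metadata *)((uint8_t *)op +
                                                       op->private_data_offset);
        ec_mdata->request_info.cdev_id = cdev_id;
        ec_mdata->request_info.queue_pair_id = qp_id;
        ec_mdata->response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
        ec_mdata->response_info.queue_id = ev_queue_id;
        ec_mdata->response_info.flow_id = flow_id;
}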
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 19 +++++++++++++++++--
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 20 ++++++++++++++++++--
2 files changed, 35 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 9ad0629519..813a2deb66 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -417,8 +417,23 @@ cn10k_ca_meta_info_extract(struct rte_crypto_op *op, struct cnxk_cpt_qp **qp, ui
priv = (struct cnxk_ae_sess *)op->asym->session;
*qp = priv->qp;
*w2 = priv->cpt_inst_w2;
- } else
- return -EINVAL;
+ } else {
+ union rte_event_crypto_metadata *ec_mdata;
+ struct rte_event *rsp_info;
+ uint8_t cdev_id;
+ uint16_t qp_id;
+
+ if (unlikely(op->private_data_offset == 0))
+ return -EINVAL;
+ ec_mdata = (union rte_event_crypto_metadata *)((uint8_t *)op +
+ op->private_data_offset);
+ rsp_info = &ec_mdata->response_info;
+ cdev_id = ec_mdata->request_info.cdev_id;
+ qp_id = ec_mdata->request_info.queue_pair_id;
+ *qp = rte_cryptodevs[cdev_id].data->queue_pairs[qp_id];
+ *w2 = CNXK_CPT_INST_W2((RTE_EVENT_TYPE_CRYPTODEV << 28) | rsp_info->flow_id,
+ rsp_info->sched_type, rsp_info->queue_id, 0);
+ }
} else
return -EINVAL;
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ee35ed1eba..fa22b5ce44 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -333,8 +333,24 @@ cn9k_ca_meta_info_extract(struct rte_crypto_op *op,
priv = (struct cnxk_ae_sess *)op->asym->session;
*qp = priv->qp;
inst->w2.u64 = priv->cpt_inst_w2;
- } else
- return -EINVAL;
+ } else {
+ union rte_event_crypto_metadata *ec_mdata;
+ struct rte_event *rsp_info;
+ uint8_t cdev_id;
+ uint16_t qp_id;
+
+ if (unlikely(op->private_data_offset == 0))
+ return -EINVAL;
+ ec_mdata = (union rte_event_crypto_metadata *)((uint8_t *)op +
+ op->private_data_offset);
+ rsp_info = &ec_mdata->response_info;
+ cdev_id = ec_mdata->request_info.cdev_id;
+ qp_id = ec_mdata->request_info.queue_pair_id;
+ *qp = rte_cryptodevs[cdev_id].data->queue_pairs[qp_id];
+ inst->w2.u64 = CNXK_CPT_INST_W2(
+ (RTE_EVENT_TYPE_CRYPTODEV << 28) | rsp_info->flow_id,
+ rsp_info->sched_type, rsp_info->queue_id, 0);
+ }
} else
return -EINVAL;
--
2.25.1
* [PATCH 37/40] crypto/cnxk: add support for sessionless asym
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (35 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 36/40] crypto/cnxk: add asym sessionless handling Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 38/40] doc: update CN20K CPT documentation Tejasree Kondoj
` (2 subsequent siblings)
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Vidya Sagar Velumuri, Anoob Joseph, Aakash Sasidharan,
Nithinsen Kaithakadan, Rupesh Chiluka, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add support for sessionless asymmetric operations in the cnxk crypto PMD.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
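With this change an application can submit an asymmetric operation without
creating a session first, by attaching the xform directly to the op. A
minimal sketch (xform contents and op payload are assumed to be filled by the
caller; the PMD builds and frees a temporary session internally):

#include <rte_crypto.h>
#include <rte_cryptodev.h>

static int
submit_sessionless_asym(uint8_t dev_id, uint16_t qp_id,
                        struct rte_mempool *op_pool,
                        struct rte_crypto_asym_xform *xform)
{
        struct rte_crypto_op *op;

        op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
        if (op == NULL)
                return -1;

        op->sess_type = RTE_CRYPTO_OP_SESSIONLESS;
        op->asym->xform = xform;

        return rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1) == 1 ? 0 : -1;
}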
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 72 ++++++++++++++++++++++-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 57 +++++++++++++++++-
drivers/crypto/cnxk/cnxk_cryptodev.c | 3 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 5 +-
4 files changed, 130 insertions(+), 7 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 813a2deb66..4f7b34cc21 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -76,6 +76,55 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
return NULL;
}
+static inline struct cnxk_ae_sess *
+cn10k_cpt_asym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
+{
+ struct rte_crypto_asym_op *asym_op = op->asym;
+ struct roc_cpt *roc_cpt = qp->lf.roc_cpt;
+ struct rte_cryptodev_asym_session *sess;
+ struct cnxk_ae_sess *priv;
+ struct cnxk_cpt_vf *vf;
+ union cpt_inst_w7 w7;
+ struct hw_ctx_s *hwc;
+
+ /* Create temporary session */
+ if (rte_mempool_get(qp->sess_mp, (void **)&sess) < 0)
+ return NULL;
+
+ priv = (struct cnxk_ae_sess *)sess;
+ if (cnxk_ae_fill_session_parameters(priv, asym_op->xform))
+ goto sess_put;
+
+ priv->lf = &qp->lf;
+
+ if (roc_errata_cpt_hang_on_mixed_ctx_val()) {
+ hwc = &priv->hw_ctx;
+ hwc->w0.s.aop_valid = 1;
+ hwc->w0.s.ctx_hdr_size = 0;
+ hwc->w0.s.ctx_size = 1;
+ hwc->w0.s.ctx_push_size = 1;
+
+ w7.s.ctx_val = 1;
+ w7.s.cptr = (uint64_t)hwc;
+ }
+
+ w7.u64 = 0;
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
+
+ vf = container_of(roc_cpt, struct cnxk_cpt_vf, cpt);
+ priv->cpt_inst_w7 = w7.u64;
+ priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
+ priv->ec_grp = vf->ec_grp;
+
+ asym_op->session = sess;
+
+ return priv;
+
+sess_put:
+ rte_mempool_put(qp->sess_mp, sess);
+ return NULL;
+}
+
static __rte_always_inline int __rte_hot
cpt_sec_ipsec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
@@ -177,7 +226,6 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
w7 = sess->cpt_inst_w7;
}
} else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
-
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
ae_sess = (struct cnxk_ae_sess *)asym_op->session;
@@ -186,9 +234,22 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct
return 0;
w7 = ae_sess->cpt_inst_w7;
} else {
- plt_dp_err("Not supported Asym op without session");
- return 0;
+ ae_sess = cn10k_cpt_asym_temp_sess_create(qp, op);
+ if (unlikely(ae_sess == NULL)) {
+ plt_dp_err("Could not create temp session");
+ return 0;
+ }
+
+ ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0], ae_sess);
+ if (unlikely(ret)) {
+ cnxk_ae_session_clear(NULL,
+ (struct rte_cryptodev_asym_session *)ae_sess);
+ rte_mempool_put(qp->sess_mp, ae_sess);
+ return 0;
+ }
+ w7 = ae_sess->cpt_inst_w7;
}
+
} else {
plt_dp_err("Unsupported op type");
return 0;
@@ -1145,6 +1206,11 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
rte_mempool_put(qp->sess_mp, cop->sym->session);
cop->sym->session = NULL;
}
+ if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ cnxk_ae_session_clear(NULL, cop->asym->session);
+ rte_mempool_put(qp->sess_mp, cop->asym->session);
+ cop->asym->session = NULL;
+ }
}
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index fa22b5ce44..570051518c 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -67,6 +67,43 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
return NULL;
}
+static inline struct cnxk_ae_sess *
+cn9k_cpt_asym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
+{
+ struct rte_crypto_asym_op *asym_op = op->asym;
+ struct roc_cpt *roc_cpt = qp->lf.roc_cpt;
+ struct rte_cryptodev_asym_session *sess;
+ struct cnxk_ae_sess *priv;
+ struct cnxk_cpt_vf *vf;
+ union cpt_inst_w7 w7;
+
+ /* Create temporary session */
+ if (rte_mempool_get(qp->sess_mp, (void **)&sess) < 0)
+ return NULL;
+
+ priv = (struct cnxk_ae_sess *)sess;
+ if (cnxk_ae_fill_session_parameters(priv, asym_op->xform))
+ goto sess_put;
+
+ priv->lf = &qp->lf;
+
+ w7.u64 = 0;
+ w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
+
+ vf = container_of(roc_cpt, struct cnxk_cpt_vf, cpt);
+ priv->cpt_inst_w7 = w7.u64;
+ priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
+ priv->ec_grp = vf->ec_grp;
+
+ asym_op->session = sess;
+
+ return priv;
+
+sess_put:
+ rte_mempool_put(qp->sess_mp, sess);
+ return NULL;
+}
+
static inline int
cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst)
@@ -106,7 +143,20 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
- ret = -EINVAL;
+ sess = cn9k_cpt_asym_temp_sess_create(qp, op);
+ if (unlikely(sess == NULL)) {
+ plt_dp_err("Could not create temp session");
+ return 0;
+ }
+
+ ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
+ if (unlikely(ret)) {
+ cnxk_ae_session_clear(NULL,
+ (struct rte_cryptodev_asym_session *)sess);
+ rte_mempool_put(qp->sess_mp, sess);
+ return 0;
+ }
+ inst->w7.u64 = sess->cpt_inst_w7;
}
} else {
ret = -EINVAL;
@@ -607,6 +657,11 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
rte_mempool_put(qp->sess_mp, cop->sym->session);
cop->sym->session = NULL;
}
+ if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ cnxk_ae_session_clear(NULL, cop->asym->session);
+ rte_mempool_put(qp->sess_mp, cop->asym->session);
+ cop->asym->session = NULL;
+ }
}
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.c b/drivers/crypto/cnxk/cnxk_cryptodev.c
index 96b5121097..5828a502e4 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.c
@@ -19,7 +19,8 @@ cnxk_cpt_default_ff_get(void)
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_IN_PLACE_SGL |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | RTE_CRYPTODEV_FF_SECURITY;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | RTE_CRYPTODEV_FF_SECURITY |
+ RTE_CRYPTODEV_FF_ASYM_SESSIONLESS;
if (roc_model_is_cn10k() || roc_model_is_cn20k())
ff |= RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM | RTE_CRYPTODEV_FF_SYM_RAW_DP;
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index e5ca082e10..261e14b418 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -866,7 +866,8 @@ cnxk_ae_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
void
-cnxk_ae_session_clear(struct rte_cryptodev *dev, struct rte_cryptodev_asym_session *sess)
+cnxk_ae_session_clear(struct rte_cryptodev *dev __rte_unused,
+ struct rte_cryptodev_asym_session *sess)
{
struct cnxk_ae_sess *priv = (struct cnxk_ae_sess *)sess;
@@ -878,7 +879,7 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev, struct rte_cryptodev_asym_sessi
cnxk_ae_free_session_parameters(priv);
/* Reset and free object back to pool */
- memset(priv, 0, cnxk_ae_session_size_get(dev));
+ memset(priv, 0, sizeof(struct cnxk_ae_sess));
}
int
--
2.25.1
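For context, a minimal application-side sketch of how the sessionless asymmetric path added above could be exercised. This is an illustration only, not part of the patch: the function name, the modexp xform/parameter arguments and the error values are placeholders, and device, queue pair and op mempool setup is assumed to have been done beforehand with the usual rte_cryptodev APIs.

/*
 * Minimal sketch (illustration only): enqueue one sessionless
 * modular-exponentiation operation. With no session attached, the PMD
 * path above builds a temporary asym session on enqueue and clears and
 * frees it again in dequeue post-processing.
 */
#include <errno.h>

#include <rte_crypto.h>
#include <rte_crypto_asym.h>
#include <rte_cryptodev.h>
#include <rte_mempool.h>

static int
enqueue_sessionless_modex(uint8_t dev_id, uint16_t qp_id,
			  struct rte_mempool *op_pool,
			  struct rte_crypto_asym_xform *modex_xform,
			  const struct rte_crypto_mod_op_param *modex_param)
{
	struct rte_crypto_op *op;

	op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
	if (op == NULL)
		return -ENOMEM;

	/* Sessionless: pass the xform chain directly in the op. */
	op->sess_type = RTE_CRYPTO_OP_SESSIONLESS;
	op->asym->xform = modex_xform;
	op->asym->modex = *modex_param;

	if (rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1) != 1) {
		rte_crypto_op_free(op);
		return -EBUSY;
	}

	return 0;
}

Before taking this path, an application would typically confirm that
RTE_CRYPTODEV_FF_ASYM_SESSIONLESS is set in the feature_flags reported by
rte_cryptodev_info_get(), which the cnxk_cpt_default_ff_get() change above
now advertises.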
* [PATCH 38/40] doc: update CN20K CPT documentation
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (36 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 37/40] crypto/cnxk: add support for sessionless asym Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 39/40] common/cnxk: update qsize in CPT iq enable Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 40/40] crypto/cnxk: copy 8B iv into sess in aes ctr Tejasree Kondoj
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Anoob Joseph, Aakash Sasidharan, Nithinsen Kaithakadan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
Updating documentation for CN20K CPT support.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
doc/guides/cryptodevs/cnxk.rst | 26 +++++-
doc/guides/cryptodevs/features/cn20k.ini | 113 +++++++++++++++++++++++
2 files changed, 134 insertions(+), 5 deletions(-)
create mode 100644 doc/guides/cryptodevs/features/cn20k.ini
diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index ac843ddc53..1799161fdf 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -9,8 +9,8 @@ cryptographic operations to cryptographic accelerator units on the
**Marvell OCTEON cnxk** SoC family.
The cnxk crypto PMD code is organized into different sets of files.
-The file names starting with cn9k and cn10k provides support for CN9XX
-and CN10XX respectively. The common code between the SoCs is present
+The file names starting with cn9k, cn10k and cn20k provide support for CN9XX,
+CN10XX and CN20XX respectively. The common code between the SoCs is present
in file names starting with cnxk.
More information about OCTEON cnxk SoCs may be obtained from `<https://www.marvell.com>`_
@@ -20,6 +20,7 @@ Supported OCTEON cnxk SoCs
- CN9XX
- CN10XX
+- CN20XX
Features
--------
@@ -144,7 +145,7 @@ Bind the CPT VF device to the vfio_pci driver:
Refer to :ref:`linux_gsg_hugepages` for more details.
-``CN10K Initialization``
+``CN10K/CN20K Initialization``
List the CPT PF devices available on cn10k platform:
@@ -232,6 +233,13 @@ running the test application:
./dpdk-test
RTE>>cryptodev_cn10k_autotest
+``CN20K``
+
+.. code-block:: console
+
+ ./dpdk-test
+ RTE>>cryptodev_cn20k_autotest
+
The asymmetric crypto operations on OCTEON cnxk crypto PMD may be verified by
running the test application:
@@ -249,6 +257,13 @@ running the test application:
./dpdk-test
RTE>>cryptodev_cn10k_asym_autotest
+``CN20K``
+
+.. code-block:: console
+
+ ./dpdk-test
+ RTE>>cryptodev_cn20k_asym_autotest
+
Lookaside IPsec Support
-----------------------
@@ -265,6 +280,7 @@ Supported OCTEON cnxk SoCs
- CN9XX
- CN10XX
+- CN20XX
CN9XX Features supported
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -301,8 +317,8 @@ Auth algorithms
* AES-XCBC-96
* AES-GMAC
-CN10XX Features supported
-~~~~~~~~~~~~~~~~~~~~~~~~~
+CN10XX/CN20XX Features supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* IPv4
* ESP
diff --git a/doc/guides/cryptodevs/features/cn20k.ini b/doc/guides/cryptodevs/features/cn20k.ini
new file mode 100644
index 0000000000..76553d190e
--- /dev/null
+++ b/doc/guides/cryptodevs/features/cn20k.ini
@@ -0,0 +1,113 @@
+;
+; Supported features of the 'cn20k' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Asymmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+In Place SGL = Y
+OOP SGL In LB Out = Y
+OOP SGL In SGL Out = Y
+OOP LB In LB Out = Y
+Symmetric sessionless = Y
+RSA PRIV OP KEY EXP = Y
+RSA PRIV OP KEY QT = Y
+Digest encrypted = Y
+Sym raw data path API = Y
+Inner checksum = Y
+Rx inject = Y
+
+;
+; Supported crypto algorithms of 'cn20k' crypto driver.
+;
+[Cipher]
+NULL = Y
+3DES CBC = Y
+3DES ECB = Y
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+DES CBC = Y
+KASUMI F8 = Y
+SNOW3G UEA2 = Y
+ZUC EEA3 = Y
+SM4 ECB = Y
+SM4 CBC = Y
+SM4 CTR = Y
+SM4 CFB = Y
+SM4 OFB = Y
+
+;
+; Supported authentication algorithms of 'cn20k' crypto driver.
+;
+[Auth]
+NULL = Y
+AES GMAC = Y
+KASUMI F9 = Y
+MD5 = Y
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+SNOW3G UIA2 = Y
+ZUC EIA3 = Y
+AES CMAC (128) = Y
+AES CMAC (192) = Y
+AES CMAC (256) = Y
+SHA3_224 = Y
+SHA3_224 HMAC = Y
+SHA3_256 = Y
+SHA3_256 HMAC = Y
+SHA3_384 = Y
+SHA3_384 HMAC = Y
+SHA3_512 = Y
+SHA3_512 HMAC = Y
+SHAKE_128 = Y
+SHAKE_256 = Y
+SM3 = Y
+
+;
+; Supported AEAD algorithms of 'cn20k' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
+CHACHA20-POLY1305 = Y
+
+;
+; Supported Asymmetric algorithms of the 'cn20k' crypto driver.
+;
+[Asymmetric]
+RSA = Y
+Modular Exponentiation = Y
+ECDH = Y
+ECDSA = Y
+ECPM = Y
+SM2 = Y
+EdDSA = Y
+
+;
+; Supported Operating systems of the 'cn20k' crypto driver.
+;
+[OS]
+Linux = Y
--
2.25.1
* [PATCH 39/40] common/cnxk: update qsize in CPT iq enable
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (37 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 38/40] doc: update CN20K CPT documentation Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
2025-05-23 13:51 ` [PATCH 40/40] crypto/cnxk: copy 8B iv into sess in aes ctr Tejasree Kondoj
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Nithinsen Kaithakadan, Anoob Joseph, Aakash Sasidharan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
From: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
Reconfigure qsize in each CPT iq enable call.
Fixes: 3bf87839559 ("common/cnxk: move instruction queue enable to ROC")
Signed-off-by: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
---
drivers/common/cnxk/roc_cpt.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index b4bf0ccd64..d1ba2b8858 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -1125,9 +1125,14 @@ roc_cpt_iq_disable(struct roc_cpt_lf *lf)
void
roc_cpt_iq_enable(struct roc_cpt_lf *lf)
{
+ union cpt_lf_q_size lf_q_size;
union cpt_lf_inprog lf_inprog;
union cpt_lf_ctl lf_ctl;
+ /* Reconfigure the QSIZE register to ensure NQ_PTR and DQ_PTR are reset */
+ lf_q_size.u = plt_read64(lf->rbase + CPT_LF_Q_SIZE);
+ plt_write64(lf_q_size.u, lf->rbase + CPT_LF_Q_SIZE);
+
/* Disable command queue */
roc_cpt_iq_disable(lf);
--
2.25.1
* [PATCH 40/40] crypto/cnxk: copy 8B iv into sess in aes ctr
2025-05-23 13:50 [PATCH 00/40] fixes and new features to cnxk crypto PMD Tejasree Kondoj
` (38 preceding siblings ...)
2025-05-23 13:51 ` [PATCH 39/40] common/cnxk: update qsize in CPT iq enable Tejasree Kondoj
@ 2025-05-23 13:51 ` Tejasree Kondoj
39 siblings, 0 replies; 41+ messages in thread
From: Tejasree Kondoj @ 2025-05-23 13:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: Nithinsen Kaithakadan, Anoob Joseph, Aakash Sasidharan,
Rupesh Chiluka, Vidya Sagar Velumuri, dev
From: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
Copy 8 bytes of the IV into the iv field within the
session for the AES CTR algorithm.
Signed-off-by: Nithinsen Kaithakadan <nkaithakadan@marvell.com>
---
drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 7 ++++---
drivers/crypto/cnxk/cn20k_ipsec_la_ops.h | 7 ++++---
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index 0cc6283c7e..b9122a509a 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -32,7 +32,7 @@ ipsec_po_sa_iv_set(struct cn10k_sec_session *sess, struct rte_crypto_op *cop)
}
static inline void
-ipsec_po_sa_aes_gcm_iv_set(struct cn10k_sec_session *sess, struct rte_crypto_op *cop)
+ipsec_po_sa_aes_8b_iv_set(struct cn10k_sec_session *sess, struct rte_crypto_op *cop)
{
uint8_t *iv = &sess->sa.out_sa.iv.s.iv_dbg1[0];
uint32_t *tmp_iv;
@@ -63,8 +63,9 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
if (sess->sa.out_sa.w2.s.iv_src == ROC_IE_OT_SA_IV_SRC_FROM_SA) {
if (sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_GCM ||
sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_CCM ||
- sess->sa.out_sa.w2.s.auth_type == ROC_IE_SA_AUTH_AES_GMAC)
- ipsec_po_sa_aes_gcm_iv_set(sess, cop);
+ sess->sa.out_sa.w2.s.auth_type == ROC_IE_SA_AUTH_AES_GMAC ||
+ sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_CTR)
+ ipsec_po_sa_aes_8b_iv_set(sess, cop);
else
ipsec_po_sa_iv_set(sess, cop);
}
diff --git a/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
index 505fddb517..2f860c1855 100644
--- a/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn20k_ipsec_la_ops.h
@@ -31,7 +31,7 @@ ipsec_po_sa_iv_set(struct cn20k_sec_session *sess, struct rte_crypto_op *cop)
}
static inline void
-ipsec_po_sa_aes_gcm_iv_set(struct cn20k_sec_session *sess, struct rte_crypto_op *cop)
+ipsec_po_sa_aes_8b_iv_set(struct cn20k_sec_session *sess, struct rte_crypto_op *cop)
{
uint8_t *iv = &sess->sa.out_sa.iv.s.iv_dbg1[0];
uint32_t *tmp_iv;
@@ -62,8 +62,9 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn20k_s
if (sess->sa.out_sa.w2.s.iv_src == ROC_IE_OW_SA_IV_SRC_FROM_SA) {
if (sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_GCM ||
sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_CCM ||
- sess->sa.out_sa.w2.s.auth_type == ROC_IE_SA_AUTH_AES_GMAC)
- ipsec_po_sa_aes_gcm_iv_set(sess, cop);
+ sess->sa.out_sa.w2.s.auth_type == ROC_IE_SA_AUTH_AES_GMAC ||
+ sess->sa.out_sa.w2.s.enc_type == ROC_IE_SA_ENC_AES_CTR)
+ ipsec_po_sa_aes_8b_iv_set(sess, cop);
else
ipsec_po_sa_iv_set(sess, cop);
}
--
2.25.1
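To make the commit message concrete, here is a hedged sketch of an 8-byte IV copy. The body of ipsec_po_sa_aes_8b_iv_set() is not shown in this hunk, so the destination pointer, the iv_offset parameter and the function name below are placeholders rather than driver internals; the real setter writes into the SA iv_dbg words referenced above. The point is that for IPsec ESP with AES-CTR (as with AES-GCM/CCM), the per-packet IV is 8 bytes and the remaining salt/nonce comes from the key material, which is why the same 8-byte setter now covers CTR as well.

/*
 * Illustrative sketch only: copy the first 8 bytes of the per-op IV into
 * an SA IV field. Destination pointer, iv_offset and the function name
 * are placeholders for the driver-internal equivalents.
 */
#include <stdint.h>
#include <string.h>

#include <rte_crypto.h>

static inline void
sa_aes_ctr_iv8_copy(uint8_t *sa_iv, struct rte_crypto_op *cop, uint16_t iv_offset)
{
	const uint8_t *iv = rte_crypto_op_ctod_offset(cop, const uint8_t *, iv_offset);

	/* Only the 8-byte per-packet IV is copied; the salt is kept separately. */
	memcpy(sa_iv, iv, 8);
}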