* [PATCH 1/6] cryptodev: move RSA padding information into xform
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 2/6] cryptodev: fix RSA xform for ASN.1 syntax Gowrishankar Muthukrishnan
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang, Anoob Joseph, Ankur Dwivedi,
Tejasree Kondoj, Kai Ji, Brian Dooley,
Gowrishankar Muthukrishnan
Cc: bruce.richardson, ciara.power, jerinj, arkadiuszx.kusztal,
jack.bond-preston, david.marchand, hemant.agrawal,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
ruifeng.wang, abhinandan.gujjar, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, lee.daly
RSA padding information is better carried in the xform than in the
crypto op: the padding scheme, and the hash algorithm it uses for the
message digest, apply to the whole crypto session rather than to an
individual operation. The virtio specification likewise associates
this information with asymmetric session creation. Hence, move it
from the crypto op into the xform structure.
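For illustration (not part of this patch), a minimal sketch of how an
application sets the padding once at session creation after this
change. The helper and its parameter plumbing are hypothetical; the
xform fields and API call match the diffs below:

#include <rte_crypto_asym.h>
#include <rte_cryptodev.h>

/* Hypothetical helper: the padding type is now fixed in the xform at
 * session creation instead of being set on every crypto op.
 */
static void *
create_rsa_session(uint8_t dev_id, struct rte_mempool *asym_sess_pool,
		uint8_t *rsa_n, size_t n_len, uint8_t *rsa_e, size_t e_len)
{
	struct rte_crypto_asym_xform xform = {
		.next = NULL,
		.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
		.rsa = {
			/* Previously set per op: asym_op->rsa.padding.type */
			.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
			.n = { .data = rsa_n, .length = n_len },
			.e = { .data = rsa_e, .length = e_len },
		},
	};
	void *sess = NULL;

	/* The padding information travels with the session from here on. */
	if (rte_cryptodev_asym_session_create(dev_id, &xform,
			asym_sess_pool, &sess) < 0)
		return NULL;
	return sess;
}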
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev_asym.c | 4 --
app/test/test_cryptodev_rsa_test_vectors.h | 2 +
drivers/common/cpt/cpt_ucode_asym.h | 4 +-
drivers/crypto/cnxk/cnxk_ae.h | 13 +++--
drivers/crypto/octeontx/otx_cryptodev_ops.c | 4 +-
drivers/crypto/openssl/openssl_pmd_private.h | 1 +
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 1 +
drivers/crypto/qat/qat_asym.c | 17 ++++---
examples/fips_validation/main.c | 52 +++++++++++---------
lib/cryptodev/rte_crypto_asym.h | 6 +--
11 files changed, 58 insertions(+), 50 deletions(-)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f0b5d38543..0928367cb0 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -80,7 +80,6 @@ queue_ops_rsa_sign_verify(void *sess)
asym_op->rsa.message.length = rsaplaintext.len;
asym_op->rsa.sign.length = RTE_DIM(rsa_n);
asym_op->rsa.sign.data = output_buf;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
debug_hexdump(stdout, "message", asym_op->rsa.message.data,
asym_op->rsa.message.length);
@@ -112,7 +111,6 @@ queue_ops_rsa_sign_verify(void *sess)
/* Verify sign */
asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
/* Process crypto operation */
if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
@@ -171,7 +169,6 @@ queue_ops_rsa_enc_dec(void *sess)
asym_op->rsa.cipher.data = cipher_buf;
asym_op->rsa.cipher.length = RTE_DIM(rsa_n);
asym_op->rsa.message.length = rsaplaintext.len;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
debug_hexdump(stdout, "message", asym_op->rsa.message.data,
asym_op->rsa.message.length);
@@ -203,7 +200,6 @@ queue_ops_rsa_enc_dec(void *sess)
asym_op = result_op->asym;
asym_op->rsa.message.length = RTE_DIM(rsa_n);
asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
/* Process crypto operation */
if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 89981f13f0..1b7b451387 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -345,6 +345,7 @@ struct rte_crypto_asym_xform rsa_xform = {
.next = NULL,
.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
.rsa = {
+ .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
.n = {
.data = rsa_n,
.length = sizeof(rsa_n)
@@ -366,6 +367,7 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
.next = NULL,
.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
.rsa = {
+ .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
.n = {
.data = rsa_n,
.length = sizeof(rsa_n)
diff --git a/drivers/common/cpt/cpt_ucode_asym.h b/drivers/common/cpt/cpt_ucode_asym.h
index e1034bbeb4..5122378ec7 100644
--- a/drivers/common/cpt/cpt_ucode_asym.h
+++ b/drivers/common/cpt/cpt_ucode_asym.h
@@ -327,7 +327,7 @@ cpt_rsa_prep(struct asym_op_params *rsa_params,
/* Result buffer */
rlen = mod_len;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX;
vq_cmd_w0.s.param2 = exp_len;
@@ -412,7 +412,7 @@ cpt_rsa_crt_prep(struct asym_op_params *rsa_params,
/* Result buffer */
rlen = mod_len;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/*Use mod_exp operation for no_padding type */
vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX_CRT;
} else {
diff --git a/drivers/crypto/cnxk/cnxk_ae.h b/drivers/crypto/cnxk/cnxk_ae.h
index ef9cb5eb91..1bb5a450c5 100644
--- a/drivers/crypto/cnxk/cnxk_ae.h
+++ b/drivers/crypto/cnxk/cnxk_ae.h
@@ -177,6 +177,9 @@ cnxk_ae_fill_rsa_params(struct cnxk_ae_sess *sess,
rsa->n.length = mod_len;
rsa->e.length = exp_len;
+ /* Set padding info */
+ rsa->padding.type = xform->rsa.padding.type;
+
return 0;
}
@@ -366,7 +369,7 @@ cnxk_ae_rsa_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = total_key_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX;
w4.s.param2 = exp_len;
@@ -421,7 +424,7 @@ cnxk_ae_rsa_exp_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = mod_len + privkey_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX;
w4.s.param2 = privkey_len;
@@ -479,7 +482,7 @@ cnxk_ae_rsa_crt_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = total_key_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/*Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX_CRT;
} else {
@@ -1151,7 +1154,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr,
memcpy(rsa->cipher.data, rptr, rsa->cipher.length);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
rsa->message.length = rsa_ctx->n.length;
memcpy(rsa->message.data, rptr, rsa->message.length);
} else {
@@ -1171,7 +1174,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr,
memcpy(rsa->sign.data, rptr, rsa->sign.length);
break;
case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
rsa->sign.length = rsa_ctx->n.length;
memcpy(rsa->sign.data, rptr, rsa->sign.length);
} else {
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index bafd0c1669..9a758cd297 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -708,7 +708,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
rsa->message.length = rsa_ctx->n.length;
else {
/* Get length of decrypted output */
@@ -725,7 +725,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
break;
case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
rsa->sign.length = rsa_ctx->n.length;
else {
/* Get length of decrypted output */
diff --git a/drivers/crypto/openssl/openssl_pmd_private.h b/drivers/crypto/openssl/openssl_pmd_private.h
index a50e4d4918..4caf1ebcb5 100644
--- a/drivers/crypto/openssl/openssl_pmd_private.h
+++ b/drivers/crypto/openssl/openssl_pmd_private.h
@@ -197,6 +197,7 @@ struct __rte_cache_aligned openssl_asym_session {
union {
struct rsa {
RSA *rsa;
+ uint32_t pad;
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
EVP_PKEY_CTX * ctx;
#endif
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 101111e85b..8877fd5d92 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -2699,7 +2699,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
struct openssl_asym_session *sess)
{
struct rte_crypto_asym_op *op = cop->asym;
- uint32_t pad = (op->rsa.padding.type);
+ uint32_t pad = sess->u.r.pad;
uint8_t *tmp;
size_t outlen = 0;
int ret = -1;
@@ -3082,7 +3082,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
int ret = 0;
struct rte_crypto_asym_op *op = cop->asym;
RSA *rsa = sess->u.r.rsa;
- uint32_t pad = (op->rsa.padding.type);
+ uint32_t pad = sess->u.r.pad;
uint8_t *tmp;
cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 1bbb855a59..4c47c3e7dd 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -889,6 +889,7 @@ static int openssl_set_asym_session_parameters(
if (!n || !e)
goto err_rsa;
+ asym_session->u.r.pad = xform->rsa.padding.type;
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 491f5ecd5b..020c8fac5f 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -362,7 +362,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req,
alg_bytesize = qat_function.bytesize;
if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0);
break;
@@ -374,7 +374,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req,
}
HEXDUMP("RSA Message", cookie->input_array[0], alg_bytesize);
} else {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.sign, alg_bytesize, 0);
break;
@@ -460,7 +460,7 @@ rsa_set_priv_input(struct icp_qat_fw_pke_request *qat_req,
if (asym_op->rsa.op_type ==
RTE_CRYPTO_ASYM_OP_DECRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.cipher, alg_bytesize, 0);
HEXDUMP("RSA ciphertext", cookie->input_array[0],
@@ -474,7 +474,7 @@ rsa_set_priv_input(struct icp_qat_fw_pke_request *qat_req,
} else if (asym_op->rsa.op_type ==
RTE_CRYPTO_ASYM_OP_SIGN) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0);
HEXDUMP("RSA text to be signed", cookie->input_array[0],
@@ -514,7 +514,8 @@ rsa_set_input(struct icp_qat_fw_pke_request *qat_req,
static uint8_t
rsa_collect(struct rte_crypto_asym_op *asym_op,
- const struct qat_asym_op_cookie *cookie)
+ const struct qat_asym_op_cookie *cookie,
+ const struct rte_crypto_asym_xform *xform)
{
uint32_t alg_bytesize = cookie->alg_bytesize;
@@ -530,7 +531,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op,
HEXDUMP("RSA Encrypted data", cookie->output_array[0],
alg_bytesize);
} else {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
rte_memcpy(asym_op->rsa.cipher.data,
cookie->output_array[0],
@@ -547,7 +548,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op,
}
} else {
if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
rte_memcpy(asym_op->rsa.message.data,
cookie->output_array[0],
@@ -1105,7 +1106,7 @@ qat_asym_collect_response(struct rte_crypto_op *op,
case RTE_CRYPTO_ASYM_XFORM_MODINV:
return modinv_collect(asym_op, cookie, xform);
case RTE_CRYPTO_ASYM_XFORM_RSA:
- return rsa_collect(asym_op, cookie);
+ return rsa_collect(asym_op, cookie, xform);
case RTE_CRYPTO_ASYM_XFORM_ECDSA:
return ecdsa_collect(asym_op, cookie);
case RTE_CRYPTO_ASYM_XFORM_ECPM:
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index 7ae2c6c007..c7a78b41de 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -926,31 +926,7 @@ prepare_rsa_op(void)
__rte_crypto_op_reset(env.op, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
asym = env.op->asym;
- asym->rsa.padding.type = info.interim_info.rsa_data.padding;
- asym->rsa.padding.hash = info.interim_info.rsa_data.auth;
-
if (env.digest) {
- if (asym->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
- int b_len = 0;
- uint8_t b[32];
-
- b_len = get_hash_oid(asym->rsa.padding.hash, b);
- if (b_len < 0) {
- RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n",
- asym->rsa.padding.hash);
- return -EINVAL;
- }
-
- if (b_len) {
- msg.len = env.digest_len + b_len;
- msg.val = rte_zmalloc(NULL, msg.len, 0);
- rte_memcpy(msg.val, b, b_len);
- rte_memcpy(msg.val + b_len, env.digest, env.digest_len);
- rte_free(env.digest);
- env.digest = msg.val;
- env.digest_len = msg.len;
- }
- }
msg.val = env.digest;
msg.len = env.digest_len;
} else {
@@ -1536,6 +1512,34 @@ prepare_rsa_xform(struct rte_crypto_asym_xform *xform)
xform->rsa.e.length = vec.rsa.e.len;
xform->rsa.n.data = vec.rsa.n.val;
xform->rsa.n.length = vec.rsa.n.len;
+
+ xform->rsa.padding.type = info.interim_info.rsa_data.padding;
+ xform->rsa.padding.hash = info.interim_info.rsa_data.auth;
+ if (env.digest) {
+ if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ struct fips_val msg;
+ int b_len = 0;
+ uint8_t b[32];
+
+ b_len = get_hash_oid(xform->rsa.padding.hash, b);
+ if (b_len < 0) {
+ RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n",
+ xform->rsa.padding.hash);
+ return -EINVAL;
+ }
+
+ if (b_len) {
+ msg.len = env.digest_len + b_len;
+ msg.val = rte_zmalloc(NULL, msg.len, 0);
+ rte_memcpy(msg.val, b, b_len);
+ rte_memcpy(msg.val + b_len, env.digest, env.digest_len);
+ rte_free(env.digest);
+ env.digest = msg.val;
+ env.digest_len = msg.len;
+ }
+ }
+ }
+
return 0;
}
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 39d3da3952..64d8a42f6a 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -312,6 +312,9 @@ struct rte_crypto_rsa_xform {
struct rte_crypto_rsa_priv_key_qt qt;
/**< qt - Private key in quintuple format */
};
+
+ struct rte_crypto_rsa_padding padding;
+ /**< RSA padding information */
};
/**
@@ -447,9 +450,6 @@ struct rte_crypto_rsa_op_param {
* This could be validated and overwritten by the PMD
* with the signature length.
*/
-
- struct rte_crypto_rsa_padding padding;
- /**< RSA padding information */
};
/**
--
2.21.0
* [PATCH 2/6] cryptodev: fix RSA xform for ASN.1 syntax
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 1/6] cryptodev: move RSA padding information into xform Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 3/6] vhost: add asymmetric RSA support Gowrishankar Muthukrishnan
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, bruce.richardson, ciara.power, jerinj,
arkadiuszx.kusztal, kai.ji, jack.bond-preston, david.marchand,
hemant.agrawal, pablo.de.lara.guarch, fiona.trahe,
declan.doherty, matan, ruifeng.wang, abhinandan.gujjar,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, lee.daly, Brian Dooley,
Gowrishankar Muthukrishnan
As per ASN.1 syntax (RFC 3447 Appendix A.1.2), an RSA private key may
specify the quintuple (p, q, dP, dQ, qInv) along with the private
exponent d. How the two representations are handled internally is up
to the implementation; RTE itself should not force them to be mutually
exclusive. Removing the union between them allows the asymmetric
implementation in virtio to consume the xform exactly as the ASN.1
syntax lays it out.
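A sketch of what the non-exclusive layout permits (buffers are assumed
to be already decoded from DER; the helper name is made up):

#include <rte_crypto_asym.h>

/* Sketch only: with the union turned into a struct, a decoded ASN.1
 * RSAPrivateKey can carry both the private exponent and the CRT
 * quintuple at once, as the vhost DER parser later in this series
 * does. key_type still tells the PMD which form to use.
 */
static void
fill_rsa_priv_key(struct rte_crypto_asym_xform *xform,
		rte_crypto_uint d, struct rte_crypto_rsa_priv_key_qt qt)
{
	xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
	xform->rsa.d = d;	/* private exponent */
	xform->rsa.qt = qt;	/* p, q, dP, dQ, qInv */
	/* No longer mutually exclusive: both members now coexist. */
	xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
}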
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/cryptodev/rte_crypto_asym.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 64d8a42f6a..398b6514e3 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -306,7 +306,7 @@ struct rte_crypto_rsa_xform {
enum rte_crypto_rsa_priv_key_type key_type;
- union {
+ struct {
rte_crypto_uint d;
/**< the RSA private exponent */
struct rte_crypto_rsa_priv_key_qt qt;
--
2.21.0
* [PATCH 3/6] vhost: add asymmetric RSA support
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 1/6] cryptodev: move RSA padding information into xform Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 2/6] cryptodev: fix RSA xform for ASN.1 syntax Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 4/6] crypto/virtio: " Gowrishankar Muthukrishnan
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang, Maxime Coquelin, Chenbo Xia
Cc: Anoob Joseph, bruce.richardson, ciara.power, jerinj,
arkadiuszx.kusztal, kai.ji, jack.bond-preston, david.marchand,
hemant.agrawal, pablo.de.lara.guarch, fiona.trahe,
declan.doherty, matan, ruifeng.wang, abhinandan.gujjar,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, lee.daly, Brian Dooley, Gowrishankar Muthukrishnan
Support asymmetric RSA crypto operations in vhost-user.
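A rough sketch of the API surface changed by this patch (setup, error
handling and the cryptodev enqueue/dequeue loop are omitted; the
function names and the callfds sizing are made up for illustration):

#include <rte_mempool.h>
#include <rte_vhost_crypto.h>

static int
setup_vhost_crypto(int vid, uint8_t cryptodev_id, int socket_id,
		struct rte_mempool *sym_pool, struct rte_mempool *asym_pool)
{
	/* Now takes separate symmetric and asymmetric session pools. */
	return rte_vhost_crypto_create(vid, cryptodev_id,
			sym_pool, asym_pool, socket_id);
}

static void
complete_vhost_crypto(int vid, int qid,
		struct rte_crypto_op **ops, uint16_t nb_ops)
{
	int callfds[8];	/* sized arbitrarily for the sketch */
	uint16_t n_callfds;

	/* Now also identifies the device and virtqueue, apparently
	 * because asymmetric ops carry no mbuf from which the
	 * virtqueue could otherwise be recovered.
	 */
	rte_vhost_crypto_finalize_requests(vid, qid, ops, nb_ops,
			callfds, &n_callfds);
}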
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/cryptodev/cryptodev_pmd.h | 6 +
lib/vhost/rte_vhost_crypto.h | 14 +-
lib/vhost/vhost.c | 11 +-
lib/vhost/vhost.h | 1 +
lib/vhost/vhost_crypto.c | 550 +++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.c | 4 +
lib/vhost/vhost_user.h | 34 ++-
lib/vhost/virtio_crypto.h | 87 +++++-
8 files changed, 654 insertions(+), 53 deletions(-)
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 6c114f7181..ff3756723d 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -697,6 +697,12 @@ struct rte_cryptodev_asym_session {
uint8_t sess_private_data[];
};
+/**
+ * Helper macro to get session private data
+ */
+#define CRYPTODEV_GET_ASYM_SESS_PRIV(s) \
+ ((void *)(((struct rte_cryptodev_asym_session *)s)->sess_private_data))
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
index f962a53818..274f737990 100644
--- a/lib/vhost/rte_vhost_crypto.h
+++ b/lib/vhost/rte_vhost_crypto.h
@@ -49,8 +49,10 @@ rte_vhost_crypto_driver_start(const char *path);
* @param cryptodev_id
* The identifier of DPDK Cryptodev, the same cryptodev_id can be assigned to
* multiple Vhost-crypto devices.
- * @param sess_pool
- * The pointer to the created cryptodev session pool.
+ * @param sym_sess_pool
+ * The pointer to the created cryptodev sym session pool.
+ * @param asym_sess_pool
+ * The pointer to the created cryptodev asym session pool.
* @param socket_id
* NUMA Socket ID to allocate resources on. *
* @return
@@ -59,7 +61,7 @@ rte_vhost_crypto_driver_start(const char *path);
*/
int
rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
- struct rte_mempool *sess_pool,
+ struct rte_mempool *sym_sess_pool, struct rte_mempool *asym_sess_pool,
int socket_id);
/**
@@ -113,6 +115,10 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
* dequeued from the cryptodev, this function shall be called to write the
* processed data back to the vring descriptor (if no-copy is turned off).
*
+ * @param vid
+ * The identifier of the vhost device.
+ * @param qid
+ * Virtio queue index.
* @param ops
* The address of an array of *rte_crypto_op* structure that was dequeued
* from cryptodev.
@@ -127,7 +133,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
* The number of ops processed.
*/
uint16_t
-rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops,
+rte_vhost_crypto_finalize_requests(int vid, int qid, struct rte_crypto_op **ops,
uint16_t nb_ops, int *callfds, uint16_t *nb_callfds);
#ifdef __cplusplus
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index ac71d17784..c9048e4f24 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -636,8 +636,12 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
/* Also allocate holes, if any, up to requested vring index. */
for (i = 0; i <= vring_idx; i++) {
- if (dev->virtqueue[i])
+ rte_spinlock_lock(&dev->virtqueue_lock);
+ if (dev->virtqueue[i]) {
+ rte_spinlock_unlock(&dev->virtqueue_lock);
continue;
+ }
+ rte_spinlock_unlock(&dev->virtqueue_lock);
vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0);
if (vq == NULL) {
@@ -647,13 +651,15 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
return -1;
}
- dev->virtqueue[i] = vq;
init_vring_queue(dev, vq, i);
rte_rwlock_init(&vq->access_lock);
rte_rwlock_init(&vq->iotlb_lock);
vq->avail_wrap_counter = 1;
vq->used_wrap_counter = 1;
vq->signalled_used_valid = false;
+ rte_spinlock_lock(&dev->virtqueue_lock);
+ dev->virtqueue[i] = vq;
+ rte_spinlock_unlock(&dev->virtqueue_lock);
}
dev->nr_vring = RTE_MAX(dev->nr_vring, vring_idx + 1);
@@ -740,6 +746,7 @@ vhost_new_device(struct vhost_backend_ops *ops)
dev->postcopy_ufd = -1;
rte_spinlock_init(&dev->backend_req_lock);
dev->backend_ops = ops;
+ rte_spinlock_init(&dev->virtqueue_lock);
return i;
}
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index cd3fa55f1b..ad33948e8b 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -494,6 +494,7 @@ struct __rte_cache_aligned virtio_net {
int extbuf;
int linearbuf;
+ rte_spinlock_t virtqueue_lock;
struct vhost_virtqueue *virtqueue[VHOST_MAX_QUEUE_PAIRS * 2];
rte_rwlock_t iotlb_pending_lock;
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 7caf6d9afa..f98fa669ea 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO);
*/
#define vhost_crypto_desc vring_desc
+struct vhost_crypto_session {
+ union {
+ struct rte_cryptodev_asym_session *asym;
+ struct rte_cryptodev_sym_session *sym;
+ };
+ enum rte_crypto_op_type type;
+};
+
static int
cipher_algo_transform(uint32_t virtio_cipher_algo,
enum rte_crypto_cipher_algorithm *algo)
@@ -197,7 +205,8 @@ struct __rte_cache_aligned vhost_crypto {
*/
struct rte_hash *session_map;
struct rte_mempool *mbuf_pool;
- struct rte_mempool *sess_pool;
+ struct rte_mempool *sym_sess_pool;
+ struct rte_mempool *asym_sess_pool;
struct rte_mempool *wb_pool;
/** DPDK cryptodev ID */
@@ -206,8 +215,10 @@ struct __rte_cache_aligned vhost_crypto {
uint64_t last_session_id;
- uint64_t cache_session_id;
- struct rte_cryptodev_sym_session *cache_session;
+ uint64_t cache_sym_session_id;
+ void *cache_sym_session;
+ uint64_t cache_asym_session_id;
+ void *cache_asym_session;
/** socket id for the device */
int socket_id;
@@ -237,7 +248,7 @@ struct vhost_crypto_data_req {
static int
transform_cipher_param(struct rte_crypto_sym_xform *xform,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
int ret;
@@ -273,7 +284,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform,
static int
transform_chain_param(struct rte_crypto_sym_xform *xforms,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
struct rte_crypto_sym_xform *xform_cipher, *xform_auth;
int ret;
@@ -334,17 +345,17 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms,
}
static void
-vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto,
VhostUserCryptoSessionParam *sess_param)
{
struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
struct rte_cryptodev_sym_session *session;
int ret;
- switch (sess_param->op_type) {
+ switch (sess_param->u.sym_sess.op_type) {
case VIRTIO_CRYPTO_SYM_OP_NONE:
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- ret = transform_cipher_param(&xform1, sess_param);
+ ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session msg (%i)", ret);
sess_param->session_id = ret;
@@ -352,7 +363,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- if (unlikely(sess_param->hash_mode !=
+ if (unlikely(sess_param->u.sym_sess.hash_mode !=
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) {
sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP;
VC_LOG_ERR("Error transform session message (%i)",
@@ -362,7 +373,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
xform1.next = &xform2;
- ret = transform_chain_param(&xform1, sess_param);
+ ret = transform_chain_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session message (%i)", ret);
sess_param->session_id = ret;
@@ -377,7 +388,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
session = rte_cryptodev_sym_session_create(vcrypto->cid, &xform1,
- vcrypto->sess_pool);
+ vcrypto->sym_sess_pool);
if (!session) {
VC_LOG_ERR("Failed to create session");
sess_param->session_id = -VIRTIO_CRYPTO_ERR;
@@ -402,22 +413,282 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
vcrypto->last_session_id++;
}
+static int
+tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len)
+{
+ size_t tlen = -EINVAL, len;
+
+ if (tlv[0] != type)
+ return -EINVAL;
+
+ if (tlv[1] == 0x82) {
+ len = (tlv[2] << 8) | tlv[3];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[4], len);
+ tlen = len + 4;
+ } else if (tlv[1] == 0x81) {
+ len = tlv[2];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[3], len);
+ tlen = len + 3;
+ } else {
+ len = tlv[1];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[2], len);
+ tlen = len + 2;
+ }
+
+ *data_len = len;
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len,
+ struct rte_crypto_asym_xform *xform)
+{
+ uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL,
+ *dq = NULL, *qinv = NULL, *v = NULL, *tlv;
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen;
+ int len, i;
+
+ RTE_SET_USED(der_len);
+
+ for (i = 0; i < 8; i++) {
+ if (der[i] == 0x30) {
+ der = &der[i];
+ break;
+ }
+ }
+
+ if (der[0] != 0x30)
+ return -EINVAL;
+
+ if (der[1] == 0x82)
+ tlv = &der[4];
+ else if (der[1] == 0x81)
+ tlv = &der[3];
+ else
+ return -EINVAL;
+
+ len = tlv_decode(tlv, 0x02, &v, &vlen);
+ if (len < 0 || v[0] != 0x0 || vlen != 1) {
+ len = -EINVAL;
+ goto _error;
+ }
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &n, &nlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &e, &elen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &d, &dlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &p, &plen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &q, &qlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dp, &dplen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dq, &dqlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &qinv, &qinvlen);
+ if (len < 0)
+ goto _error;
+
+ xform->rsa.n.data = n;
+ xform->rsa.n.length = nlen;
+ xform->rsa.e.data = e;
+ xform->rsa.e.length = elen;
+ xform->rsa.d.data = d;
+ xform->rsa.d.length = dlen;
+ xform->rsa.qt.p.data = p;
+ xform->rsa.qt.p.length = plen;
+ xform->rsa.qt.q.data = q;
+ xform->rsa.qt.q.length = qlen;
+ xform->rsa.qt.dP.data = dp;
+ xform->rsa.qt.dP.length = dplen;
+ xform->rsa.qt.dQ.data = dq;
+ xform->rsa.qt.dQ.length = dqlen;
+ xform->rsa.qt.qInv.data = qinv;
+ xform->rsa.qt.qInv.length = qinvlen;
+
+ RTE_ASSERT((tlv + len - &der[0]) == der_len);
+ return 0;
+_error:
+ rte_free(v);
+ rte_free(n);
+ rte_free(e);
+ rte_free(d);
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+ return len;
+}
+
+static int
+transform_rsa_param(struct rte_crypto_asym_xform *xform,
+ VhostUserCryptoAsymSessionParam *param)
+{
+ int ret = -EINVAL;
+
+ ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform);
+ if (ret < 0)
+ goto _error;
+
+ switch (param->u.rsa.padding_algo) {
+ case VIRTIO_CRYPTO_RSA_RAW_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE;
+ break;
+ case VIRTIO_CRYPTO_RSA_PKCS1_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
+ break;
+ default:
+ VC_LOG_ERR("Unknown padding type");
+ goto _error;
+ }
+
+ switch (param->u.rsa.private_key_type) {
+ case VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP:
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
+ break;
+ case VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT:
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
+ break;
+ default:
+ VC_LOG_ERR("Unknown private key type");
+ goto _error;
+ }
+
+ xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
+_error:
+ return ret;
+}
+
+static void
+vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ struct rte_cryptodev_asym_session *session = NULL;
+ struct vhost_crypto_session *vhost_session;
+ struct rte_crypto_asym_xform xform = {0};
+ int ret;
+
+ switch (sess_param->u.asym_sess.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ ret = transform_rsa_param(&xform, &sess_param->u.asym_sess);
+ if (unlikely(ret)) {
+ VC_LOG_ERR("Error transform session msg (%i)", ret);
+ sess_param->session_id = ret;
+ return;
+ }
+ break;
+ default:
+ VC_LOG_ERR("Invalid op algo");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform,
+ vcrypto->asym_sess_pool, (void *)&session);
+ if (!session) {
+ VC_LOG_ERR("Failed to create session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ /* insert session to map */
+ vhost_session = rte_malloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ vhost_session->asym = session;
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
+ VC_LOG_ERR("Failed to insert session to hash table");
+
+ if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
+ vcrypto->last_session_id, vcrypto->dev->vid);
+
+ sess_param->session_id = vcrypto->last_session_id;
+ vcrypto->last_session_id++;
+}
+
+static void
+vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
+ vhost_crypto_create_asym_sess(vcrypto, sess_param);
+ else
+ vhost_crypto_create_sym_sess(vcrypto, sess_param);
+}
+
static int
vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
{
- struct rte_cryptodev_sym_session *session;
+ struct rte_cryptodev_asym_session *asym_session = NULL;
+ struct rte_cryptodev_sym_session *sym_session = NULL;
+ struct vhost_crypto_session *vhost_session = NULL;
uint64_t sess_id = session_id;
int ret;
ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id,
- (void **)&session);
-
+ (void **)&vhost_session);
if (unlikely(ret < 0)) {
- VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id);
+ VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) {
+ if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ sym_session = vhost_session->sym;
+ } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ asym_session = vhost_session->asym;
+ } else {
+ VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id);
+ return -VIRTIO_CRYPTO_INVSESS;
+ }
+
+ if (sym_session != NULL &&
+ rte_cryptodev_sym_session_free(vcrypto->cid, sym_session) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+
+ if (asym_session != NULL &&
+ rte_cryptodev_asym_session_free(vcrypto->cid, asym_session) < 0) {
VC_LOG_DBG("Failed to free session");
return -VIRTIO_CRYPTO_ERR;
}
@@ -430,6 +701,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id,
vcrypto->dev->vid);
+ rte_free(vhost_session);
return 0;
}
@@ -1123,6 +1395,118 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline uint8_t
+vhost_crypto_check_akcipher_request(struct virtio_crypto_akcipher_data_req *req)
+{
+ RTE_SET_USED(req);
+ return VIRTIO_CRYPTO_OK;
+}
+
+static __rte_always_inline uint8_t
+prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
+ struct vhost_crypto_data_req *vc_req,
+ struct virtio_crypto_op_data_req *req,
+ struct vhost_crypto_desc *head,
+ uint32_t max_n_descs)
+{
+ uint8_t ret = vhost_crypto_check_akcipher_request(&req->u.akcipher_req);
+ struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa;
+ struct vhost_crypto_desc *desc = head;
+ uint16_t wlen = 0;
+
+ if (unlikely(ret != VIRTIO_CRYPTO_OK))
+ goto error_exit;
+
+ /* prepare */
+ switch (vcrypto->option) {
+ case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
+ vc_req->wb_pool = vcrypto->wb_pool;
+ if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->sign.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->sign.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->sign.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->sign.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->cipher.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->cipher.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->cipher.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->cipher.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else {
+ goto error_exit;
+ }
+ break;
+ case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+ default:
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
+ if (unlikely(vc_req->inhdr == NULL)) {
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ vc_req->inhdr->status = VIRTIO_CRYPTO_OK;
+ vc_req->len = wlen + INHDR_LEN;
+ return 0;
+error_exit:
+ if (vc_req->wb)
+ free_wb_data(vc_req->wb, vc_req->wb_pool);
+
+ vc_req->len = INHDR_LEN;
+ return ret;
+}
+
/**
* Process on descriptor
*/
@@ -1133,17 +1517,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
uint16_t desc_idx)
__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
{
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_data_req *vc_req, *vc_req_out;
+ struct rte_cryptodev_asym_session *asym_session;
+ struct rte_cryptodev_sym_session *sym_session;
+ struct vhost_crypto_session *vhost_session;
+ struct vhost_crypto_desc *desc = descs;
+ uint32_t nb_descs = 0, max_n_descs, i;
+ struct vhost_crypto_data_req data_req;
struct virtio_crypto_op_data_req req;
struct virtio_crypto_inhdr *inhdr;
- struct vhost_crypto_desc *desc = descs;
struct vring_desc *src_desc;
uint64_t session_id;
uint64_t dlen;
- uint32_t nb_descs = 0, max_n_descs, i;
int err;
+ vc_req = &data_req;
vc_req->desc_idx = desc_idx;
vc_req->dev = vcrypto->dev;
vc_req->vq = vq;
@@ -1226,12 +1614,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
switch (req.header.opcode) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+ vc_req_out = rte_mbuf_to_priv(op->sym->m_src);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
session_id = req.header.session_id;
/* one branch to avoid unnecessary table lookup */
- if (vcrypto->cache_session_id != session_id) {
+ if (vcrypto->cache_sym_session_id != session_id) {
err = rte_hash_lookup_data(vcrypto->session_map,
- &session_id, (void **)&session);
+ &session_id, (void **)&vhost_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to find session %"PRIu64,
@@ -1239,13 +1629,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
goto error_exit;
}
- vcrypto->cache_session = session;
- vcrypto->cache_session_id = session_id;
+ vcrypto->cache_sym_session = vhost_session->sym;
+ vcrypto->cache_sym_session_id = session_id;
}
- session = vcrypto->cache_session;
+ sym_session = vcrypto->cache_sym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- err = rte_crypto_op_attach_sym_session(op, session);
+ err = rte_crypto_op_attach_sym_session(op, sym_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to attach session to op");
@@ -1257,12 +1648,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
err = VIRTIO_CRYPTO_NOTSUPP;
break;
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- err = prepare_sym_cipher_op(vcrypto, op, vc_req,
+ err = prepare_sym_cipher_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.cipher, desc,
max_n_descs);
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- err = prepare_sym_chain_op(vcrypto, op, vc_req,
+ err = prepare_sym_chain_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.chain, desc,
max_n_descs);
break;
@@ -1271,6 +1662,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
VC_LOG_ERR("Failed to process sym request");
goto error_exit;
}
+ break;
+ case VIRTIO_CRYPTO_AKCIPHER_SIGN:
+ case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
+ case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
+ case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
+ session_id = req.header.session_id;
+
+ /* one branch to avoid unnecessary table lookup */
+ if (vcrypto->cache_asym_session_id != session_id) {
+ err = rte_hash_lookup_data(vcrypto->session_map,
+ &session_id, (void **)&vhost_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to find asym session %"PRIu64,
+ session_id);
+ goto error_exit;
+ }
+
+ vcrypto->cache_asym_session = vhost_session->asym;
+ vcrypto->cache_asym_session_id = session_id;
+ }
+
+ asym_session = vcrypto->cache_asym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
+ err = rte_crypto_op_attach_asym_session(op, asym_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to attach asym session to op");
+ goto error_exit;
+ }
+
+ vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
+ vc_req_out->wb = NULL;
+
+ switch (req.header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ err = prepare_asym_rsa_op(vcrypto, op, vc_req_out,
+ &req, desc, max_n_descs);
+ break;
+ }
+ if (unlikely(err != 0)) {
+ VC_LOG_ERR("Failed to process asym request");
+ goto error_exit;
+ }
+
break;
default:
err = VIRTIO_CRYPTO_ERR;
@@ -1294,12 +1732,22 @@ static __rte_always_inline struct vhost_virtqueue *
vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
struct vhost_virtqueue *old_vq)
{
- struct rte_mbuf *m_src = op->sym->m_src;
- struct rte_mbuf *m_dst = op->sym->m_dst;
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
+ struct rte_mbuf *m_src = NULL, *m_dst = NULL;
+ struct vhost_crypto_data_req *vc_req;
struct vhost_virtqueue *vq;
uint16_t used_idx, desc_idx;
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+ vc_req = rte_mbuf_to_priv(m_src);
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session);
+ } else {
+ VC_LOG_ERR("Invalid crypto op type");
+ return NULL;
+ }
+
if (unlikely(!vc_req)) {
VC_LOG_ERR("Failed to retrieve vc_req");
return NULL;
@@ -1321,25 +1769,36 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
vq->used->ring[desc_idx].len = vc_req->len;
- rte_mempool_put(m_src->pool, (void *)m_src);
-
- if (m_dst)
- rte_mempool_put(m_dst->pool, (void *)m_dst);
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_mempool_put(m_src->pool, (void *)m_src);
+ if (m_dst)
+ rte_mempool_put(m_dst->pool, (void *)m_dst);
+ }
return vc_req->vq;
}
static __rte_always_inline uint16_t
-vhost_crypto_complete_one_vm_requests(struct rte_crypto_op **ops,
+vhost_crypto_complete_one_vm_requests(int vid, int qid, struct rte_crypto_op **ops,
uint16_t nb_ops, int *callfd)
{
+ struct virtio_net *dev = get_device(vid);
uint16_t processed = 1;
struct vhost_virtqueue *vq, *tmp_vq;
+ if (unlikely(dev == NULL)) {
+ VC_LOG_ERR("Invalid vid %i", vid);
+ return 0;
+ }
+
if (unlikely(nb_ops == 0))
return 0;
- vq = vhost_crypto_finalize_one_request(ops[0], NULL);
+ rte_spinlock_lock(&dev->virtqueue_lock);
+ tmp_vq = dev->virtqueue[qid];
+ rte_spinlock_unlock(&dev->virtqueue_lock);
+
+ vq = vhost_crypto_finalize_one_request(ops[0], tmp_vq);
if (unlikely(vq == NULL))
return 0;
tmp_vq = vq;
@@ -1384,7 +1843,7 @@ rte_vhost_crypto_driver_start(const char *path)
int
rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
- struct rte_mempool *sess_pool,
+ struct rte_mempool *sym_sess_pool, struct rte_mempool *asym_sess_pool,
int socket_id)
{
struct virtio_net *dev = get_device(vid);
@@ -1405,9 +1864,11 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
return -ENOMEM;
}
- vcrypto->sess_pool = sess_pool;
+ vcrypto->sym_sess_pool = sym_sess_pool;
+ vcrypto->asym_sess_pool = asym_sess_pool;
vcrypto->cid = cryptodev_id;
- vcrypto->cache_session_id = UINT64_MAX;
+ vcrypto->cache_sym_session_id = UINT64_MAX;
+ vcrypto->cache_asym_session_id = UINT64_MAX;
vcrypto->last_session_id = 1;
vcrypto->dev = dev;
vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE;
@@ -1578,7 +2039,12 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
return 0;
}
+ rte_spinlock_lock(&dev->virtqueue_lock);
vq = dev->virtqueue[qid];
+ rte_spinlock_unlock(&dev->virtqueue_lock);
+
+ if (!vq || !vq->avail)
+ return 0;
avail_idx = *((volatile uint16_t *)&vq->avail->idx);
start_idx = vq->last_used_idx;
@@ -1660,7 +2126,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
}
uint16_t
-rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops,
+rte_vhost_crypto_finalize_requests(int vid, int qid, struct rte_crypto_op **ops,
uint16_t nb_ops, int *callfds, uint16_t *nb_callfds)
{
struct rte_crypto_op **tmp_ops = ops;
@@ -1669,7 +2135,7 @@ rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops,
uint16_t idx = 0;
while (left) {
- count = vhost_crypto_complete_one_vm_requests(tmp_ops, left,
+ count = vhost_crypto_complete_one_vm_requests(vid, qid, tmp_ops, left,
&callfd);
if (unlikely(count == 0))
break;
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 5f470da38a..b139c64a71 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -456,7 +456,9 @@ vhost_user_set_features(struct virtio_net **pdev,
if (!vq)
continue;
+ rte_spinlock_lock(&dev->virtqueue_lock);
dev->virtqueue[dev->nr_vring] = NULL;
+ rte_spinlock_unlock(&dev->virtqueue_lock);
cleanup_vq(vq, 1);
cleanup_vq_inflight(dev, vq);
/* vhost_user_lock_all_queue_pairs locked all qps */
@@ -600,7 +602,9 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
if (vq != dev->virtqueue[vq->index]) {
VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node);
+ rte_spinlock_lock(&dev->virtqueue_lock);
dev->virtqueue[vq->index] = vq;
+ rte_spinlock_unlock(&dev->virtqueue_lock);
}
if (vq_is_packed(dev)) {
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index edf7adb3c0..6174f32dcf 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -99,11 +99,10 @@ typedef struct VhostUserLog {
/* Comply with Cryptodev-Linux */
#define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512
#define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64
+#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024
/* Same structure as vhost-user backend session info */
-typedef struct VhostUserCryptoSessionParam {
- int64_t session_id;
- uint32_t op_code;
+typedef struct VhostUserCryptoSymSessionParam {
uint32_t cipher_algo;
uint32_t cipher_key_len;
uint32_t hash_algo;
@@ -114,10 +113,37 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
+} VhostUserCryptoSymSessionParam;
+
+
+typedef struct VhostUserCryptoAsymRsaParam {
+ uint32_t padding_algo;
+ uint32_t hash_algo;
+ uint8_t private_key_type;
+} VhostUserCryptoAsymRsaParam;
+
+typedef struct VhostUserCryptoAsymSessionParam {
+ uint32_t algo;
+ uint32_t key_type;
+ uint32_t key_len;
+ uint8_t *key;
+ union {
+ VhostUserCryptoAsymRsaParam rsa;
+ } u;
+ uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH];
+} VhostUserCryptoAsymSessionParam;
+
+typedef struct VhostUserCryptoSessionParam {
+ uint32_t op_code;
+ union {
+ VhostUserCryptoSymSessionParam sym_sess;
+ VhostUserCryptoAsymSessionParam asym_sess;
+ } u;
+ uint64_t session_id;
} VhostUserCryptoSessionParam;
typedef struct VhostUserVringArea {
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index e3b93573c8..703a059768 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -9,6 +9,7 @@
#define VIRTIO_CRYPTO_SERVICE_HASH 1
#define VIRTIO_CRYPTO_SERVICE_MAC 2
#define VIRTIO_CRYPTO_SERVICE_AEAD 3
+#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
#define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op))
@@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
+#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
uint32_t opcode;
uint32_t algo;
uint32_t flag;
@@ -152,6 +157,63 @@ struct virtio_crypto_aead_create_session_req {
uint8_t padding[32];
};
+struct virtio_crypto_rsa_session_para {
+#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0
+#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
+ uint32_t padding_algo;
+
+#define VIRTIO_CRYPTO_RSA_NO_HASH 0
+#define VIRTIO_CRYPTO_RSA_MD2 1
+#define VIRTIO_CRYPTO_RSA_MD3 2
+#define VIRTIO_CRYPTO_RSA_MD4 3
+#define VIRTIO_CRYPTO_RSA_MD5 4
+#define VIRTIO_CRYPTO_RSA_SHA1 5
+#define VIRTIO_CRYPTO_RSA_SHA256 6
+#define VIRTIO_CRYPTO_RSA_SHA384 7
+#define VIRTIO_CRYPTO_RSA_SHA512 8
+#define VIRTIO_CRYPTO_RSA_SHA224 9
+ uint32_t hash_algo;
+
+#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_UNKNOWN 0
+#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP 1
+#define VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT 2
+ uint8_t private_key_type;
+};
+
+struct virtio_crypto_ecdsa_session_para {
+#define VIRTIO_CRYPTO_CURVE_UNKNOWN 0
+#define VIRTIO_CRYPTO_CURVE_NIST_P192 1
+#define VIRTIO_CRYPTO_CURVE_NIST_P224 2
+#define VIRTIO_CRYPTO_CURVE_NIST_P256 3
+#define VIRTIO_CRYPTO_CURVE_NIST_P384 4
+#define VIRTIO_CRYPTO_CURVE_NIST_P521 5
+ uint32_t curve_id;
+ uint32_t padding;
+};
+
+struct virtio_crypto_akcipher_session_para {
+#define VIRTIO_CRYPTO_NO_AKCIPHER 0
+#define VIRTIO_CRYPTO_AKCIPHER_RSA 1
+#define VIRTIO_CRYPTO_AKCIPHER_DSA 2
+#define VIRTIO_CRYPTO_AKCIPHER_ECDSA 3
+ uint32_t algo;
+
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
+ uint32_t keytype;
+ uint32_t keylen;
+
+ union {
+ struct virtio_crypto_rsa_session_para rsa;
+ struct virtio_crypto_ecdsa_session_para ecdsa;
+ } u;
+};
+
+struct virtio_crypto_akcipher_create_session_req {
+ struct virtio_crypto_akcipher_session_para para;
+ uint8_t padding[36];
+};
+
struct virtio_crypto_alg_chain_session_para {
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2
@@ -219,6 +281,8 @@ struct virtio_crypto_op_ctrl_req {
mac_create_session;
struct virtio_crypto_aead_create_session_req
aead_create_session;
+ struct virtio_crypto_akcipher_create_session_req
+ akcipher_create_session;
struct virtio_crypto_destroy_session_req
destroy_session;
uint8_t padding[56];
@@ -238,6 +302,14 @@ struct virtio_crypto_op_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
#define VIRTIO_CRYPTO_AEAD_DECRYPT \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
+#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
+#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
uint32_t opcode;
/* algo should be service-specific algorithms */
uint32_t algo;
@@ -362,6 +434,16 @@ struct virtio_crypto_aead_data_req {
uint8_t padding[32];
};
+struct virtio_crypto_akcipher_para {
+ uint32_t src_data_len;
+ uint32_t dst_data_len;
+};
+
+struct virtio_crypto_akcipher_data_req {
+ struct virtio_crypto_akcipher_para para;
+ uint8_t padding[40];
+};
+
/* The request of the data virtqueue's packet */
struct virtio_crypto_op_data_req {
struct virtio_crypto_op_header header;
@@ -371,6 +453,7 @@ struct virtio_crypto_op_data_req {
struct virtio_crypto_hash_data_req hash_req;
struct virtio_crypto_mac_data_req mac_req;
struct virtio_crypto_aead_data_req aead_req;
+ struct virtio_crypto_akcipher_data_req akcipher_req;
uint8_t padding[48];
} u;
};
@@ -380,6 +463,8 @@ struct virtio_crypto_op_data_req {
#define VIRTIO_CRYPTO_BADMSG 2
#define VIRTIO_CRYPTO_NOTSUPP 3
#define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */
+#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */
+#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
/* The accelerator hardware is ready */
#define VIRTIO_CRYPTO_S_HW_READY (1 << 0)
@@ -410,7 +495,7 @@ struct virtio_crypto_config {
uint32_t max_cipher_key_len;
/* Maximum length of authenticated key */
uint32_t max_auth_key_len;
- uint32_t reserve;
+ uint32_t akcipher_algo;
/* Maximum size of each crypto request's content */
uint64_t max_size;
};
--
2.21.0
* [PATCH 4/6] crypto/virtio: add asymmetric RSA support
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 3/6] vhost: add asymmetric RSA support Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 5/6] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Jay Zhou
Cc: Anoob Joseph, Akhil Goyal, bruce.richardson, ciara.power, jerinj,
fanzhang.oss, arkadiuszx.kusztal, kai.ji, jack.bond-preston,
david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, ruifeng.wang,
abhinandan.gujjar, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
lee.daly, Brian Dooley, Gowrishankar Muthukrishnan
Add support for asymmetric RSA operations (SIGN, VERIFY, ENCRYPT and
DECRYPT) in the virtio PMD.
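For reference, a minimal sketch of submitting one RSA SIGN operation
through the generic cryptodev asym API that this PMD now implements
(session creation, dequeue and error handling are omitted; the helper
name and all buffers are assumptions):

#include <rte_crypto_asym.h>
#include <rte_cryptodev.h>

static int
submit_rsa_sign(uint8_t dev_id, struct rte_mempool *op_pool, void *sess,
		uint8_t *msg, uint32_t msg_len, uint8_t *sig, uint32_t sig_len)
{
	struct rte_crypto_op *op;

	op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
	if (op == NULL)
		return -1;

	op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
	op->asym->rsa.message.data = msg;
	op->asym->rsa.message.length = msg_len;
	op->asym->rsa.sign.data = sig;
	op->asym->rsa.sign.length = sig_len;
	/* Padding comes from the session xform (patch 1), not the op. */

	rte_crypto_op_attach_asym_session(op, sess);
	return rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) == 1 ? 0 : -1;
}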
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 388 +++++++++++++++---
drivers/crypto/virtio/virtio_rxtx.c | 233 ++++++++++-
3 files changed, 572 insertions(+), 68 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_crypto_capabilities.h b/drivers/crypto/virtio/virtio_crypto_capabilities.h
index 03c30deefd..1b26ff6720 100644
--- a/drivers/crypto/virtio/virtio_crypto_capabilities.h
+++ b/drivers/crypto/virtio/virtio_crypto_capabilities.h
@@ -48,4 +48,23 @@
}, } \
}
+#define VIRTIO_ASYM_CAPABILITIES \
+ { /* RSA */ \
+ .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \
+ {.asym = { \
+ .xform_capa = { \
+ .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \
+ .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+ (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+ (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+ (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \
+ {.modlen = { \
+ .min = 1, \
+ .max = 1024, \
+ .increment = 1 \
+ }, } \
+ } \
+ }, } \
+ }
+
#endif /* _VIRTIO_CRYPTO_CAPABILITIES_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4854820ba6..b2a9995c13 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -41,6 +41,11 @@ static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *session);
+static void virtio_crypto_asym_clear_session(struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess);
+static int virtio_crypto_asym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *session);
/*
* The set of PCI devices this driver supports
@@ -53,6 +58,7 @@ static const struct rte_pci_id pci_id_virtio_crypto_map[] = {
static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
VIRTIO_SYM_CAPABILITIES,
+ VIRTIO_ASYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -88,7 +94,7 @@ virtio_crypto_send_command(struct virtqueue *vq,
return -EINVAL;
}
/* cipher only is supported, it is available if auth_key is NULL */
- if (!cipher_key) {
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
return -EINVAL;
}
@@ -104,19 +110,23 @@ virtio_crypto_send_command(struct virtqueue *vq,
/* calculate the length of cipher key */
if (cipher_key) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key
- = ctrl->u.sym_create_session.u.cipher
- .para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key
- = ctrl->u.sym_create_session.u.chain
- .para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
+ switch (ctrl->u.sym_create_session.op_type) {
+ case VIRTIO_CRYPTO_SYM_OP_CIPHER:
+ len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
+ break;
+ case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
+ len_cipher_key =
+ ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
+ break;
+ default:
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ return -EINVAL;
+ }
+ } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_AKCIPHER) {
+ len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
return -EINVAL;
}
}
@@ -511,7 +521,10 @@ static struct rte_cryptodev_ops virtio_crypto_dev_ops = {
/* Crypto related operations */
.sym_session_get_size = virtio_crypto_sym_get_session_private_size,
.sym_session_configure = virtio_crypto_sym_configure_session,
- .sym_session_clear = virtio_crypto_sym_clear_session
+ .sym_session_clear = virtio_crypto_sym_clear_session,
+ .asym_session_get_size = virtio_crypto_sym_get_session_private_size,
+ .asym_session_configure = virtio_crypto_asym_configure_session,
+ .asym_session_clear = virtio_crypto_asym_clear_session
};
static void
@@ -734,6 +747,9 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+ RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
@@ -926,32 +942,24 @@ virtio_crypto_check_sym_clear_session_paras(
#define NUM_ENTRY_SYM_CLEAR_SESSION 2
static void
-virtio_crypto_sym_clear_session(
+virtio_crypto_clear_session(
struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
+ struct virtio_crypto_op_ctrl_req *ctrl)
{
struct virtio_crypto_hw *hw;
struct virtqueue *vq;
- struct virtio_crypto_session *session;
- struct virtio_crypto_op_ctrl_req *ctrl;
struct vring_desc *desc;
uint8_t *status;
uint8_t needed = 1;
uint32_t head;
- uint8_t *malloc_virt_addr;
uint64_t malloc_phys_addr;
uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
uint32_t desc_offset = len_op_ctrl_req + len_inhdr;
-
- PMD_INIT_FUNC_TRACE();
-
- if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
- return;
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
vq = hw->cvq;
- session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -963,34 +971,15 @@ virtio_crypto_sym_clear_session(
return;
}
- /*
- * malloc memory to store information of ctrl request op,
- * returned status and desc vring
- */
- malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
- + NUM_ENTRY_SYM_CLEAR_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (malloc_virt_addr == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
- return;
- }
- malloc_phys_addr = rte_malloc_virt2iova(malloc_virt_addr);
-
- /* assign ctrl request op part */
- ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
- ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
- /* default data virtqueue is 0 */
- ctrl->header.queue_id = 0;
- ctrl->u.destroy_session.session_id = session->session_id;
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
/* status part */
status = &(((struct virtio_crypto_inhdr *)
- ((uint8_t *)malloc_virt_addr + len_op_ctrl_req))->status);
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
*status = VIRTIO_CRYPTO_ERR;
/* indirect desc vring part */
- desc = (struct vring_desc *)((uint8_t *)malloc_virt_addr
- + desc_offset);
+ desc = (struct vring_desc *)((uint8_t *)ctrl + desc_offset);
/* ctrl request part */
desc[0].addr = malloc_phys_addr;
@@ -1052,8 +1041,8 @@ virtio_crypto_sym_clear_session(
if (*status != VIRTIO_CRYPTO_OK) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
"status=%"PRIu32", session_id=%"PRIu64"",
- *status, session->session_id);
- rte_free(malloc_virt_addr);
+ *status, session_id);
+ rte_free(ctrl);
return;
}
@@ -1062,9 +1051,86 @@ virtio_crypto_sym_clear_session(
vq->vq_free_cnt, vq->vq_desc_head_idx);
VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
- session->session_id);
+ session_id);
- rte_free(malloc_virt_addr);
+ rte_free(ctrl);
+}
+
+static void
+virtio_crypto_sym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
+ return;
+
+ session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
+}
+
+static void
+virtio_crypto_asym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
}
static struct rte_crypto_cipher_xform *
@@ -1295,6 +1361,23 @@ virtio_crypto_check_sym_configure_session_paras(
return 0;
}
+static int
+virtio_crypto_check_asym_configure_session_paras(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *asym_sess)
+{
+ if (unlikely(xform == NULL) || unlikely(asym_sess == NULL)) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
+ return -1;
+ }
+
+ if (virtio_crypto_check_sym_session_paras(dev) < 0)
+ return -1;
+
+ return 0;
+}
+
static int
virtio_crypto_sym_configure_session(
struct rte_cryptodev *dev,
@@ -1386,6 +1469,207 @@ virtio_crypto_sym_configure_session(
return -1;
}
+static size_t
+tlv_encode(uint8_t **tlv, uint8_t type, uint8_t *data, size_t len)
+{
+ uint8_t *lenval = NULL;
+ size_t lenval_n = 0;
+
+ if (len > 65535) {
+ goto _exit;
+ } else if (len > 255) {
+ lenval_n = 4 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x82;
+ lenval[2] = (len & 0xFF00) >> 8;
+ lenval[3] = (len & 0xFF);
+ rte_memcpy(&lenval[4], data, len);
+ } else if (len > 127) {
+ lenval_n = 3 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x81;
+ lenval[2] = len;
+ rte_memcpy(&lenval[3], data, len);
+ } else {
+ lenval_n = 2 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = len;
+ rte_memcpy(&lenval[2], data, len);
+ }
+
+_exit:
+ *tlv = lenval;
+ return lenval_n;
+}
+
+static int
+virtio_crypto_asym_rsa_xform_to_der(
+ struct rte_crypto_asym_xform *xform,
+ unsigned char **der)
+{
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, tlen;
+ uint8_t *n, *e, *d, *p, *q, *dp, *dq, *qinv, *t;
+ uint8_t ver[3] = {0x02, 0x01, 0x00};
+
+ if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA)
+ return -EINVAL;
+
+ /* Length of sequence in bytes */
+ tlen = RTE_DIM(ver);
+ nlen = tlv_encode(&n, 0x02, xform->rsa.n.data, xform->rsa.n.length);
+ elen = tlv_encode(&e, 0x02, xform->rsa.e.data, xform->rsa.e.length);
+ tlen += (nlen + elen);
+
+ dlen = tlv_encode(&d, 0x02, xform->rsa.d.data, xform->rsa.d.length);
+ tlen += dlen;
+
+ plen = tlv_encode(&p, 0x02, xform->rsa.qt.p.data, xform->rsa.qt.p.length);
+ qlen = tlv_encode(&q, 0x02, xform->rsa.qt.q.data, xform->rsa.qt.q.length);
+ dplen = tlv_encode(&dp, 0x02, xform->rsa.qt.dP.data, xform->rsa.qt.dP.length);
+ dqlen = tlv_encode(&dq, 0x02, xform->rsa.qt.dQ.data, xform->rsa.qt.dQ.length);
+ qinvlen = tlv_encode(&qinv, 0x02, xform->rsa.qt.qInv.data, xform->rsa.qt.qInv.length);
+ tlen += (plen + qlen + dplen + dqlen + qinvlen);
+
+ t = rte_malloc(NULL, tlen, 0);
+ *der = t;
+ rte_memcpy(t, ver, RTE_DIM(ver));
+ t += RTE_DIM(ver);
+ rte_memcpy(t, n, nlen);
+ t += nlen;
+ rte_memcpy(t, e, elen);
+ t += elen;
+ rte_free(n);
+ rte_free(e);
+
+ rte_memcpy(t, d, dlen);
+ t += dlen;
+ rte_free(d);
+
+ rte_memcpy(t, p, plen);
+ t += plen;
+ rte_memcpy(t, q, qlen);
+ t += qlen;
+ rte_memcpy(t, dp, dplen);
+ t += dplen;
+ rte_memcpy(t, dq, dqlen);
+ t += dqlen;
+ rte_memcpy(t, qinv, qinvlen);
+ t += qinvlen;
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+
+ t = *der;
+ tlen = tlv_encode(der, 0x30, t, tlen);
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_configure_session(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *sess)
+{
+ struct virtio_crypto_akcipher_session_para *para;
+ struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session *session;
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *control_vq;
+ uint8_t *key = NULL;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = virtio_crypto_check_asym_configure_session_paras(dev, xform,
+ sess);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
+ return ret;
+ }
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+ memset(session, 0, sizeof(struct virtio_crypto_session));
+ ctrl_req = &session->ctrl;
+ ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
+ /* FIXME: support multiqueue */
+ ctrl_req->header.queue_id = 0;
+ ctrl_req->header.algo = VIRTIO_CRYPTO_SERVICE_AKCIPHER;
+ para = &ctrl_req->u.akcipher_create_session.para;
+
+ switch (xform->xform_type) {
+ case RTE_CRYPTO_ASYM_XFORM_RSA:
+ para->algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+
+ if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_EXP) {
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC;
+ para->u.rsa.private_key_type = VIRTIO_CRYPTO_RSA_PRIVATE_KEY_EXP;
+ } else {
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE;
+ para->u.rsa.private_key_type = VIRTIO_CRYPTO_RSA_PRIVATE_KEY_QT;
+ }
+
+ if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_RAW_PADDING;
+ } else if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_PKCS1_PADDING;
+ switch (xform->rsa.padding.hash) {
+ case RTE_CRYPTO_AUTH_SHA1:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_MD5;
+ break;
+ default:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_NO_HASH;
+ }
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid padding type");
+ return -EINVAL;
+ }
+
+ ret = virtio_crypto_asym_rsa_xform_to_der(xform, &key);
+ if (ret <= 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives");
+ return ret;
+ }
+
+ ctrl_req->u.akcipher_create_session.para.keylen = ret;
+ break;
+ default:
+ para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ hw = dev->data->dev_private;
+ control_vq = hw->cvq;
+ ret = virtio_crypto_send_command(control_vq, ctrl_req,
+ key, NULL, session);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ return 0;
+error_out:
+ return -1;
+}
+
static void
virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
struct rte_cryptodev_info *info)
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 48b5f4ebbb..b85a322175 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -343,6 +343,203 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_asym_pkt_header_arrange(
+ struct rte_crypto_op *cop,
+ struct virtio_crypto_op_data_req *data,
+ struct virtio_crypto_session *session)
+{
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_op_data_req *req_data = data;
+ struct virtio_crypto_akcipher_session_para *para;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+
+ req_data->header.session_id = session->session_id;
+ para = &ctrl->u.akcipher_create_session.para;
+
+ if (ctrl->header.algo != VIRTIO_CRYPTO_SERVICE_AKCIPHER) {
+ req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ return 0;
+ }
+
+ switch (para->algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ req_data->header.algo = para->algo;
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_SIGN;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.sign.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_VERIFY;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.sign.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_ENCRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.cipher.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DECRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.cipher.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else {
+ return -EINVAL;
+ }
+
+ break;
+ default:
+ req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ return 0;
+}
+
+static int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_desc *start_dp;
+ struct vring_desc *desc;
+ uint64_t indirect_op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t indirect_vring_addr_offset = req_data_len +
+ sizeof(struct virtio_crypto_inhdr);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ indirect_op_data_req_phys_addr =
+ rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ /* point to indirect vring entry */
+ desc = (struct vring_desc *)
+ ((uint8_t *)op_data_req + indirect_vring_addr_offset);
+ for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_OP - 1); idx++)
+ desc[idx].next = idx + 1;
+ desc[NUM_ENTRY_VIRTIO_CRYPTO_OP - 1].next = VQ_RING_DESC_CHAIN_END;
+
+ idx = 0;
+
+ /* indirect vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = indirect_op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* indirect vring: last part, status returned */
+ desc[idx].addr = indirect_op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ /* use a single buffer */
+ start_dp = txvq->vq_ring.desc;
+ start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
+ indirect_vring_addr_offset;
+ start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
+ start_dp[head_idx].flags = VRING_DESC_F_INDIRECT;
+
+ idx = start_dp[head_idx].next;
+ txvq->vq_desc_head_idx = idx;
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ vq_update_avail_ring(txvq, head_idx);
+
+ return 0;
+}
+
static int
virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -353,6 +550,9 @@ virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop);
break;
+ case RTE_CRYPTO_OP_TYPE_ASYMMETRIC:
+ ret = virtqueue_crypto_asym_enqueue_xmit(txvq, cop);
+ break;
default:
VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u",
cop->type);
@@ -476,27 +676,28 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
VIRTIO_CRYPTO_TX_LOG_DBG("%d packets to xmit", nb_pkts);
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
- struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
- /* nb_segs is always 1 at virtio crypto situation */
- int need = txm->nb_segs - txvq->vq_free_cnt;
-
- /*
- * Positive value indicates it hasn't enough space in vring
- * descriptors
- */
- if (unlikely(need > 0)) {
+ if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
+ /* nb_segs is always 1 at virtio crypto situation */
+ int need = txm->nb_segs - txvq->vq_free_cnt;
+
/*
- * try it again because the receive process may be
- * free some space
+ * Positive value indicates it hasn't enough space in vring
+ * descriptors
*/
- need = txm->nb_segs - txvq->vq_free_cnt;
if (unlikely(need > 0)) {
- VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
- "descriptors to transmit");
- break;
+ /*
+ * try it again because the receive process may
+ * free some space
+ */
+ need = txm->nb_segs - txvq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
+ "descriptors to transmit");
+ break;
+ }
}
}
-
txvq->packets_sent_total++;
/* Enqueue Packet buffers */
--
2.21.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 5/6] examples/vhost_crypto: add asymmetric support
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
` (3 preceding siblings ...)
2024-09-05 14:56 ` [PATCH 4/6] crypto/virtio: " Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-09-05 14:56 ` [PATCH 6/6] app/test: add asymmetric tests for virtio pmd Gowrishankar Muthukrishnan
2024-10-04 6:11 ` [PATCH v2 0/2] cryptodev: fix RSA xform to support VirtIO standard Gowrishankar Muthukrishnan
6 siblings, 0 replies; 16+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Maxime Coquelin, Chenbo Xia
Cc: Anoob Joseph, Akhil Goyal, bruce.richardson, ciara.power, jerinj,
fanzhang.oss, arkadiuszx.kusztal, kai.ji, jack.bond-preston,
david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, ruifeng.wang,
abhinandan.gujjar, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, lee.daly, Brian Dooley,
Gowrishankar Muthukrishnan
Add asymmetric support to the vhost_crypto example application.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
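[Reviewer note, illustrative only: the core of this change is splitting
the single session pool into symmetric and asymmetric pools, roughly as
below; names are placeholders and the calls mirror the patch body.]

	info->sym_sess_pool = rte_cryptodev_sym_session_pool_create(
			"SYM_SESS_POOL", SESSION_MAP_ENTRIES,
			rte_cryptodev_sym_get_private_session_size(info->cid),
			0, 0, socket_id);
	info->asym_sess_pool = rte_cryptodev_asym_session_pool_create(
			"ASYM_SESS_POOL", SESSION_MAP_ENTRIES,
			0, 64, socket_id);
	/* both pools are then handed to rte_vhost_crypto_create() */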
examples/vhost_crypto/main.c | 50 +++++++++++++++++++++---------------
1 file changed, 29 insertions(+), 21 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 558c09a60f..bed7fc637d 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -45,7 +45,8 @@ struct lcore_option {
struct __rte_cache_aligned vhost_crypto_info {
int vids[MAX_NB_SOCKETS];
uint32_t nb_vids;
- struct rte_mempool *sess_pool;
+ struct rte_mempool *sym_sess_pool;
+ struct rte_mempool *asym_sess_pool;
struct rte_mempool *cop_pool;
uint8_t cid;
uint32_t qid;
@@ -302,7 +303,8 @@ new_device(int vid)
return -ENOENT;
}
- ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
+ ret = rte_vhost_crypto_create(vid, info->cid, info->sym_sess_pool,
+ info->asym_sess_pool,
rte_lcore_to_socket_id(options.los[i].lcore_id));
if (ret) {
RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
@@ -362,8 +364,8 @@ destroy_device(int vid)
}
static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
- .new_device = new_device,
- .destroy_device = destroy_device,
+ .new_connection = new_device,
+ .destroy_connection = destroy_device,
};
static int
@@ -385,7 +387,7 @@ vhost_crypto_worker(void *arg)
for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
if (rte_crypto_op_bulk_alloc(info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
+ RTE_CRYPTO_OP_TYPE_UNDEFINED, ops[i],
burst_size) < burst_size) {
RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
ret = -1;
@@ -409,20 +411,12 @@ vhost_crypto_worker(void *arg)
rte_cryptodev_enqueue_burst(
info->cid, info->qid, ops[j],
fetched);
- if (unlikely(rte_crypto_op_bulk_alloc(
- info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops[j], fetched) < fetched)) {
- RTE_LOG(ERR, USER1, "Failed realloc\n");
- return -1;
- }
-
fetched = rte_cryptodev_dequeue_burst(
info->cid, info->qid,
ops_deq[j], RTE_MIN(burst_size,
info->nb_inflight_ops));
fetched = rte_vhost_crypto_finalize_requests(
- ops_deq[j], fetched, callfds,
+ info->vids[i], j, ops_deq[j], fetched, callfds,
&nb_callfds);
info->nb_inflight_ops -= fetched;
@@ -455,7 +449,8 @@ free_resource(void)
continue;
rte_mempool_free(info->cop_pool);
- rte_mempool_free(info->sess_pool);
+ rte_mempool_free(info->sym_sess_pool);
+ rte_mempool_free(info->asym_sess_pool);
for (j = 0; j < lo->nb_sockets; j++) {
rte_vhost_driver_unregister(lo->socket_files[i]);
@@ -539,21 +534,34 @@ main(int argc, char *argv[])
goto error_exit;
}
- snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
- info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
+ snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
+ info->sym_sess_pool = rte_cryptodev_sym_session_pool_create(name,
SESSION_MAP_ENTRIES,
rte_cryptodev_sym_get_private_session_size(
info->cid), 0, 0,
rte_lcore_to_socket_id(lo->lcore_id));
- if (!info->sess_pool) {
- RTE_LOG(ERR, USER1, "Failed to create mempool");
+ if (!info->sym_sess_pool) {
+ RTE_LOG(ERR, USER1, "Failed to create sym session mempool");
+ goto error_exit;
+ }
+
+ /* TODO: storing vhost_crypto_data_req (56 bytes) in user_data,
+ * but it needs to be moved somewhere.
+ */
+ snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
+ info->asym_sess_pool = rte_cryptodev_asym_session_pool_create(name,
+ SESSION_MAP_ENTRIES, 0, 64,
+ rte_lcore_to_socket_id(lo->lcore_id));
+
+ if (!info->asym_sess_pool) {
+ RTE_LOG(ERR, USER1, "Failed to create asym session mempool");
goto error_exit;
}
snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
info->cop_pool = rte_crypto_op_pool_create(name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
+ RTE_CRYPTO_OP_TYPE_UNDEFINED, NB_MEMPOOL_OBJS,
NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
rte_lcore_to_socket_id(lo->lcore_id));
@@ -566,7 +574,7 @@ main(int argc, char *argv[])
options.infos[i] = info;
qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
- qp_conf.mp_session = info->sess_pool;
+ qp_conf.mp_session = info->sym_sess_pool;
for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
ret = rte_cryptodev_queue_pair_setup(info->cid, j,
--
2.21.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 6/6] app/test: add asymmetric tests for virtio pmd
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
` (4 preceding siblings ...)
2024-09-05 14:56 ` [PATCH 5/6] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
@ 2024-09-05 14:56 ` Gowrishankar Muthukrishnan
2024-10-10 3:10 ` Stephen Hemminger
2024-10-04 6:11 ` [PATCH v2 0/2] cryptodev: fix RSA xform to support VirtIO standard Gowrishankar Muthukrishnan
6 siblings, 1 reply; 16+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-09-05 14:56 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, bruce.richardson, ciara.power, jerinj,
arkadiuszx.kusztal, kai.ji, jack.bond-preston, david.marchand,
hemant.agrawal, pablo.de.lara.guarch, fiona.trahe,
declan.doherty, matan, ruifeng.wang, abhinandan.gujjar,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, lee.daly, Brian Dooley,
Gowrishankar Muthukrishnan
Add asymmetric tests for Virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
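[Reviewer note, not part of the commit: with a virtio crypto device
available, the new suite can be run from the dpdk-test binary via the
autotest command registered in this patch; the invocation below is an
assumption, following the usual autotest flow.]

	RTE>> cryptodev_virtio_asym_autotest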
app/test/test_cryptodev_asym.c | 42 +++++++++++++++++++---
app/test/test_cryptodev_rsa_test_vectors.h | 26 ++++++++++++++
2 files changed, 64 insertions(+), 4 deletions(-)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 0928367cb0..b425643211 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -475,7 +475,7 @@ testsuite_setup(void)
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
- rte_cryptodev_socket_id(dev_id)),
+ (int8_t)rte_cryptodev_socket_id(dev_id)),
"Failed to setup queue pair %u on cryptodev %u ASYM",
qp_id, dev_id);
}
@@ -538,7 +538,7 @@ ut_setup_asym(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id,
&ts_params->qp_conf,
- rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+ (int8_t)rte_cryptodev_socket_id(ts_params->valid_devs[0])),
"Failed to setup queue pair %u on cryptodev %u",
qp_id, ts_params->valid_devs[0]);
}
@@ -3319,7 +3319,6 @@ rsa_encrypt(const struct rsa_test_data_2 *vector, uint8_t *cipher_buf)
self->op->asym->rsa.cipher.data = cipher_buf;
self->op->asym->rsa.cipher.length = 0;
SET_RSA_PARAM(self->op->asym->rsa, vector, message);
- self->op->asym->rsa.padding.type = vector->padding;
rte_crypto_op_attach_asym_session(self->op, self->sess);
TEST_ASSERT_SUCCESS(send_one(),
@@ -3343,7 +3342,6 @@ rsa_decrypt(const struct rsa_test_data_2 *vector, uint8_t *plaintext,
self->op->asym->rsa.message.data = plaintext;
self->op->asym->rsa.message.length = 0;
self->op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
- self->op->asym->rsa.padding.type = vector->padding;
rte_crypto_op_attach_asym_session(self->op, self->sess);
TEST_ASSERT_SUCCESS(send_one(),
"Failed to process crypto op (Decryption)");
@@ -3385,6 +3383,7 @@ kat_rsa_encrypt(const void *data)
SET_RSA_PARAM(xform.rsa, vector, n);
SET_RSA_PARAM(xform.rsa, vector, e);
SET_RSA_PARAM(xform.rsa, vector, d);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
int ret = rsa_init_session(&xform);
@@ -3415,6 +3414,7 @@ kat_rsa_encrypt_crt(const void *data)
SET_RSA_PARAM_QT(xform.rsa, vector, dP);
SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
int ret = rsa_init_session(&xform);
if (ret) {
@@ -3440,6 +3440,7 @@ kat_rsa_decrypt(const void *data)
SET_RSA_PARAM(xform.rsa, vector, n);
SET_RSA_PARAM(xform.rsa, vector, e);
SET_RSA_PARAM(xform.rsa, vector, d);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
int ret = rsa_init_session(&xform);
@@ -3470,6 +3471,7 @@ kat_rsa_decrypt_crt(const void *data)
SET_RSA_PARAM_QT(xform.rsa, vector, dP);
SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
int ret = rsa_init_session(&xform);
if (ret) {
@@ -3634,6 +3636,22 @@ static struct unit_test_suite cryptodev_octeontx_asym_testsuite = {
}
};
+static struct unit_test_suite cryptodev_virtio_asym_testsuite = {
+ .suite_name = "Crypto Device VIRTIO ASYM Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_capability),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+ test_rsa_sign_verify),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+ test_rsa_sign_verify_crt),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec_crt),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_openssl_asym(void)
{
@@ -3702,6 +3720,22 @@ test_cryptodev_cn10k_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
+static int
+test_cryptodev_virtio_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
+REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);
diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 1b7b451387..1c46a8117a 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -358,6 +358,28 @@ struct rte_crypto_asym_xform rsa_xform = {
.d = {
.data = rsa_d,
.length = sizeof(rsa_d)
+ },
+ .qt = {
+ .p = {
+ .data = rsa_p,
+ .length = sizeof(rsa_p)
+ },
+ .q = {
+ .data = rsa_q,
+ .length = sizeof(rsa_q)
+ },
+ .dP = {
+ .data = rsa_dP,
+ .length = sizeof(rsa_dP)
+ },
+ .dQ = {
+ .data = rsa_dQ,
+ .length = sizeof(rsa_dQ)
+ },
+ .qInv = {
+ .data = rsa_qInv,
+ .length = sizeof(rsa_qInv)
+ },
}
}
};
@@ -377,6 +399,10 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
.length = sizeof(rsa_e)
},
.key_type = RTE_RSA_KEY_TYPE_QT,
+ .d = {
+ .data = rsa_d,
+ .length = sizeof(rsa_d)
+ },
.qt = {
.p = {
.data = rsa_p,
--
2.21.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 6/6] app/test: add asymmetric tests for virtio pmd
2024-09-05 14:56 ` [PATCH 6/6] app/test: add asymmetric tests for virtio pmd Gowrishankar Muthukrishnan
@ 2024-10-10 3:10 ` Stephen Hemminger
0 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2024-10-10 3:10 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan
Cc: dev, Akhil Goyal, Fan Zhang, Anoob Joseph, bruce.richardson,
ciara.power, jerinj, arkadiuszx.kusztal, kai.ji,
jack.bond-preston, david.marchand, hemant.agrawal,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
ruifeng.wang, abhinandan.gujjar, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, lee.daly, Brian Dooley
On Thu, 5 Sep 2024 20:26:10 +0530
Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com> wrote:
> Add asymmetric tests for Virtio PMD.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
This patch set has lots of build errors. See patchwork.
For example:
*Build Failed #1:
OS: RHEL94-64
Target: x86_64-native-linuxapp-clang
FAILED: lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
clang -Ilib/librte_vhost.a.p -Ilib -I../lib -Ilib/vhost -I../lib/vhost -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common -I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/ethdev -I../lib/ethdev -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Ilib/cryptodev -I../lib/cryptodev -Ilib/rcu -I../lib/rcu -Ilib/hash -I../lib/hash -Ilib/pci -I../lib/pci -Ilib/dmadev -I../lib/dmadev -fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -
Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-address-of-packed-member -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -DVHOST_CLANG_UNROLL_PRAGMA -fno-strict-aliasing -DRTE_LOG_DEFAULT_LOGTYPE=lib.vhost -DRTE_ANNOTATE_LOCKS -Wthread-safety -MD -MQ lib/librte_vhost.a.p/vhost_vhost_crypto.c.o -MF lib/librte_vhost.a.p/vhost_vhost_crypto.c.o.d -o lib/librte_vhost.a.p/vhost_vhost_crypto.c.o -c ../lib/vhost/vhost_crypto.c
../lib/vhost/vhost_crypto.c:1426:24: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1426 | rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1437:21: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1437 | rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
| ^
../lib/vhost/vhost_crypto.c:1446:21: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1446 | rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1449:24: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1449 | rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1454:24: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1454 | rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1465:23: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1465 | rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
| ^
../lib/vhost/vhost_crypto.c:1474:23: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1474 | rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1477:24: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1477 | rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
| ^
../lib/vhost/vhost_crypto.c:1493:18: error: calling function 'get_data_ptr' requires holding mutex 'vc_req->vq->iotlb_lock' [-Werror,-Wthread-safety-analysis]
1493 | vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
| ^
9 errors generated.
[419/3000] Generating symbol file lib/librte_mldev.so.25.0.p/librte_mldev.so.25.0.symbols
[420/3000] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
[421/3000] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
[422/3000] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
[423/3000] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
[424/3000] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
[425/3000] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
[426/3000] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
[427/3000] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
ninja: build stopped
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 0/2] cryptodev: fix RSA xform to support VirtIO standard
2024-09-05 14:56 [PATCH 0/6] vhost: add asymmetric crypto support Gowrishankar Muthukrishnan
` (5 preceding siblings ...)
2024-09-05 14:56 ` [PATCH 6/6] app/test: add asymmetric tests for virtio pmd Gowrishankar Muthukrishnan
@ 2024-10-04 6:11 ` Gowrishankar Muthukrishnan
2024-10-04 6:11 ` [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax Gowrishankar Muthukrishnan
2024-10-04 6:11 ` [PATCH v2 2/2] cryptodev: move RSA padding information into xform Gowrishankar Muthukrishnan
6 siblings, 2 replies; 16+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-10-04 6:11 UTC (permalink / raw)
To: dev
Cc: Anoob Joseph, Akhil Goyal, bruce.richardson, jerinj,
arkadiuszx.kusztal, kai.ji, jack.bond-preston, david.marchand,
hemant.agrawal, pablo.de.lara.guarch, fiona.trahe,
declan.doherty, matan, ruifeng.wang, abhinandan.gujjar,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, lee.daly, Brian Dooley,
Gowrishankar Muthukrishnan
This series fixes the RSA crypto xform to support the VirtIO standard.
Changes:
v2:
- Decoupled the spec-related patches from v1 into this series.
Gowrishankar Muthukrishnan (2):
cryptodev: fix RSA xform for ASN.1 syntax
cryptodev: move RSA padding information into xform
app/test/test_cryptodev_asym.c | 10 ++--
app/test/test_cryptodev_rsa_test_vectors.h | 2 +
drivers/common/cpt/cpt_ucode_asym.h | 4 +-
drivers/crypto/cnxk/cnxk_ae.h | 13 +++--
drivers/crypto/octeontx/otx_cryptodev_ops.c | 4 +-
drivers/crypto/openssl/openssl_pmd_private.h | 1 +
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 1 +
drivers/crypto/qat/qat_asym.c | 17 ++++---
examples/fips_validation/main.c | 52 +++++++++++---------
lib/cryptodev/rte_crypto_asym.h | 8 +--
11 files changed, 63 insertions(+), 53 deletions(-)
--
2.21.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
2024-10-04 6:11 ` [PATCH v2 0/2] cryptodev: fix RSA xform to support VirtIO standard Gowrishankar Muthukrishnan
@ 2024-10-04 6:11 ` Gowrishankar Muthukrishnan
2024-10-07 7:19 ` Kusztal, ArkadiuszX
2024-10-04 6:11 ` [PATCH v2 2/2] cryptodev: move RSA padding information into xform Gowrishankar Muthukrishnan
1 sibling, 1 reply; 16+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-10-04 6:11 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, bruce.richardson, jerinj, arkadiuszx.kusztal,
kai.ji, jack.bond-preston, david.marchand, hemant.agrawal,
pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
ruifeng.wang, abhinandan.gujjar, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, lee.daly, Brian Dooley, Gowrishankar Muthukrishnan
As per the ASN.1 syntax (RFC 3447, Appendix A.1.2), an RSA private key
needs the quintuple to be specified along with the private exponent.
It is up to the implementation to handle these internally; RTE itself
should not make them mutually exclusive. Removing the union over them
allows the asymmetric implementation in virtio to consume the xform as
laid out in the ASN.1 syntax.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
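[Reviewer note, a hedged sketch of what the union-to-struct change
permits: with both members available, an application can populate the
private exponent and the CRT quintuple together, mirroring the ASN.1
RSAPrivateKey layout; all key buffers below are placeholders.]

	struct rte_crypto_rsa_xform rsa = {
		.key_type = RTE_RSA_KEY_TYPE_QT,
		/* .n and .e elided */
		.d  = { .data = d_buf,  .length = sizeof(d_buf) },
		.qt = {
			.p    = { .data = p_buf,    .length = sizeof(p_buf) },
			.q    = { .data = q_buf,    .length = sizeof(q_buf) },
			.dP   = { .data = dp_buf,   .length = sizeof(dp_buf) },
			.dQ   = { .data = dq_buf,   .length = sizeof(dq_buf) },
			.qInv = { .data = qinv_buf, .length = sizeof(qinv_buf) },
		},
	};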
lib/cryptodev/rte_crypto_asym.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 39d3da3952..c33be3b155 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -306,7 +306,7 @@ struct rte_crypto_rsa_xform {
enum rte_crypto_rsa_priv_key_type key_type;
- union {
+ struct {
rte_crypto_uint d;
/**< the RSA private exponent */
struct rte_crypto_rsa_priv_key_qt qt;
--
2.21.0
^ permalink raw reply [flat|nested] 16+ messages in thread
* RE: [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
2024-10-04 6:11 ` [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax Gowrishankar Muthukrishnan
@ 2024-10-07 7:19 ` Kusztal, ArkadiuszX
2024-10-07 7:32 ` Kusztal, ArkadiuszX
0 siblings, 1 reply; 16+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-10-07 7:19 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, Richardson, Bruce, jerinj, Ji, Kai,
jack.bond-preston, Marchand, David, hemant.agrawal,
De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
ruifeng.wang, Gujjar, Abhinandan S, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, Daly, Lee, Dooley, Brian
Acked with a small comment.
> -----Original Message-----
> From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> Sent: Friday, October 4, 2024 8:11 AM
> To: dev@dpdk.org; Akhil Goyal <gakhil@marvell.com>; Fan Zhang
> <fanzhang.oss@gmail.com>
> Cc: Anoob Joseph <anoobj@marvell.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; jerinj@marvell.com; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>; jack.bond-
> preston@foss.arm.com; Marchand, David <david.marchand@redhat.com>;
> hemant.agrawal@nxp.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> ruifeng.wang@arm.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> maxime.coquelin@redhat.com; chenbox@nvidia.com;
> sunilprakashrao.uttarwar@amd.com; andrew.boyer@amd.com;
> ajit.khaparde@broadcom.com; raveendra.padasalagi@broadcom.com;
> vikas.gupta@broadcom.com; zhangfei.gao@linaro.org; g.singh@nxp.com;
> jianjay.zhou@huawei.com; Daly, Lee <lee.daly@intel.com>; Dooley, Brian
> <brian.dooley@intel.com>; Gowrishankar Muthukrishnan
> <gmuthukrishn@marvell.com>
> Subject: [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
>
> As per the ASN.1 syntax (RFC 3447, Appendix A.1.2), an RSA private key needs
It could be RFC 8017 instead.
> the quintuple to be specified along with the private exponent.
> It is up to the implementation to handle these internally; RTE itself should
> not make them mutually exclusive. Removing the union over them allows the
> asymmetric implementation in virtio to consume the xform as laid out in the ASN.1 syntax.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> lib/cryptodev/rte_crypto_asym.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index 39d3da3952..c33be3b155 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -306,7 +306,7 @@ struct rte_crypto_rsa_xform {
>
> enum rte_crypto_rsa_priv_key_type key_type;
>
> - union {
> + struct {
> rte_crypto_uint d;
> /**< the RSA private exponent */
> struct rte_crypto_rsa_priv_key_qt qt;
> --
> 2.21.0
Acked-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
^ permalink raw reply [flat|nested] 16+ messages in thread
* RE: [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
2024-10-07 7:19 ` Kusztal, ArkadiuszX
@ 2024-10-07 7:32 ` Kusztal, ArkadiuszX
0 siblings, 0 replies; 16+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-10-07 7:32 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, Gowrishankar Muthukrishnan, dev,
Akhil Goyal, Fan Zhang
Cc: Anoob Joseph, Richardson, Bruce, jerinj, Ji, Kai,
jack.bond-preston, Marchand, David, hemant.agrawal,
De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
ruifeng.wang, Gujjar, Abhinandan S, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, Daly, Lee, Dooley, Brian
> -----Original Message-----
> From: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>
> Sent: Monday, October 7, 2024 9:20 AM
> To: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>;
> dev@dpdk.org; Akhil Goyal <gakhil@marvell.com>; Fan Zhang
> <fanzhang.oss@gmail.com>
> Cc: Anoob Joseph <anoobj@marvell.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; jerinj@marvell.com; Ji, Kai <kai.ji@intel.com>;
> jack.bond-preston@foss.arm.com; Marchand, David
> <david.marchand@redhat.com>; hemant.agrawal@nxp.com; De Lara Guarch,
> Pablo <pablo.de.lara.guarch@intel.com>; Trahe, Fiona
> <fiona.trahe@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> matan@nvidia.com; ruifeng.wang@arm.com; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; maxime.coquelin@redhat.com;
> chenbox@nvidia.com; sunilprakashrao.uttarwar@amd.com;
> andrew.boyer@amd.com; ajit.khaparde@broadcom.com;
> raveendra.padasalagi@broadcom.com; vikas.gupta@broadcom.com;
> zhangfei.gao@linaro.org; g.singh@nxp.com; jianjay.zhou@huawei.com; Daly,
> Lee <lee.daly@intel.com>; Dooley, Brian <brian.dooley@intel.com>
> Subject: RE: [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
>
> Acked with a small comment.
>
> > -----Original Message-----
> > From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> > Sent: Friday, October 4, 2024 8:11 AM
> > To: dev@dpdk.org; Akhil Goyal <gakhil@marvell.com>; Fan Zhang
> > <fanzhang.oss@gmail.com>
> > Cc: Anoob Joseph <anoobj@marvell.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; jerinj@marvell.com; Kusztal, ArkadiuszX
> > <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>; jack.bond-
> > preston@foss.arm.com; Marchand, David <david.marchand@redhat.com>;
> > hemant.agrawal@nxp.com; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; Trahe, Fiona
> > <fiona.trahe@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> > matan@nvidia.com; ruifeng.wang@arm.com; Gujjar, Abhinandan S
> > <abhinandan.gujjar@intel.com>; maxime.coquelin@redhat.com;
> > chenbox@nvidia.com; sunilprakashrao.uttarwar@amd.com;
> > andrew.boyer@amd.com; ajit.khaparde@broadcom.com;
> > raveendra.padasalagi@broadcom.com;
> > vikas.gupta@broadcom.com; zhangfei.gao@linaro.org; g.singh@nxp.com;
> > jianjay.zhou@huawei.com; Daly, Lee <lee.daly@intel.com>; Dooley, Brian
> > <brian.dooley@intel.com>; Gowrishankar Muthukrishnan
> > <gmuthukrishn@marvell.com>
> > Subject: [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax
Additionally, it should not be a fix.
The RFC mandates usage of ASN.1, but only in the case of sharing a private key across the network.
How cryptodev should interpret it is up to the implementation.
> >
> > As per the ASN.1 syntax (RFC 3447, Appendix A.1.2), an RSA private key
> > needs
> It could be RFC 8017 instead.
> > the quintuple to be specified along with the private exponent.
> > It is up to the implementation to handle these internally; RTE
> > itself should not make them mutually exclusive. Removing the union over
> > them allows the asymmetric implementation in virtio to consume the xform as laid
> out in the ASN.1 syntax.
> >
> > Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> > ---
> > lib/cryptodev/rte_crypto_asym.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/lib/cryptodev/rte_crypto_asym.h
> > b/lib/cryptodev/rte_crypto_asym.h index 39d3da3952..c33be3b155 100644
> > --- a/lib/cryptodev/rte_crypto_asym.h
> > +++ b/lib/cryptodev/rte_crypto_asym.h
> > @@ -306,7 +306,7 @@ struct rte_crypto_rsa_xform {
> >
> > enum rte_crypto_rsa_priv_key_type key_type;
> >
> > - union {
> > + struct {
> > rte_crypto_uint d;
> > /**< the RSA private exponent */
> > struct rte_crypto_rsa_priv_key_qt qt;
> > --
> > 2.21.0
>
> Acked-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 2/2] cryptodev: move RSA padding information into xform
2024-10-04 6:11 ` [PATCH v2 0/2] cryptodev: fix RSA xform to support VirtIO standard Gowrishankar Muthukrishnan
2024-10-04 6:11 ` [PATCH v2 1/2] cryptodev: fix RSA xform for ASN.1 syntax Gowrishankar Muthukrishnan
@ 2024-10-04 6:11 ` Gowrishankar Muthukrishnan
2024-10-09 15:23 ` Akhil Goyal
1 sibling, 1 reply; 16+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-10-04 6:11 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang, Anoob Joseph, Ankur Dwivedi,
Tejasree Kondoj, Kai Ji, Brian Dooley,
Gowrishankar Muthukrishnan
Cc: bruce.richardson, jerinj, arkadiuszx.kusztal, jack.bond-preston,
david.marchand, hemant.agrawal, pablo.de.lara.guarch,
fiona.trahe, declan.doherty, matan, ruifeng.wang,
abhinandan.gujjar, maxime.coquelin, chenbox,
sunilprakashrao.uttarwar, andrew.boyer, ajit.khaparde,
raveendra.padasalagi, vikas.gupta, zhangfei.gao, g.singh,
jianjay.zhou, lee.daly
RSA padding information belongs in the xform rather than in the
crypto op: it is tied to the hash algorithm used for the entire
crypto session, where that algorithm is applied in the message
digest itself. The virtio standard spec likewise associates this
information with asymmetric session creation. Hence, move this
info from the crypto op into the xform structure.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
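[Reviewer note: after this change the padding is fixed once at session
creation instead of per operation; a minimal sketch, where the hash
choice is illustrative and the key material is elided.]

	struct rte_crypto_asym_xform xform = {
		.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
		.rsa = {
			.padding = {
				.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
				.hash = RTE_CRYPTO_AUTH_SHA256,
			},
			/* key material (.n, .e, .d/.qt) elided */
		},
	};
	/* ops attached to this session no longer set rsa.padding.type */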
app/test/test_cryptodev_asym.c | 10 ++--
app/test/test_cryptodev_rsa_test_vectors.h | 2 +
drivers/common/cpt/cpt_ucode_asym.h | 4 +-
drivers/crypto/cnxk/cnxk_ae.h | 13 +++--
drivers/crypto/octeontx/otx_cryptodev_ops.c | 4 +-
drivers/crypto/openssl/openssl_pmd_private.h | 1 +
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 1 +
drivers/crypto/qat/qat_asym.c | 17 ++++---
examples/fips_validation/main.c | 52 +++++++++++---------
lib/cryptodev/rte_crypto_asym.h | 6 +--
11 files changed, 62 insertions(+), 52 deletions(-)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f0b5d38543..6cb416ffe3 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -80,7 +80,6 @@ queue_ops_rsa_sign_verify(void *sess)
asym_op->rsa.message.length = rsaplaintext.len;
asym_op->rsa.sign.length = RTE_DIM(rsa_n);
asym_op->rsa.sign.data = output_buf;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
debug_hexdump(stdout, "message", asym_op->rsa.message.data,
asym_op->rsa.message.length);
@@ -112,7 +111,6 @@ queue_ops_rsa_sign_verify(void *sess)
/* Verify sign */
asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
/* Process crypto operation */
if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
@@ -171,7 +169,6 @@ queue_ops_rsa_enc_dec(void *sess)
asym_op->rsa.cipher.data = cipher_buf;
asym_op->rsa.cipher.length = RTE_DIM(rsa_n);
asym_op->rsa.message.length = rsaplaintext.len;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
debug_hexdump(stdout, "message", asym_op->rsa.message.data,
asym_op->rsa.message.length);
@@ -203,7 +200,6 @@ queue_ops_rsa_enc_dec(void *sess)
asym_op = result_op->asym;
asym_op->rsa.message.length = RTE_DIM(rsa_n);
asym_op->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
- asym_op->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
/* Process crypto operation */
if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
@@ -3323,7 +3319,6 @@ rsa_encrypt(const struct rsa_test_data_2 *vector, uint8_t *cipher_buf)
self->op->asym->rsa.cipher.data = cipher_buf;
self->op->asym->rsa.cipher.length = 0;
SET_RSA_PARAM(self->op->asym->rsa, vector, message);
- self->op->asym->rsa.padding.type = vector->padding;
rte_crypto_op_attach_asym_session(self->op, self->sess);
TEST_ASSERT_SUCCESS(send_one(),
@@ -3347,7 +3342,6 @@ rsa_decrypt(const struct rsa_test_data_2 *vector, uint8_t *plaintext,
self->op->asym->rsa.message.data = plaintext;
self->op->asym->rsa.message.length = 0;
self->op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
- self->op->asym->rsa.padding.type = vector->padding;
rte_crypto_op_attach_asym_session(self->op, self->sess);
TEST_ASSERT_SUCCESS(send_one(),
"Failed to process crypto op (Decryption)");
@@ -3389,6 +3383,7 @@ kat_rsa_encrypt(const void *data)
SET_RSA_PARAM(xform.rsa, vector, n);
SET_RSA_PARAM(xform.rsa, vector, e);
SET_RSA_PARAM(xform.rsa, vector, d);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
int ret = rsa_init_session(&xform);
@@ -3419,6 +3414,7 @@ kat_rsa_encrypt_crt(const void *data)
SET_RSA_PARAM_QT(xform.rsa, vector, dP);
SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
int ret = rsa_init_session(&xform);
if (ret) {
@@ -3444,6 +3440,7 @@ kat_rsa_decrypt(const void *data)
SET_RSA_PARAM(xform.rsa, vector, n);
SET_RSA_PARAM(xform.rsa, vector, e);
SET_RSA_PARAM(xform.rsa, vector, d);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_EXP;
int ret = rsa_init_session(&xform);
@@ -3474,6 +3471,7 @@ kat_rsa_decrypt_crt(const void *data)
SET_RSA_PARAM_QT(xform.rsa, vector, dP);
SET_RSA_PARAM_QT(xform.rsa, vector, dQ);
SET_RSA_PARAM_QT(xform.rsa, vector, qInv);
+ xform.rsa.padding.type = vector->padding;
xform.rsa.key_type = RTE_RSA_KEY_TYPE_QT;
int ret = rsa_init_session(&xform);
if (ret) {
diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 89981f13f0..1b7b451387 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -345,6 +345,7 @@ struct rte_crypto_asym_xform rsa_xform = {
.next = NULL,
.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
.rsa = {
+ .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
.n = {
.data = rsa_n,
.length = sizeof(rsa_n)
@@ -366,6 +367,7 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
.next = NULL,
.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
.rsa = {
+ .padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5,
.n = {
.data = rsa_n,
.length = sizeof(rsa_n)
diff --git a/drivers/common/cpt/cpt_ucode_asym.h b/drivers/common/cpt/cpt_ucode_asym.h
index e1034bbeb4..5122378ec7 100644
--- a/drivers/common/cpt/cpt_ucode_asym.h
+++ b/drivers/common/cpt/cpt_ucode_asym.h
@@ -327,7 +327,7 @@ cpt_rsa_prep(struct asym_op_params *rsa_params,
/* Result buffer */
rlen = mod_len;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX;
vq_cmd_w0.s.param2 = exp_len;
@@ -412,7 +412,7 @@ cpt_rsa_crt_prep(struct asym_op_params *rsa_params,
/* Result buffer */
rlen = mod_len;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/*Use mod_exp operation for no_padding type */
vq_cmd_w0.s.opcode.minor = CPT_MINOR_OP_MODEX_CRT;
} else {
diff --git a/drivers/crypto/cnxk/cnxk_ae.h b/drivers/crypto/cnxk/cnxk_ae.h
index ef9cb5eb91..1bb5a450c5 100644
--- a/drivers/crypto/cnxk/cnxk_ae.h
+++ b/drivers/crypto/cnxk/cnxk_ae.h
@@ -177,6 +177,9 @@ cnxk_ae_fill_rsa_params(struct cnxk_ae_sess *sess,
rsa->n.length = mod_len;
rsa->e.length = exp_len;
+ /* Set padding info */
+ rsa->padding.type = xform->rsa.padding.type;
+
return 0;
}
@@ -366,7 +369,7 @@ cnxk_ae_rsa_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = total_key_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX;
w4.s.param2 = exp_len;
@@ -421,7 +424,7 @@ cnxk_ae_rsa_exp_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = mod_len + privkey_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/* Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX;
w4.s.param2 = privkey_len;
@@ -479,7 +482,7 @@ cnxk_ae_rsa_crt_prep(struct rte_crypto_op *op, struct roc_ae_buf_ptr *meta_buf,
dptr += in_size;
dlen = total_key_len + in_size;
- if (rsa_op.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
/*Use mod_exp operation for no_padding type */
w4.s.opcode_minor = ROC_AE_MINOR_OP_MODEX_CRT;
} else {
@@ -1151,7 +1154,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr,
memcpy(rsa->cipher.data, rptr, rsa->cipher.length);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
rsa->message.length = rsa_ctx->n.length;
memcpy(rsa->message.data, rptr, rsa->message.length);
} else {
@@ -1171,7 +1174,7 @@ cnxk_ae_dequeue_rsa_op(struct rte_crypto_op *cop, uint8_t *rptr,
memcpy(rsa->sign.data, rptr, rsa->sign.length);
break;
case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
rsa->sign.length = rsa_ctx->n.length;
memcpy(rsa->sign.data, rptr, rsa->sign.length);
} else {
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index bafd0c1669..9a758cd297 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -708,7 +708,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
rsa->message.length = rsa_ctx->n.length;
else {
/* Get length of decrypted output */
@@ -725,7 +725,7 @@ otx_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
break;
case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
+ if (rsa_ctx->padding.type == RTE_CRYPTO_RSA_PADDING_NONE)
rsa->sign.length = rsa_ctx->n.length;
else {
/* Get length of decrypted output */
diff --git a/drivers/crypto/openssl/openssl_pmd_private.h b/drivers/crypto/openssl/openssl_pmd_private.h
index 0282b3d829..27551a7999 100644
--- a/drivers/crypto/openssl/openssl_pmd_private.h
+++ b/drivers/crypto/openssl/openssl_pmd_private.h
@@ -197,6 +197,7 @@ struct __rte_cache_aligned openssl_asym_session {
union {
struct rsa {
RSA *rsa;
+ uint32_t pad;
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
EVP_PKEY_CTX * ctx;
#endif
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index e10a172f46..3db90a768d 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -2699,7 +2699,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
struct openssl_asym_session *sess)
{
struct rte_crypto_asym_op *op = cop->asym;
- uint32_t pad = (op->rsa.padding.type);
+ uint32_t pad = sess->u.r.pad;
uint8_t *tmp;
size_t outlen = 0;
int ret = -1;
@@ -3082,7 +3082,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
int ret = 0;
struct rte_crypto_asym_op *op = cop->asym;
RSA *rsa = sess->u.r.rsa;
- uint32_t pad = (op->rsa.padding.type);
+ uint32_t pad = sess->u.r.pad;
uint8_t *tmp;
cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index b7b612fc57..0fc125d2a3 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -889,6 +889,7 @@ static int openssl_set_asym_session_parameters(
if (!n || !e)
goto err_rsa;
+ asym_session->u.r.pad = xform->rsa.padding.type;
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index e43884e69b..9e97582e22 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -362,7 +362,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req,
alg_bytesize = qat_function.bytesize;
if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0);
break;
@@ -374,7 +374,7 @@ rsa_set_pub_input(struct icp_qat_fw_pke_request *qat_req,
}
HEXDUMP("RSA Message", cookie->input_array[0], alg_bytesize);
} else {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.sign, alg_bytesize, 0);
break;
@@ -460,7 +460,7 @@ rsa_set_priv_input(struct icp_qat_fw_pke_request *qat_req,
if (asym_op->rsa.op_type ==
RTE_CRYPTO_ASYM_OP_DECRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.cipher, alg_bytesize, 0);
HEXDUMP("RSA ciphertext", cookie->input_array[0],
@@ -474,7 +474,7 @@ rsa_set_priv_input(struct icp_qat_fw_pke_request *qat_req,
} else if (asym_op->rsa.op_type ==
RTE_CRYPTO_ASYM_OP_SIGN) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
SET_PKE_LN(asym_op->rsa.message, alg_bytesize, 0);
HEXDUMP("RSA text to be signed", cookie->input_array[0],
@@ -514,7 +514,8 @@ rsa_set_input(struct icp_qat_fw_pke_request *qat_req,
static uint8_t
rsa_collect(struct rte_crypto_asym_op *asym_op,
- const struct qat_asym_op_cookie *cookie)
+ const struct qat_asym_op_cookie *cookie,
+ const struct rte_crypto_asym_xform *xform)
{
uint32_t alg_bytesize = cookie->alg_bytesize;
@@ -530,7 +531,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op,
HEXDUMP("RSA Encrypted data", cookie->output_array[0],
alg_bytesize);
} else {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
rte_memcpy(asym_op->rsa.cipher.data,
cookie->output_array[0],
@@ -547,7 +548,7 @@ rsa_collect(struct rte_crypto_asym_op *asym_op,
}
} else {
if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
- switch (asym_op->rsa.padding.type) {
+ switch (xform->rsa.padding.type) {
case RTE_CRYPTO_RSA_PADDING_NONE:
rte_memcpy(asym_op->rsa.message.data,
cookie->output_array[0],
@@ -1105,7 +1106,7 @@ qat_asym_collect_response(struct rte_crypto_op *op,
case RTE_CRYPTO_ASYM_XFORM_MODINV:
return modinv_collect(asym_op, cookie, xform);
case RTE_CRYPTO_ASYM_XFORM_RSA:
- return rsa_collect(asym_op, cookie);
+ return rsa_collect(asym_op, cookie, xform);
case RTE_CRYPTO_ASYM_XFORM_ECDSA:
return ecdsa_collect(asym_op, cookie);
case RTE_CRYPTO_ASYM_XFORM_ECPM:
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index 7ae2c6c007..c7a78b41de 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -926,31 +926,7 @@ prepare_rsa_op(void)
__rte_crypto_op_reset(env.op, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
asym = env.op->asym;
- asym->rsa.padding.type = info.interim_info.rsa_data.padding;
- asym->rsa.padding.hash = info.interim_info.rsa_data.auth;
-
if (env.digest) {
- if (asym->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
- int b_len = 0;
- uint8_t b[32];
-
- b_len = get_hash_oid(asym->rsa.padding.hash, b);
- if (b_len < 0) {
- RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n",
- asym->rsa.padding.hash);
- return -EINVAL;
- }
-
- if (b_len) {
- msg.len = env.digest_len + b_len;
- msg.val = rte_zmalloc(NULL, msg.len, 0);
- rte_memcpy(msg.val, b, b_len);
- rte_memcpy(msg.val + b_len, env.digest, env.digest_len);
- rte_free(env.digest);
- env.digest = msg.val;
- env.digest_len = msg.len;
- }
- }
msg.val = env.digest;
msg.len = env.digest_len;
} else {
@@ -1536,6 +1512,34 @@ prepare_rsa_xform(struct rte_crypto_asym_xform *xform)
xform->rsa.e.length = vec.rsa.e.len;
xform->rsa.n.data = vec.rsa.n.val;
xform->rsa.n.length = vec.rsa.n.len;
+
+ xform->rsa.padding.type = info.interim_info.rsa_data.padding;
+ xform->rsa.padding.hash = info.interim_info.rsa_data.auth;
+ if (env.digest) {
+ if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ struct fips_val msg;
+ int b_len = 0;
+ uint8_t b[32];
+
+ b_len = get_hash_oid(xform->rsa.padding.hash, b);
+ if (b_len < 0) {
+ RTE_LOG(ERR, USER1, "Failed to get digest info for hash %d\n",
+ xform->rsa.padding.hash);
+ return -EINVAL;
+ }
+
+ if (b_len) {
+ msg.len = env.digest_len + b_len;
+ msg.val = rte_zmalloc(NULL, msg.len, 0);
+ rte_memcpy(msg.val, b, b_len);
+ rte_memcpy(msg.val + b_len, env.digest, env.digest_len);
+ rte_free(env.digest);
+ env.digest = msg.val;
+ env.digest_len = msg.len;
+ }
+ }
+ }
+
return 0;
}
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index c33be3b155..398b6514e3 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -312,6 +312,9 @@ struct rte_crypto_rsa_xform {
struct rte_crypto_rsa_priv_key_qt qt;
/**< qt - Private key in quintuple format */
};
+
+ struct rte_crypto_rsa_padding padding;
+ /**< RSA padding information */
};
/**
@@ -447,9 +450,6 @@ struct rte_crypto_rsa_op_param {
* This could be validated and overwritten by the PMD
* with the signature length.
*/
-
- struct rte_crypto_rsa_padding padding;
- /**< RSA padding information */
};
/**
--
2.21.0
* RE: [PATCH v2 2/2] cryptodev: move RSA padding information into xform
2024-10-04 6:11 ` [PATCH v2 2/2] cryptodev: move RSA padding information into xform Gowrishankar Muthukrishnan
@ 2024-10-09 15:23 ` Akhil Goyal
2024-10-09 16:16 ` Kusztal, ArkadiuszX
0 siblings, 1 reply; 16+ messages in thread
From: Akhil Goyal @ 2024-10-09 15:23 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, arkadiuszx.kusztal, Fan Zhang,
Anoob Joseph, Ankur Dwivedi, Tejasree Kondoj, Kai Ji,
Brian Dooley
Cc: bruce.richardson, Jerin Jacob, jack.bond-preston, david.marchand,
hemant.agrawal, pablo.de.lara.guarch, fiona.trahe,
declan.doherty, matan, ruifeng.wang, abhinandan.gujjar,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, lee.daly
Hi Arek,
Any objections on this patch?
> Subject: [PATCH v2 2/2] cryptodev: move RSA padding information into xform
>
> RSA padding information could be a xform entity rather than part of
> the crypto op, as it is associated with the hashing algorithm used for
> the entire crypto session, where this algorithm is used in the message
> digest itself. Even in the virtio standard spec, this info is
> associated with the asymmetric session creation. Hence, move this
> information from the crypto op into the xform structure.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
* RE: [PATCH v2 2/2] cryptodev: move RSA padding information into xform
2024-10-09 15:23 ` Akhil Goyal
@ 2024-10-09 16:16 ` Kusztal, ArkadiuszX
2024-10-09 19:45 ` Akhil Goyal
0 siblings, 1 reply; 16+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-10-09 16:16 UTC (permalink / raw)
To: Akhil Goyal, Gowrishankar Muthukrishnan, dev, Fan Zhang,
Anoob Joseph, Ankur Dwivedi, Tejasree Kondoj, Ji, Kai, Dooley,
Brian
Cc: Richardson, Bruce, Jerin Jacob, jack.bond-preston, Marchand,
David, hemant.agrawal, De Lara Guarch, Pablo, Trahe, Fiona,
Doherty, Declan, matan, ruifeng.wang, Gujjar, Abhinandan S,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, Daly, Lee
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, October 9, 2024 5:23 PM
> To: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>;
> dev@dpdk.org; Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; Fan Zhang
> <fanzhang.oss@gmail.com>; Anoob Joseph <anoobj@marvell.com>; Ankur
> Dwivedi <adwivedi@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>;
> Ji, Kai <kai.ji@intel.com>; Dooley, Brian <brian.dooley@intel.com>
> Cc: Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
> <jerinj@marvell.com>; jack.bond-preston@foss.arm.com; Marchand, David
> <david.marchand@redhat.com>; hemant.agrawal@nxp.com; De Lara Guarch,
> Pablo <pablo.de.lara.guarch@intel.com>; Trahe, Fiona
> <fiona.trahe@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> matan@nvidia.com; ruifeng.wang@arm.com; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; maxime.coquelin@redhat.com;
> chenbox@nvidia.com; sunilprakashrao.uttarwar@amd.com;
> andrew.boyer@amd.com; ajit.khaparde@broadcom.com;
> raveendra.padasalagi@broadcom.com; vikas.gupta@broadcom.com;
> zhangfei.gao@linaro.org; g.singh@nxp.com; jianjay.zhou@huawei.com; Daly,
> Lee <lee.daly@intel.com>
> Subject: RE: [PATCH v2 2/2] cryptodev: move RSA padding information into
> xform
>
> Hi Arek
> Any objections on this patch?
>
> > Subject: [PATCH v2 2/2] cryptodev: move RSA padding information into
> > xform
> >
> > RSA padding information could be a xform entity rather than part of
> > the crypto op, as it is associated with the hashing algorithm used
> > for the entire crypto session, where this algorithm is used in the
> > message digest itself. Even in the virtio standard spec, this info is
> > associated with the asymmetric session creation. Hence, move this
> > information from the crypto op into the xform structure.
> >
> > Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
* RE: [PATCH v2 2/2] cryptodev: move RSA padding information into xform
2024-10-09 16:16 ` Kusztal, ArkadiuszX
@ 2024-10-09 19:45 ` Akhil Goyal
0 siblings, 0 replies; 16+ messages in thread
From: Akhil Goyal @ 2024-10-09 19:45 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, Gowrishankar Muthukrishnan, dev, Fan Zhang,
Anoob Joseph, Ankur Dwivedi, Tejasree Kondoj, Ji, Kai, Dooley,
Brian
Cc: Richardson, Bruce, Jerin Jacob, jack.bond-preston, Marchand,
David, hemant.agrawal, De Lara Guarch, Pablo, Trahe, Fiona,
Doherty, Declan, matan, ruifeng.wang, Gujjar, Abhinandan S,
maxime.coquelin, chenbox, sunilprakashrao.uttarwar, andrew.boyer,
ajit.khaparde, raveendra.padasalagi, vikas.gupta, zhangfei.gao,
g.singh, jianjay.zhou, Daly, Lee
> Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>
Series Acked-by: Akhil Goyal <gakhil@marvell.com>
Applied to dpdk-next-crypto
Updated release notes and removed associated deprecation notices.
Thanks.