* [PATCH v6 0/3] add ec points to sm2 op
@ 2024-10-22 19:05 Arkadiusz Kusztal
  2024-10-22 19:05 ` [PATCH v6 1/3] cryptodev: " Arkadiusz Kusztal
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Arkadiusz Kusztal @ 2024-10-22 19:05 UTC (permalink / raw)
  To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal

When a PMD cannot support the full SM2 process, but only the elliptic
curve computation, additional fields are needed to handle such a case.

v2:
- rebased against the 24.11 code
v3:
- added feature flag
- added QAT patches
- added test patches
v4:
- replaced feature flag with capability
- split API patches
v5:
- rebased
- clarified usage of the partial flag
v6:
- removed already applied patch 1
- added ABI release notes comment
- removed camel case
- added flag reference

Arkadiusz Kusztal (3):
  cryptodev: add ec points to sm2 op
  crypto/qat: add sm2 encryption/decryption function
  app/test: add test sm2 C1/Kp test cases

 app/test/test_cryptodev_asym.c              | 138 ++++++++++++++++-
 app/test/test_cryptodev_sm2_test_vectors.h  | 112 +++++++++++++-
 doc/guides/cryptodevs/features/qat.ini      |   1 +
 doc/guides/rel_notes/release_24_11.rst      |   7 +
 .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h |   3 +
 drivers/common/qat/qat_adf/qat_pke.h        |  20 +++
 drivers/crypto/qat/qat_asym.c               | 140 +++++++++++++++++-
 lib/cryptodev/rte_crypto_asym.h             |  56 +++++--
 8 files changed, 453 insertions(+), 24 deletions(-)

-- 
2.17.1
* [PATCH v6 1/3] cryptodev: add ec points to sm2 op 2024-10-22 19:05 [PATCH v6 0/3] add ec points to sm2 op Arkadiusz Kusztal @ 2024-10-22 19:05 ` Arkadiusz Kusztal 2024-10-22 19:05 ` [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal ` (2 subsequent siblings) 3 siblings, 0 replies; 8+ messages in thread From: Arkadiusz Kusztal @ 2024-10-22 19:05 UTC (permalink / raw) To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal In the case when PMD cannot support the full process of the SM2, but elliptic curve computation only, additional fields are needed to handle such a case. Points C1, kP therefore were added to the SM2 crypto operation struct. Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> --- doc/guides/rel_notes/release_24_11.rst | 3 ++ lib/cryptodev/rte_crypto_asym.h | 56 +++++++++++++++++++------- 2 files changed, 45 insertions(+), 14 deletions(-) diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index fa4822d928..0f91dae987 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -406,6 +406,9 @@ ABI Changes added new structure ``rte_node_xstats`` to ``rte_node_register`` and added ``xstat_off`` to ``rte_node``. +* cryptodev: The ``rte_crypto_sm2_op_param`` struct member to hold ciphertext + is changed to union data type. This change is to support partial SM2 calculation. + Known Issues ------------ diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h index aeb46e688e..f095cebcd0 100644 --- a/lib/cryptodev/rte_crypto_asym.h +++ b/lib/cryptodev/rte_crypto_asym.h @@ -646,6 +646,8 @@ enum rte_crypto_sm2_op_capa { /**< Random number generator supported in SM2 ops. */ RTE_CRYPTO_SM2_PH, /**< Prehash message before crypto op. */ + RTE_CRYPTO_SM2_PARTIAL, + /**< Calculate elliptic curve points only. */ }; /** @@ -673,20 +675,46 @@ struct rte_crypto_sm2_op_param { * will be overwritten by the PMD with the decrypted length. */ - rte_crypto_param cipher; - /**< - * Pointer to input data - * - to be decrypted for SM2 private decrypt. - * - * Pointer to output data - * - for SM2 public encrypt. - * In this case the underlying array should have been allocated - * with enough memory to hold ciphertext output (at least X bytes - * for prime field curve of N bytes and for message M bytes, - * where X = (C1 || C2 || C3) and computed based on SM2 RFC as - * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will - * be overwritten by the PMD with the encrypted length. - */ + union { + rte_crypto_param cipher; + /**< + * Pointer to input data + * - to be decrypted for SM2 private decrypt. + * + * Pointer to output data + * - for SM2 public encrypt. + * In this case the underlying array should have been allocated + * with enough memory to hold ciphertext output (at least X bytes + * for prime field curve of N bytes and for message M bytes, + * where X = (C1 || C2 || C3) and computed based on SM2 RFC as + * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will + * be overwritten by the PMD with the encrypted length. + */ + struct { + struct rte_crypto_ec_point c1; + /**< + * This field is used only when PMD does not support the full + * process of the SM2 encryption/decryption, but the elliptic + * curve part only. + * + * In the case of encryption, it is an output - point C1 = (x1,y1). + * In the case of decryption, if is an input - point C1 = (x1,y1). + * + * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag. 
+ */ + struct rte_crypto_ec_point kp; + /**< + * This field is used only when PMD does not support the full + * process of the SM2 encryption/decryption, but the elliptic + * curve part only. + * + * It is an output in the encryption case, it is a point + * [k]P = (x2,y2). + * + * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag. + */ + }; + }; rte_crypto_uint id; /**< The SM2 id used by signer and verifier. */ -- 2.17.1 ^ permalink raw reply [flat|nested] 8+ messages in thread
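With the RTE_CRYPTO_SM2_PARTIAL capability the device returns only the two
elliptic curve points of SM2 encryption, C1 = [k]G and kP = [k]P_B = (x2, y2);
the application then finishes the KDF, XOR and hash steps itself. Below is a
minimal sketch of how an application might fill the new union for that path.
It mirrors the flow of the test code in patch 3/3; the helper name and the
32-byte buffers are illustrative assumptions, not part of the patch.

/* Sketch only: request the EC-only part of SM2 encryption.
 * Assumes the session was created with an SM2 xform that carries the
 * recipient public key (xform.ec.q) and RTE_CRYPTO_EC_GROUP_SM2. */
#include <rte_crypto_asym.h>

static void
fill_sm2_partial_encrypt(struct rte_crypto_asym_op *asym,
		rte_crypto_param k,	/* per-message random scalar k */
		uint8_t c1_x[32], uint8_t c1_y[32],
		uint8_t x2[32], uint8_t y2[32])
{
	asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
	asym->sm2.k = k;

	/* Output points written back by the PMD:
	 * C1 = [k]G and kP = [k]P_B = (x2, y2). */
	asym->sm2.c1.x.data = c1_x;
	asym->sm2.c1.y.data = c1_y;
	asym->sm2.kp.x.data = x2;
	asym->sm2.kp.y.data = y2;

	/* After dequeue the application derives t = KDF(x2 || y2, mlen),
	 * then computes C2 = M xor t and C3 = SM3(x2 || M || y2) itself. */
}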
* [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function 2024-10-22 19:05 [PATCH v6 0/3] add ec points to sm2 op Arkadiusz Kusztal 2024-10-22 19:05 ` [PATCH v6 1/3] cryptodev: " Arkadiusz Kusztal @ 2024-10-22 19:05 ` Arkadiusz Kusztal 2024-10-23 0:46 ` Stephen Hemminger 2024-10-23 7:55 ` [EXTERNAL] " Akhil Goyal 2024-10-22 19:06 ` [PATCH v6 3/3] app/test: add test sm2 C1/Kp test cases Arkadiusz Kusztal 2024-10-23 1:19 ` [PATCH v6 0/3] add ec points to sm2 op Stephen Hemminger 3 siblings, 2 replies; 8+ messages in thread From: Arkadiusz Kusztal @ 2024-10-22 19:05 UTC (permalink / raw) To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal This commit adds SM2 elliptic curve based asymmetric encryption and decryption to the Intel QuickAssist Technology PMD. Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> --- doc/guides/cryptodevs/features/qat.ini | 1 + doc/guides/rel_notes/release_24_11.rst | 4 + .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h | 3 + drivers/common/qat/qat_adf/qat_pke.h | 20 +++ drivers/crypto/qat/qat_asym.c | 140 +++++++++++++++++- 5 files changed, 162 insertions(+), 6 deletions(-) diff --git a/doc/guides/cryptodevs/features/qat.ini b/doc/guides/cryptodevs/features/qat.ini index f41d29158f..219dd1e011 100644 --- a/doc/guides/cryptodevs/features/qat.ini +++ b/doc/guides/cryptodevs/features/qat.ini @@ -71,6 +71,7 @@ ZUC EIA3 = Y AES CMAC (128) = Y SM3 = Y SM3 HMAC = Y +SM2 = Y ; ; Supported AEAD algorithms of the 'qat' crypto driver. diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 0f91dae987..2404753e54 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -247,6 +247,10 @@ New Features Added ability for node to advertise and update multiple xstat counters, that can be retrieved using ``rte_graph_cluster_stats_get``. +* **Updated the QuickAssist Technology (QAT) Crypto PMD.** + + * Added SM2 encryption and decryption algorithms. 
+ Removed Items ------------- diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h index 630c6e1a9b..aa49612ca1 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h @@ -1542,6 +1542,9 @@ icp_qat_fw_mmp_ecdsa_verify_gfp_521_input::in in @endlink * @li no output parameters */ +#define PKE_ECSM2_ENCRYPTION 0x25221720 +#define PKE_ECSM2_DECRYPTION 0x201716e6 + #define PKE_LIVENESS 0x00000001 /**< Functionality ID for PKE_LIVENESS * @li 0 input parameter(s) diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h index f88932a275..ac051e965d 100644 --- a/drivers/common/qat/qat_adf/qat_pke.h +++ b/drivers/common/qat/qat_adf/qat_pke.h @@ -334,4 +334,24 @@ get_sm2_ecdsa_verify_function(void) return qat_function; } +static struct qat_asym_function +get_sm2_encryption_function(void) +{ + struct qat_asym_function qat_function = { + PKE_ECSM2_ENCRYPTION, 32 + }; + + return qat_function; +} + +static struct qat_asym_function +get_sm2_decryption_function(void) +{ + struct qat_asym_function qat_function = { + PKE_ECSM2_DECRYPTION, 32 + }; + + return qat_function; +} + #endif diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 9e97582e22..991684135c 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -933,6 +933,15 @@ sm2_ecdsa_sign_set_input(struct icp_qat_fw_pke_request *qat_req, qat_req->input_param_count = 3; qat_req->output_param_count = 2; + HEXDUMP("SM2 K test", asym_op->sm2.k.data, + cookie->alg_bytesize); + HEXDUMP("SM2 K", cookie->input_array[0], + cookie->alg_bytesize); + HEXDUMP("SM2 msg", cookie->input_array[1], + cookie->alg_bytesize); + HEXDUMP("SM2 pkey", cookie->input_array[2], + cookie->alg_bytesize); + return RTE_CRYPTO_OP_STATUS_SUCCESS; } @@ -983,6 +992,114 @@ sm2_ecdsa_sign_collect(struct rte_crypto_asym_op *asym_op, return RTE_CRYPTO_OP_STATUS_SUCCESS; } +static int +sm2_encryption_set_input(struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + const struct rte_crypto_asym_op *asym_op, + const struct rte_crypto_asym_xform *xform) +{ + const struct qat_asym_function qat_function = + get_sm2_encryption_function(); + const uint32_t qat_func_alignsize = + qat_function.bytesize; + + SET_PKE_LN(asym_op->sm2.k, qat_func_alignsize, 0); + SET_PKE_LN(xform->ec.q.x, qat_func_alignsize, 1); + SET_PKE_LN(xform->ec.q.y, qat_func_alignsize, 2); + + cookie->alg_bytesize = qat_function.bytesize; + cookie->qat_func_alignsize = qat_function.bytesize; + qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id; + qat_req->input_param_count = 3; + qat_req->output_param_count = 4; + + HEXDUMP("SM2 K", cookie->input_array[0], + qat_func_alignsize); + HEXDUMP("SM2 Q.x", cookie->input_array[1], + qat_func_alignsize); + HEXDUMP("SM2 Q.y", cookie->input_array[2], + qat_func_alignsize); + + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} + +static uint8_t +sm2_encryption_collect(struct rte_crypto_asym_op *asym_op, + const struct qat_asym_op_cookie *cookie) +{ + uint32_t alg_bytesize = cookie->alg_bytesize; + + rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize); + rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize); + rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize); + rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize); + asym_op->sm2.c1.x.length = alg_bytesize; + asym_op->sm2.c1.y.length = 
alg_bytesize; + asym_op->sm2.kp.x.length = alg_bytesize; + asym_op->sm2.kp.y.length = alg_bytesize; + + HEXDUMP("c1[x1]", cookie->output_array[0], + alg_bytesize); + HEXDUMP("c1[y]", cookie->output_array[1], + alg_bytesize); + HEXDUMP("kp[x]", cookie->output_array[2], + alg_bytesize); + HEXDUMP("kp[y]", cookie->output_array[3], + alg_bytesize); + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} + + +static int +sm2_decryption_set_input(struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + const struct rte_crypto_asym_op *asym_op, + const struct rte_crypto_asym_xform *xform) +{ + const struct qat_asym_function qat_function = + get_sm2_decryption_function(); + const uint32_t qat_func_alignsize = + qat_function.bytesize; + + SET_PKE_LN(xform->ec.pkey, qat_func_alignsize, 0); + SET_PKE_LN(asym_op->sm2.c1.x, qat_func_alignsize, 1); + SET_PKE_LN(asym_op->sm2.c1.y, qat_func_alignsize, 2); + + cookie->alg_bytesize = qat_function.bytesize; + cookie->qat_func_alignsize = qat_function.bytesize; + qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id; + qat_req->input_param_count = 3; + qat_req->output_param_count = 2; + + HEXDUMP("d", cookie->input_array[0], + qat_func_alignsize); + HEXDUMP("c1[x]", cookie->input_array[1], + qat_func_alignsize); + HEXDUMP("c1[y]", cookie->input_array[2], + qat_func_alignsize); + + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} + + +static uint8_t +sm2_decryption_collect(struct rte_crypto_asym_op *asym_op, + const struct qat_asym_op_cookie *cookie) +{ + uint32_t alg_bytesize = cookie->alg_bytesize; + + rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[0], alg_bytesize); + rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[1], alg_bytesize); + asym_op->sm2.kp.x.length = alg_bytesize; + asym_op->sm2.kp.y.length = alg_bytesize; + + HEXDUMP("kp[x]", cookie->output_array[0], + alg_bytesize); + HEXDUMP("kp[y]", cookie->output_array[1], + alg_bytesize); + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} + static int asym_set_input(struct icp_qat_fw_pke_request *qat_req, struct qat_asym_op_cookie *cookie, @@ -1015,14 +1132,20 @@ asym_set_input(struct icp_qat_fw_pke_request *qat_req, asym_op, xform); } case RTE_CRYPTO_ASYM_XFORM_SM2: - if (asym_op->sm2.op_type == - RTE_CRYPTO_ASYM_OP_VERIFY) { + if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) { + return sm2_encryption_set_input(qat_req, cookie, + asym_op, xform); + } else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) { + return sm2_decryption_set_input(qat_req, cookie, + asym_op, xform); + } else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) { return sm2_ecdsa_verify_set_input(qat_req, cookie, asym_op, xform); - } else { + } else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { return sm2_ecdsa_sign_set_input(qat_req, cookie, asym_op, xform); } + break; default: QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform"); return -EINVAL; @@ -1114,7 +1237,13 @@ qat_asym_collect_response(struct rte_crypto_op *op, case RTE_CRYPTO_ASYM_XFORM_ECDH: return ecdh_collect(asym_op, cookie); case RTE_CRYPTO_ASYM_XFORM_SM2: - return sm2_ecdsa_sign_collect(asym_op, cookie); + if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) + return sm2_encryption_collect(asym_op, cookie); + else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) + return sm2_decryption_collect(asym_op, cookie); + else + return sm2_ecdsa_sign_collect(asym_op, cookie); + default: QAT_LOG(ERR, "Not supported xform type"); return RTE_CRYPTO_OP_STATUS_ERROR; @@ -1386,9 +1515,8 @@ qat_asym_session_configure(struct 
rte_cryptodev *dev __rte_unused, case RTE_CRYPTO_ASYM_XFORM_ECDSA: case RTE_CRYPTO_ASYM_XFORM_ECPM: case RTE_CRYPTO_ASYM_XFORM_ECDH: - session_set_ec(qat_session, xform); - break; case RTE_CRYPTO_ASYM_XFORM_SM2: + session_set_ec(qat_session, xform); break; default: ret = -ENOTSUP; -- 2.17.1 ^ permalink raw reply [flat|nested] 8+ messages in thread
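The partial mode is advertised through the SM2 asymmetric capability rather
than a feature flag, so an application is expected to probe for it before
deciding whether to offload only the EC computation to QAT. A sketch of such
a check, modelled on the one used in the test patch (3/3); the helper name is
an illustrative assumption:

/* Sketch: does the device support the EC-only (partial) SM2 encrypt path? */
#include <rte_cryptodev.h>

static int
dev_supports_partial_sm2_encrypt(uint8_t dev_id)
{
	struct rte_cryptodev_asym_capability_idx idx = {
		.type = RTE_CRYPTO_ASYM_XFORM_SM2,
	};
	const struct rte_cryptodev_asymmetric_xform_capability *capa;

	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
	if (capa == NULL)
		return 0;

	return rte_cryptodev_asym_xform_capability_check_opcap(capa,
			RTE_CRYPTO_ASYM_OP_ENCRYPT, RTE_CRYPTO_SM2_PARTIAL);
}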
* Re: [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function
  2024-10-22 19:05 ` [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal
@ 2024-10-23  0:46 ` Stephen Hemminger
  2024-10-31 17:24   ` Kusztal, ArkadiuszX
  0 siblings, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2024-10-23 0:46 UTC (permalink / raw)
  To: Arkadiusz Kusztal; +Cc: dev, gakhil, brian.dooley

On Tue, 22 Oct 2024 20:05:59 +0100
Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> wrote:

> +	uint32_t alg_bytesize = cookie->alg_bytesize;
> +
> +	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
> +	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
> +	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
> +	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);

Since the copy is small and not in the fast path, there is no reason to
use rte_memcpy(). The memcpy() function is just as fast, is inlined, and
has more checking from gcc, coverity and ASAN, so it is preferred.
* RE: [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function
  2024-10-23  0:46 ` Stephen Hemminger
@ 2024-10-31 17:24   ` Kusztal, ArkadiuszX
  0 siblings, 0 replies; 8+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-10-31 17:24 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, gakhil, Dooley, Brian

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, October 23, 2024 2:47 AM
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>
> Cc: dev@dpdk.org; gakhil@marvell.com; Dooley, Brian <brian.dooley@intel.com>
> Subject: Re: [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function
>
> On Tue, 22 Oct 2024 20:05:59 +0100
> Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> wrote:
>
> > +	uint32_t alg_bytesize = cookie->alg_bytesize;
> > +
> > +	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);
>
> Since the copy is small and not in the fast path, there is no reason to use
> rte_memcpy().
> The memcpy() function is as fast inlines and has more checking from gcc,
> coverity, ASAN so it is preferred.

This function is called by the crypto_dequeue_op_burst function, and in some
other cases (like RSA) there may be 1024 bytes per copy operation.
If you think that a regular memcpy will do no worse there, I may change it.
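For reference, the change discussed above would amount to something like the
following in sm2_encryption_collect(); this is only a sketch of the suggestion
(plain memcpy() from <string.h>), not a revision of the patch:

	memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
	memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
	memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
	memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);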
* RE: [EXTERNAL] [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function 2024-10-22 19:05 ` [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal 2024-10-23 0:46 ` Stephen Hemminger @ 2024-10-23 7:55 ` Akhil Goyal 1 sibling, 0 replies; 8+ messages in thread From: Akhil Goyal @ 2024-10-23 7:55 UTC (permalink / raw) To: Arkadiusz Kusztal, dev; +Cc: brian.dooley > This commit adds SM2 elliptic curve based asymmetric > encryption and decryption to the Intel QuickAssist > Technology PMD. > > Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> > --- > doc/guides/cryptodevs/features/qat.ini | 1 + > doc/guides/rel_notes/release_24_11.rst | 4 + > .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h | 3 + > drivers/common/qat/qat_adf/qat_pke.h | 20 +++ > drivers/crypto/qat/qat_asym.c | 140 +++++++++++++++++- > 5 files changed, 162 insertions(+), 6 deletions(-) > > diff --git a/doc/guides/cryptodevs/features/qat.ini > b/doc/guides/cryptodevs/features/qat.ini > index f41d29158f..219dd1e011 100644 > --- a/doc/guides/cryptodevs/features/qat.ini > +++ b/doc/guides/cryptodevs/features/qat.ini > @@ -71,6 +71,7 @@ ZUC EIA3 = Y > AES CMAC (128) = Y > SM3 = Y > SM3 HMAC = Y > +SM2 = Y SM2 is asymmetric algo. Please move it in asymmetric ones. > > ; > ; Supported AEAD algorithms of the 'qat' crypto driver. > diff --git a/doc/guides/rel_notes/release_24_11.rst > b/doc/guides/rel_notes/release_24_11.rst > index 0f91dae987..2404753e54 100644 > --- a/doc/guides/rel_notes/release_24_11.rst > +++ b/doc/guides/rel_notes/release_24_11.rst > @@ -247,6 +247,10 @@ New Features > Added ability for node to advertise and update multiple xstat counters, > that can be retrieved using ``rte_graph_cluster_stats_get``. > > +* **Updated the QuickAssist Technology (QAT) Crypto PMD.** > + > + * Added SM2 encryption and decryption algorithms. 
> + > > Removed Items > ------------- > diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h > b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h > index 630c6e1a9b..aa49612ca1 100644 > --- a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h > +++ b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h > @@ -1542,6 +1542,9 @@ icp_qat_fw_mmp_ecdsa_verify_gfp_521_input::in in > @endlink > * @li no output parameters > */ > > +#define PKE_ECSM2_ENCRYPTION 0x25221720 > +#define PKE_ECSM2_DECRYPTION 0x201716e6 > + > #define PKE_LIVENESS 0x00000001 > /**< Functionality ID for PKE_LIVENESS > * @li 0 input parameter(s) > diff --git a/drivers/common/qat/qat_adf/qat_pke.h > b/drivers/common/qat/qat_adf/qat_pke.h > index f88932a275..ac051e965d 100644 > --- a/drivers/common/qat/qat_adf/qat_pke.h > +++ b/drivers/common/qat/qat_adf/qat_pke.h > @@ -334,4 +334,24 @@ get_sm2_ecdsa_verify_function(void) > return qat_function; > } > > +static struct qat_asym_function > +get_sm2_encryption_function(void) > +{ > + struct qat_asym_function qat_function = { > + PKE_ECSM2_ENCRYPTION, 32 > + }; > + > + return qat_function; > +} > + > +static struct qat_asym_function > +get_sm2_decryption_function(void) > +{ > + struct qat_asym_function qat_function = { > + PKE_ECSM2_DECRYPTION, 32 > + }; > + > + return qat_function; > +} > + > #endif > diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c > index 9e97582e22..991684135c 100644 > --- a/drivers/crypto/qat/qat_asym.c > +++ b/drivers/crypto/qat/qat_asym.c > @@ -933,6 +933,15 @@ sm2_ecdsa_sign_set_input(struct > icp_qat_fw_pke_request *qat_req, > qat_req->input_param_count = 3; > qat_req->output_param_count = 2; > > + HEXDUMP("SM2 K test", asym_op->sm2.k.data, > + cookie->alg_bytesize); > + HEXDUMP("SM2 K", cookie->input_array[0], > + cookie->alg_bytesize); > + HEXDUMP("SM2 msg", cookie->input_array[1], > + cookie->alg_bytesize); > + HEXDUMP("SM2 pkey", cookie->input_array[2], > + cookie->alg_bytesize); > + > return RTE_CRYPTO_OP_STATUS_SUCCESS; > } > > @@ -983,6 +992,114 @@ sm2_ecdsa_sign_collect(struct rte_crypto_asym_op > *asym_op, > return RTE_CRYPTO_OP_STATUS_SUCCESS; > } > > +static int > +sm2_encryption_set_input(struct icp_qat_fw_pke_request *qat_req, > + struct qat_asym_op_cookie *cookie, > + const struct rte_crypto_asym_op *asym_op, > + const struct rte_crypto_asym_xform *xform) > +{ > + const struct qat_asym_function qat_function = > + get_sm2_encryption_function(); > + const uint32_t qat_func_alignsize = > + qat_function.bytesize; > + > + SET_PKE_LN(asym_op->sm2.k, qat_func_alignsize, 0); > + SET_PKE_LN(xform->ec.q.x, qat_func_alignsize, 1); > + SET_PKE_LN(xform->ec.q.y, qat_func_alignsize, 2); > + > + cookie->alg_bytesize = qat_function.bytesize; > + cookie->qat_func_alignsize = qat_function.bytesize; > + qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id; > + qat_req->input_param_count = 3; > + qat_req->output_param_count = 4; > + > + HEXDUMP("SM2 K", cookie->input_array[0], > + qat_func_alignsize); > + HEXDUMP("SM2 Q.x", cookie->input_array[1], > + qat_func_alignsize); > + HEXDUMP("SM2 Q.y", cookie->input_array[2], > + qat_func_alignsize); > + > + return RTE_CRYPTO_OP_STATUS_SUCCESS; > +} > + > +static uint8_t > +sm2_encryption_collect(struct rte_crypto_asym_op *asym_op, > + const struct qat_asym_op_cookie *cookie) > +{ > + uint32_t alg_bytesize = cookie->alg_bytesize; > + > + rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], > alg_bytesize); > + rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], > alg_bytesize); 
> + rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], > alg_bytesize); > + rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], > alg_bytesize); > + asym_op->sm2.c1.x.length = alg_bytesize; > + asym_op->sm2.c1.y.length = alg_bytesize; > + asym_op->sm2.kp.x.length = alg_bytesize; > + asym_op->sm2.kp.y.length = alg_bytesize; > + > + HEXDUMP("c1[x1]", cookie->output_array[0], > + alg_bytesize); > + HEXDUMP("c1[y]", cookie->output_array[1], > + alg_bytesize); > + HEXDUMP("kp[x]", cookie->output_array[2], > + alg_bytesize); > + HEXDUMP("kp[y]", cookie->output_array[3], > + alg_bytesize); > + return RTE_CRYPTO_OP_STATUS_SUCCESS; > +} > + > + > +static int > +sm2_decryption_set_input(struct icp_qat_fw_pke_request *qat_req, > + struct qat_asym_op_cookie *cookie, > + const struct rte_crypto_asym_op *asym_op, > + const struct rte_crypto_asym_xform *xform) > +{ > + const struct qat_asym_function qat_function = > + get_sm2_decryption_function(); > + const uint32_t qat_func_alignsize = > + qat_function.bytesize; > + > + SET_PKE_LN(xform->ec.pkey, qat_func_alignsize, 0); > + SET_PKE_LN(asym_op->sm2.c1.x, qat_func_alignsize, 1); > + SET_PKE_LN(asym_op->sm2.c1.y, qat_func_alignsize, 2); > + > + cookie->alg_bytesize = qat_function.bytesize; > + cookie->qat_func_alignsize = qat_function.bytesize; > + qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id; > + qat_req->input_param_count = 3; > + qat_req->output_param_count = 2; > + > + HEXDUMP("d", cookie->input_array[0], > + qat_func_alignsize); > + HEXDUMP("c1[x]", cookie->input_array[1], > + qat_func_alignsize); > + HEXDUMP("c1[y]", cookie->input_array[2], > + qat_func_alignsize); > + > + return RTE_CRYPTO_OP_STATUS_SUCCESS; > +} > + > + > +static uint8_t > +sm2_decryption_collect(struct rte_crypto_asym_op *asym_op, > + const struct qat_asym_op_cookie *cookie) > +{ > + uint32_t alg_bytesize = cookie->alg_bytesize; > + > + rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[0], > alg_bytesize); > + rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[1], > alg_bytesize); > + asym_op->sm2.kp.x.length = alg_bytesize; > + asym_op->sm2.kp.y.length = alg_bytesize; > + > + HEXDUMP("kp[x]", cookie->output_array[0], > + alg_bytesize); > + HEXDUMP("kp[y]", cookie->output_array[1], > + alg_bytesize); > + return RTE_CRYPTO_OP_STATUS_SUCCESS; > +} > + > static int > asym_set_input(struct icp_qat_fw_pke_request *qat_req, > struct qat_asym_op_cookie *cookie, > @@ -1015,14 +1132,20 @@ asym_set_input(struct icp_qat_fw_pke_request > *qat_req, > asym_op, xform); > } > case RTE_CRYPTO_ASYM_XFORM_SM2: > - if (asym_op->sm2.op_type == > - RTE_CRYPTO_ASYM_OP_VERIFY) { > + if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_ENCRYPT) { > + return sm2_encryption_set_input(qat_req, cookie, > + asym_op, xform); > + } else if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_DECRYPT) { > + return sm2_decryption_set_input(qat_req, cookie, > + asym_op, xform); > + } else if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_VERIFY) { > return sm2_ecdsa_verify_set_input(qat_req, cookie, > asym_op, xform); > - } else { > + } else if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_SIGN) { > return sm2_ecdsa_sign_set_input(qat_req, cookie, > asym_op, xform); > } > + break; > default: > QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform"); > return -EINVAL; > @@ -1114,7 +1237,13 @@ qat_asym_collect_response(struct rte_crypto_op > *op, > case RTE_CRYPTO_ASYM_XFORM_ECDH: > return ecdh_collect(asym_op, cookie); > case RTE_CRYPTO_ASYM_XFORM_SM2: > - return 
sm2_ecdsa_sign_collect(asym_op, cookie); > + if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_ENCRYPT) > + return sm2_encryption_collect(asym_op, cookie); > + else if (asym_op->sm2.op_type == > RTE_CRYPTO_ASYM_OP_DECRYPT) > + return sm2_decryption_collect(asym_op, cookie); > + else > + return sm2_ecdsa_sign_collect(asym_op, cookie); > + > default: > QAT_LOG(ERR, "Not supported xform type"); > return RTE_CRYPTO_OP_STATUS_ERROR; > @@ -1386,9 +1515,8 @@ qat_asym_session_configure(struct rte_cryptodev > *dev __rte_unused, > case RTE_CRYPTO_ASYM_XFORM_ECDSA: > case RTE_CRYPTO_ASYM_XFORM_ECPM: > case RTE_CRYPTO_ASYM_XFORM_ECDH: > - session_set_ec(qat_session, xform); > - break; > case RTE_CRYPTO_ASYM_XFORM_SM2: > + session_set_ec(qat_session, xform); > break; > default: > ret = -ENOTSUP; > -- > 2.17.1 ^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v6 3/3] app/test: add test sm2 C1/Kp test cases 2024-10-22 19:05 [PATCH v6 0/3] add ec points to sm2 op Arkadiusz Kusztal 2024-10-22 19:05 ` [PATCH v6 1/3] cryptodev: " Arkadiusz Kusztal 2024-10-22 19:05 ` [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal @ 2024-10-22 19:06 ` Arkadiusz Kusztal 2024-10-23 1:19 ` [PATCH v6 0/3] add ec points to sm2 op Stephen Hemminger 3 siblings, 0 replies; 8+ messages in thread From: Arkadiusz Kusztal @ 2024-10-22 19:06 UTC (permalink / raw) To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal This commit adds tests cases to be used when C1 or kP elliptic curve points need to be computed. Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> --- app/test/test_cryptodev_asym.c | 138 ++++++++++++++++++++- app/test/test_cryptodev_sm2_test_vectors.h | 112 ++++++++++++++++- 2 files changed, 246 insertions(+), 4 deletions(-) diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c index e2f74702ad..5f27fb3917 100644 --- a/app/test/test_cryptodev_asym.c +++ b/app/test/test_cryptodev_asym.c @@ -2663,6 +2663,8 @@ test_sm2_sign(void) asym_op->sm2.k.data = input_params.k.data; asym_op->sm2.k.length = input_params.k.length; } + asym_op->sm2.k.data = input_params.k.data; + asym_op->sm2.k.length = input_params.k.length; /* Init out buf */ asym_op->sm2.r.data = output_buf_r; @@ -3515,7 +3517,7 @@ static int send_one(void) ticks++; if (ticks >= DEQ_TIMEOUT) { RTE_LOG(ERR, USER1, - "line %u FAILED: Cannot dequeue the crypto op on device %d", + "line %u FAILED: Cannot dequeue the crypto op on device, timeout %d", __LINE__, params->valid_devs[0]); return TEST_FAILED; } @@ -3822,6 +3824,132 @@ kat_rsa_decrypt_crt(const void *data) return 0; } +static int +test_sm2_partial_encryption(const void *data) +{ + struct rte_crypto_asym_xform xform = { 0 }; + const uint8_t dev_id = params->valid_devs[0]; + const struct crypto_testsuite_sm2_params *test_vector = data; + uint8_t result_C1_x1[TEST_DATA_SIZE] = { 0 }; + uint8_t result_C1_y1[TEST_DATA_SIZE] = { 0 }; + uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 }; + uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 }; + struct rte_cryptodev_asym_capability_idx idx; + const struct rte_cryptodev_asymmetric_xform_capability *capa; + + idx.type = RTE_CRYPTO_ASYM_XFORM_SM2; + capa = rte_cryptodev_asym_capability_get(dev_id, &idx); + if (capa == NULL) + return TEST_SKIPPED; + if (!rte_cryptodev_asym_xform_capability_check_opcap(capa, + RTE_CRYPTO_ASYM_OP_ENCRYPT, RTE_CRYPTO_SM2_PARTIAL)) { + return TEST_SKIPPED; + } + + xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2; + xform.ec.curve_id = RTE_CRYPTO_EC_GROUP_SM2; + xform.ec.q = test_vector->pubkey; + self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT; + self->op->asym->sm2.k = test_vector->k; + if (rte_cryptodev_asym_session_create(dev_id, &xform, + params->session_mpool, &self->sess) < 0) { + RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed", + __LINE__); + return TEST_FAILED; + } + rte_crypto_op_attach_asym_session(self->op, self->sess); + + self->op->asym->sm2.c1.x.data = result_C1_x1; + self->op->asym->sm2.c1.y.data = result_C1_y1; + self->op->asym->sm2.kp.x.data = result_kP_x1; + self->op->asym->sm2.kp.y.data = result_kP_y1; + TEST_ASSERT_SUCCESS(send_one(), + "Failed to process crypto op"); + + debug_hexdump(stdout, "C1[x]", self->op->asym->sm2.c1.x.data, + self->op->asym->sm2.c1.x.length); + debug_hexdump(stdout, "C1[y]", self->op->asym->sm2.c1.y.data, + self->op->asym->sm2.c1.y.length); + 
debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data, + self->op->asym->sm2.kp.x.length); + debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data, + self->op->asym->sm2.kp.y.length); + + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.x.data, + self->op->asym->sm2.c1.x.data, + test_vector->C1.x.length, + "Incorrect value of C1[x]\n"); + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.y.data, + self->op->asym->sm2.c1.y.data, + test_vector->C1.y.length, + "Incorrect value of C1[y]\n"); + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data, + self->op->asym->sm2.kp.x.data, + test_vector->kP.x.length, + "Incorrect value of kP[x]\n"); + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data, + self->op->asym->sm2.kp.y.data, + test_vector->kP.y.length, + "Incorrect value of kP[y]\n"); + + return TEST_SUCCESS; +} + +static int +test_sm2_partial_decryption(const void *data) +{ + struct rte_crypto_asym_xform xform = {}; + const uint8_t dev_id = params->valid_devs[0]; + const struct crypto_testsuite_sm2_params *test_vector = data; + uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 }; + uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 }; + struct rte_cryptodev_asym_capability_idx idx; + const struct rte_cryptodev_asymmetric_xform_capability *capa; + + idx.type = RTE_CRYPTO_ASYM_XFORM_SM2; + capa = rte_cryptodev_asym_capability_get(dev_id, &idx); + if (capa == NULL) + return TEST_SKIPPED; + if (!rte_cryptodev_asym_xform_capability_check_opcap(capa, + RTE_CRYPTO_ASYM_OP_DECRYPT, RTE_CRYPTO_SM2_PARTIAL)) { + return TEST_SKIPPED; + } + + xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2; + xform.ec.pkey = test_vector->pkey; + self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT; + self->op->asym->sm2.c1 = test_vector->C1; + + if (rte_cryptodev_asym_session_create(dev_id, &xform, + params->session_mpool, &self->sess) < 0) { + RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed", + __LINE__); + return TEST_FAILED; + } + rte_crypto_op_attach_asym_session(self->op, self->sess); + + self->op->asym->sm2.kp.x.data = result_kP_x1; + self->op->asym->sm2.kp.y.data = result_kP_y1; + TEST_ASSERT_SUCCESS(send_one(), + "Failed to process crypto op"); + + debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data, + self->op->asym->sm2.c1.x.length); + debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data, + self->op->asym->sm2.c1.y.length); + + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data, + self->op->asym->sm2.kp.x.data, + test_vector->kP.x.length, + "Incorrect value of kP[x]\n"); + TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data, + self->op->asym->sm2.kp.y.data, + test_vector->kP.y.length, + "Incorrect value of kP[y]\n"); + + return 0; +} + static struct unit_test_suite cryptodev_openssl_asym_testsuite = { .suite_name = "Crypto Device OPENSSL ASYM Unit Test Suite", .setup = testsuite_setup, @@ -3886,6 +4014,14 @@ static struct unit_test_suite cryptodev_qat_asym_testsuite = { .setup = testsuite_setup, .teardown = testsuite_teardown, .unit_test_cases = { + TEST_CASE_NAMED_WITH_DATA( + "SM2 encryption - test case 1", + ut_setup_asym, ut_teardown_asym, + test_sm2_partial_encryption, &sm2_enc_hw_t1), + TEST_CASE_NAMED_WITH_DATA( + "SM2 decryption - test case 1", + ut_setup_asym, ut_teardown_asym, + test_sm2_partial_decryption, &sm2_enc_hw_t1), TEST_CASE_NAMED_WITH_DATA( "Modular Exponentiation (mod=128, base=20, exp=3, res=128)", ut_setup_asym, ut_teardown_asym, diff --git a/app/test/test_cryptodev_sm2_test_vectors.h b/app/test/test_cryptodev_sm2_test_vectors.h index 41f5f7074a..92f7e77671 
100644 --- a/app/test/test_cryptodev_sm2_test_vectors.h +++ b/app/test/test_cryptodev_sm2_test_vectors.h @@ -8,19 +8,125 @@ #include "rte_crypto_asym.h" struct crypto_testsuite_sm2_params { - rte_crypto_param pubkey_qx; - rte_crypto_param pubkey_qy; + union { + struct { + rte_crypto_param pubkey_qx; + rte_crypto_param pubkey_qy; + }; + struct rte_crypto_ec_point pubkey; + }; rte_crypto_param pkey; rte_crypto_param k; rte_crypto_param sign_r; rte_crypto_param sign_s; rte_crypto_param id; - rte_crypto_param cipher; + union { + rte_crypto_param cipher; + struct { + struct rte_crypto_ec_point C1; + struct rte_crypto_ec_point kP; + }; + }; rte_crypto_param message; rte_crypto_param digest; int curve; }; +uint8_t sm2_enc_pub_x_t1[] = { + 0x26, 0xf1, 0xf3, 0xef, 0x12, 0x27, 0x85, 0xd1, + 0x7d, 0x38, 0x70, 0xc2, 0x43, 0x46, 0x50, 0x36, + 0x3f, 0xdf, 0x4b, 0x2f, 0x45, 0x0e, 0x8e, 0xd1, + 0xb6, 0x0f, 0xdc, 0x1f, 0xc6, 0xf0, 0x19, 0xab +}; +uint8_t sm2_enc_pub_y_t1[] = { + 0xd9, 0x19, 0x8b, 0xdb, 0xef, 0xa5, 0x84, 0x76, + 0xec, 0x82, 0x25, 0x12, 0x5b, 0x8c, 0xe3, 0xe1, + 0x0a, 0x10, 0x0d, 0xc6, 0x97, 0x6c, 0xc1, 0x89, + 0xd9, 0x6d, 0xa6, 0x88, 0x9e, 0xbc, 0xd3, 0x7a +}; +uint8_t sm2_k_t1[] = { + 0x12, 0x34, 0x56, 0x78, 0xB9, 0x6E, 0x5A, 0xF7, + 0x0B, 0xD4, 0x80, 0xB4, 0x72, 0x40, 0x9A, 0x9A, + 0x32, 0x72, 0x57, 0xF1, 0xEB, 0xB7, 0x3F, 0x5B, + 0x07, 0x33, 0x54, 0xB2, 0x48, 0x66, 0x85, 0x63 +}; + +uint8_t sm2_C1_x_t1[] = { + 0x15, 0xf6, 0xb7, 0x49, 0x00, 0x39, 0x73, 0x9d, + 0x5b, 0xb3, 0xd3, 0xe9, 0x1d, 0xe4, 0xc8, 0xbd, + 0x08, 0xe3, 0x6a, 0x22, 0xff, 0x1a, 0xbf, 0xdc, + 0x75, 0x6b, 0x12, 0x85, 0x81, 0xc5, 0x8b, 0xcf +}; + +uint8_t sm2_C1_y_t1[] = { + 0x6a, 0x92, 0xd4, 0xd8, 0x13, 0xec, 0x8f, 0x9a, + 0x9d, 0xbe, 0x51, 0x47, 0x6f, 0x54, 0xc5, 0x41, + 0x98, 0xf5, 0x5f, 0x83, 0xce, 0x1c, 0x18, 0x1a, + 0x48, 0xbd, 0xeb, 0x38, 0x13, 0x67, 0x0d, 0x06 +}; + +uint8_t sm2_kP_x_t1[] = { + 0x6b, 0xfb, 0x9a, 0xcb, 0xc6, 0xb6, 0x36, 0x31, + 0x0f, 0xd1, 0xdd, 0x9c, 0x9f, 0x17, 0x5f, 0x3f, + 0x68, 0x13, 0x96, 0xd2, 0x54, 0x5b, 0xa6, 0x19, + 0x78, 0x1f, 0x87, 0x3d, 0x81, 0xc3, 0x21, 0x01 +}; + +uint8_t sm2_kP_y_t1[] = { + 0xa4, 0x08, 0xf3, 0x74, 0x35, 0x51, 0x8c, 0x81, + 0x06, 0x4c, 0x8f, 0x31, 0x49, 0xe3, 0x5b, 0x4d, + 0xfc, 0x3d, 0x19, 0xac, 0x7d, 0x07, 0xd0, 0x9a, + 0x99, 0x5a, 0x25, 0x16, 0x66, 0xff, 0x41, 0x3c +}; + +uint8_t sm2_kP_d_t1[] = { + 0x6F, 0xCB, 0xA2, 0xEF, 0x9A, 0xE0, 0xAB, 0x90, + 0x2B, 0xC3, 0xBD, 0xE3, 0xFF, 0x91, 0x5D, 0x44, + 0xBA, 0x4C, 0xC7, 0x8F, 0x88, 0xE2, 0xF8, 0xE7, + 0xF8, 0x99, 0x6D, 0x3B, 0x8C, 0xCE, 0xED, 0xEE +}; + +struct crypto_testsuite_sm2_params sm2_enc_hw_t1 = { + .k = { + .data = sm2_k_t1, + .length = sizeof(sm2_k_t1) + }, + .pubkey = { + .x = { + .data = sm2_enc_pub_x_t1, + .length = sizeof(sm2_enc_pub_x_t1) + }, + .y = { + .data = sm2_enc_pub_y_t1, + .length = sizeof(sm2_enc_pub_y_t1) + } + }, + .C1 = { + .x = { + .data = sm2_C1_x_t1, + .length = sizeof(sm2_C1_x_t1) + }, + .y = { + .data = sm2_C1_y_t1, + .length = sizeof(sm2_C1_y_t1) + } + }, + .kP = { + .x = { + .data = sm2_kP_x_t1, + .length = sizeof(sm2_kP_x_t1) + }, + .y = { + .data = sm2_kP_y_t1, + .length = sizeof(sm2_kP_y_t1) + } + }, + .pkey = { + .data = sm2_kP_d_t1, + .length = sizeof(sm2_kP_d_t1) + } +}; + static uint8_t fp256_pkey[] = { 0x77, 0x84, 0x35, 0x65, 0x4c, 0x7a, 0x6d, 0xb1, 0x1e, 0x63, 0x0b, 0x41, 0x97, 0x36, 0x04, 0xf4, -- 2.17.1 ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v6 0/3] add ec points to sm2 op
  2024-10-22 19:05 [PATCH v6 0/3] add ec points to sm2 op Arkadiusz Kusztal
                   ` (2 preceding siblings ...)
  2024-10-22 19:06 ` [PATCH v6 3/3] app/test: add test sm2 C1/Kp test cases Arkadiusz Kusztal
@ 2024-10-23  1:19 ` Stephen Hemminger
  3 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2024-10-23 1:19 UTC (permalink / raw)
  To: Arkadiusz Kusztal; +Cc: dev, gakhil, brian.dooley

On Tue, 22 Oct 2024 20:05:57 +0100
Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com> wrote:

> In the case when PMD cannot support the full process of the SM2,
> but elliptic curve computation only, additional fields
> are needed to handle such a case.
>
> v2:
> - rebased against the 24.11 code
> v3:
> - added feature flag
> - added QAT patches
> - added test patches
> v4:
> - replaced feature flag with capability
> - split API patches
> v5:
> - rebased
> - clarified usage of the partial flag
> v6:
> - removed already applied patch 1
> - added ABI relase notes comment
> - removed camel case
> - added flag reference
>
> Arkadiusz Kusztal (3):
>   cryptodev: add ec points to sm2 op
>   crypto/qat: add sm2 encryption/decryption function
>   app/test: add test sm2 C1/Kp test cases
>
>  app/test/test_cryptodev_asym.c              | 138 ++++++++++++++++-
>  app/test/test_cryptodev_sm2_test_vectors.h  | 112 +++++++++++++-
>  doc/guides/cryptodevs/features/qat.ini      |   1 +
>  doc/guides/rel_notes/release_24_11.rst      |   7 +
>  .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h |   3 +
>  drivers/common/qat/qat_adf/qat_pke.h        |  20 +++
>  drivers/crypto/qat/qat_asym.c               | 140 +++++++++++++++++-
>  lib/cryptodev/rte_crypto_asym.h             |  56 +++++--
>  8 files changed, 453 insertions(+), 24 deletions(-)

There is an issue with the new feature missing in some of the doc templates:

$ ninja -C build doc
ninja: Entering directory `build'
[4/6] Generating doc/api/dts/dts_api_html with a custom command
Warning generate_overview_table(): Unknown feature 'SM2' in 'qat.ini'
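The warning comes from the feature-matrix generator: the patch adds
"SM2 = Y" to the symmetric (authentication) table of qat.ini, while the
documentation templates only know SM2 as an asymmetric algorithm. The likely
fix, in line with Akhil's earlier review comment, is to move the entry into
the asymmetric section; a sketch (the exact section name follows
doc/guides/cryptodevs/features/default.ini and is an assumption here):

; doc/guides/cryptodevs/features/qat.ini -- sketch of the expected fix
[Asymmetric]
...
SM2 = Y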