DPDK patches and discussions
* [PATCH v8 0/3] add ec points to sm2 op
@ 2024-11-04  9:36 Arkadiusz Kusztal
  2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Arkadiusz Kusztal @ 2024-11-04  9:36 UTC (permalink / raw)
  To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal

When a PMD cannot support the full SM2 process, but only
the elliptic curve computation, additional fields
are needed to handle such a case.
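
For reference, a rough application-side sketch of the partial
encryption flow (names taken from the patches below; buffer
allocation, enqueue/dequeue and error handling are assumed to be
handled by the caller):

	op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
	op->asym->sm2.k = k;              /* per-message random scalar */
	op->asym->sm2.c1.x.data = c1_x;   /* output: C1 = [k]G */
	op->asym->sm2.c1.y.data = c1_y;
	op->asym->sm2.kp.x.data = kp_x;   /* output: [k]P = (x2, y2) */
	op->asym->sm2.kp.y.data = kp_y;
	rte_crypto_op_attach_asym_session(op, sess);
	/* enqueue/dequeue, then derive C2 and C3 from (x2, y2) in software */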

v2:
- rebased against the 24.11 code
v3:
- added feature flag
- added QAT patches
- added test patches
v4:
- replaced feature flag with capability
- split API patches
v5:
- rebased
- clarified usage of the partial flag
v6:
- removed already applied patch 1
- added ABI release notes comment
- removed camel case
- added flag reference
v7:
- removed SM2 from auth features; in asym it was added in the SM2 ECDSA patch
v8:
- fixed an openssl test issue
- added the partial_flag to QAT capabilities

Arkadiusz Kusztal (3):
  cryptodev: add ec points to sm2 op
  crypto/qat: add sm2 encryption/decryption function
  app/test: add test sm2 C1/Kp test cases

 app/test/test_cryptodev_asym.c                | 134 +++++++++++++++++
 app/test/test_cryptodev_sm2_test_vectors.h    | 112 +++++++++++++-
 doc/guides/rel_notes/release_24_11.rst        |   7 +
 .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h   |   3 +
 drivers/common/qat/qat_adf/qat_pke.h          |  20 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  72 ++++++++-
 drivers/crypto/qat/qat_asym.c                 | 140 +++++++++++++++++-
 lib/cryptodev/rte_crypto_asym.h               |  56 +++++--
 8 files changed, 520 insertions(+), 24 deletions(-)

-- 
2.34.1



* [PATCH v8 1/3] cryptodev: add ec points to sm2 op
  2024-11-04  9:36 [PATCH v8 0/3] add ec points to sm2 op Arkadiusz Kusztal
@ 2024-11-04  9:36 ` Arkadiusz Kusztal
  2024-11-06 10:08   ` [EXTERNAL] " Akhil Goyal
  2025-08-22 11:13   ` [dpdk-dev v9 " Kai Ji
  2024-11-04  9:36 ` [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal
  2024-11-04  9:36 ` [PATCH v8 3/3] app/test: add test sm2 C1/Kp test cases Arkadiusz Kusztal
  2 siblings, 2 replies; 10+ messages in thread
From: Arkadiusz Kusztal @ 2024-11-04  9:36 UTC (permalink / raw)
  To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal

When a PMD cannot support the full SM2 process, but only
the elliptic curve computation, additional fields
are needed to handle such a case.

Points C1 and kP were therefore added to the SM2 crypto operation struct.

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
---
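With the RTE_CRYPTO_SM2_PARTIAL flow the PMD returns only C1 and
[k]P = (x2,y2); the remaining SM2 steps are assumed to be completed in
software. A sketch, where sm3_kdf() and sm3_hash() are hypothetical
helpers (not DPDK APIs) and msg/msg_len is the plaintext:

	uint8_t t[msg_len], c2[msg_len], c3[32];

	sm3_kdf(t, kp_x, kp_y, msg_len);         /* t = KDF(x2 || y2, klen) */
	for (size_t i = 0; i < msg_len; i++)
		c2[i] = msg[i] ^ t[i];           /* C2 = M xor t */
	sm3_hash(c3, kp_x, msg, msg_len, kp_y);  /* C3 = SM3(x2 || M || y2) */
	/* full ciphertext: C1 || C2 || C3, as described for the cipher field */
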
 doc/guides/rel_notes/release_24_11.rst |  3 ++
 lib/cryptodev/rte_crypto_asym.h        | 56 +++++++++++++++++++-------
 2 files changed, 45 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 53a5ffebe5..ee9e2cea3c 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -413,6 +413,9 @@ ABI Changes
   added new structure ``rte_node_xstats`` to ``rte_node_register`` and
   added ``xstat_off`` to ``rte_node``.
 
+* cryptodev: The ``rte_crypto_sm2_op_param`` struct member that holds the ciphertext
+  is changed to a union data type. This change is to support partial SM2 calculation.
+
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index aeb46e688e..f095cebcd0 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -646,6 +646,8 @@ enum rte_crypto_sm2_op_capa {
 	/**< Random number generator supported in SM2 ops. */
 	RTE_CRYPTO_SM2_PH,
 	/**< Prehash message before crypto op. */
+	RTE_CRYPTO_SM2_PARTIAL,
+	/**< Calculate elliptic curve points only. */
 };
 
 /**
@@ -673,20 +675,46 @@ struct rte_crypto_sm2_op_param {
 	 * will be overwritten by the PMD with the decrypted length.
 	 */
 
-	rte_crypto_param cipher;
-	/**<
-	 * Pointer to input data
-	 * - to be decrypted for SM2 private decrypt.
-	 *
-	 * Pointer to output data
-	 * - for SM2 public encrypt.
-	 * In this case the underlying array should have been allocated
-	 * with enough memory to hold ciphertext output (at least X bytes
-	 * for prime field curve of N bytes and for message M bytes,
-	 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
-	 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
-	 * be overwritten by the PMD with the encrypted length.
-	 */
+	union {
+		rte_crypto_param cipher;
+		/**<
+		 * Pointer to input data
+		 * - to be decrypted for SM2 private decrypt.
+		 *
+		 * Pointer to output data
+		 * - for SM2 public encrypt.
+		 * In this case the underlying array should have been allocated
+		 * with enough memory to hold ciphertext output (at least X bytes
+		 * for prime field curve of N bytes and for message M bytes,
+		 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
+		 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
+		 * be overwritten by the PMD with the encrypted length.
+		 */
+		struct {
+			struct rte_crypto_ec_point c1;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * In the case of encryption, it is an output - point C1 = (x1,y1).
+			 * In the case of decryption, it is an input - point C1 = (x1,y1).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+			struct rte_crypto_ec_point kp;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * It is an output in the encryption case, it is a point
+			 * [k]P = (x2,y2).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+		};
+	};
 
 	rte_crypto_uint id;
 	/**< The SM2 id used by signer and verifier. */
-- 
2.34.1



* [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function
  2024-11-04  9:36 [PATCH v8 0/3] add ec points to sm2 op Arkadiusz Kusztal
  2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
@ 2024-11-04  9:36 ` Arkadiusz Kusztal
  2024-11-06 10:12   ` [EXTERNAL] " Akhil Goyal
  2024-11-04  9:36 ` [PATCH v8 3/3] app/test: add test sm2 C1/Kp test cases Arkadiusz Kusztal
  2 siblings, 1 reply; 10+ messages in thread
From: Arkadiusz Kusztal @ 2024-11-04  9:36 UTC (permalink / raw)
  To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal

This commit adds SM2 elliptic curve based asymmetric
encryption and decryption to the Intel QuickAssist
Technology PMD.

Depends-on: patch-147900 ("[v2] crypto/qat: fix ecdsa session handling")

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
---
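As a usage note, an application is expected to probe for the
partial-SM2 capability before relying on this path; a minimal check,
mirroring what the test app in patch 3/3 does, might look like:

	struct rte_cryptodev_asym_capability_idx idx = {
		.type = RTE_CRYPTO_ASYM_XFORM_SM2,
	};
	const struct rte_cryptodev_asymmetric_xform_capability *capa;

	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
	if (capa == NULL ||
	    !rte_cryptodev_asym_xform_capability_check_opcap(capa,
			RTE_CRYPTO_ASYM_OP_ENCRYPT, RTE_CRYPTO_SM2_PARTIAL))
		return -ENOTSUP; /* fall back to a full-SM2 implementation */
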
 doc/guides/rel_notes/release_24_11.rst        |   4 +
 .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h   |   3 +
 drivers/common/qat/qat_adf/qat_pke.h          |  20 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  72 ++++++++-
 drivers/crypto/qat/qat_asym.c                 | 140 +++++++++++++++++-
 5 files changed, 232 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index ee9e2cea3c..0b2f1e75d3 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -251,6 +251,10 @@ New Features
   Added ability for node to advertise and update multiple xstat counters,
   that can be retrieved using ``rte_graph_cluster_stats_get``.
 
+* **Updated the QuickAssist Technology (QAT) Crypto PMD.**
+
+  * Added SM2 encryption and decryption algorithms.
+
 
 Removed Items
 -------------
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
index 630c6e1a9b..aa49612ca1 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
@@ -1542,6 +1542,9 @@ icp_qat_fw_mmp_ecdsa_verify_gfp_521_input::in in @endlink
  * @li no output parameters
  */
 
+#define PKE_ECSM2_ENCRYPTION 0x25221720
+#define PKE_ECSM2_DECRYPTION 0x201716e6
+
 #define PKE_LIVENESS 0x00000001
 /**< Functionality ID for PKE_LIVENESS
  * @li 0 input parameter(s)
diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h
index f88932a275..ac051e965d 100644
--- a/drivers/common/qat/qat_adf/qat_pke.h
+++ b/drivers/common/qat/qat_adf/qat_pke.h
@@ -334,4 +334,24 @@ get_sm2_ecdsa_verify_function(void)
 	return qat_function;
 }
 
+static struct qat_asym_function
+get_sm2_encryption_function(void)
+{
+	struct qat_asym_function qat_function = {
+		PKE_ECSM2_ENCRYPTION, 32
+	};
+
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_sm2_decryption_function(void)
+{
+	struct qat_asym_function qat_function = {
+		PKE_ECSM2_DECRYPTION, 32
+	};
+
+	return qat_function;
+}
+
 #endif
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index 6a5d6e78b9..493f33229e 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -115,6 +115,38 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen4[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	{	/* SM2 */
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
+		{.asym = {
+			.xform_capa = {
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2,
+				.op_types =
+				((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+				 (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+				 (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+				 (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+				.op_capa = {
+					[RTE_CRYPTO_ASYM_OP_ENCRYPT] = (1 << RTE_CRYPTO_SM2_PARTIAL),
+					[RTE_CRYPTO_ASYM_OP_DECRYPT] = (1 << RTE_CRYPTO_SM2_PARTIAL),
+				},
+			},
+		}
+		}
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
 static int
 qat_sym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
 			const char *capa_memz_name,
@@ -157,6 +189,44 @@ qat_sym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
 	return 0;
 }
 
+static int
+qat_asym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
+			const char *capa_memz_name,
+			const uint16_t __rte_unused slice_map)
+{
+	const uint32_t size = sizeof(qat_asym_crypto_caps_gen4);
+	uint32_t i;
+
+	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
+	if (internals->capa_mz == NULL) {
+		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+				size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities");
+			return -1;
+		}
+	}
+
+	struct rte_cryptodev_capabilities *addr =
+			(struct rte_cryptodev_capabilities *)
+				internals->capa_mz->addr;
+	const struct rte_cryptodev_capabilities *capabilities =
+		qat_asym_crypto_caps_gen4;
+	const uint32_t capa_num =
+		size / sizeof(struct rte_cryptodev_capabilities);
+	uint32_t curr_capa = 0;
+
+	for (i = 0; i < capa_num; i++) {
+		memcpy(addr + curr_capa, capabilities + i,
+			sizeof(struct rte_cryptodev_capabilities));
+		curr_capa++;
+	}
+	internals->qat_dev_capabilities = internals->capa_mz->addr;
+
+	return 0;
+}
+
 static __rte_always_inline void
 enqueue_one_aead_job_gen4(struct qat_sym_session *ctx,
 	struct icp_qat_fw_la_bulk_req *req,
@@ -543,7 +613,7 @@ RTE_INIT(qat_asym_crypto_gen4_init)
 			&qat_asym_crypto_ops_gen1;
 	qat_asym_gen_dev_ops[QAT_VQAT].get_capabilities =
 		qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities =
-			qat_asym_crypto_cap_get_gen1;
+			qat_asym_crypto_cap_get_gen4;
 	qat_asym_gen_dev_ops[QAT_VQAT].get_feature_flags =
 		qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags =
 			qat_asym_crypto_feature_flags_get_gen1;
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index dfc52d1286..2459b02215 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -933,6 +933,15 @@ sm2_ecdsa_sign_set_input(struct icp_qat_fw_pke_request *qat_req,
 	qat_req->input_param_count = 3;
 	qat_req->output_param_count = 2;
 
+	HEXDUMP("SM2 K test", asym_op->sm2.k.data,
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 K", cookie->input_array[0],
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 msg", cookie->input_array[1],
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 pkey", cookie->input_array[2],
+		cookie->alg_bytesize);
+
 	return RTE_CRYPTO_OP_STATUS_SUCCESS;
 }
 
@@ -983,6 +992,114 @@ sm2_ecdsa_sign_collect(struct rte_crypto_asym_op *asym_op,
 	return RTE_CRYPTO_OP_STATUS_SUCCESS;
 }
 
+static int
+sm2_encryption_set_input(struct icp_qat_fw_pke_request *qat_req,
+	struct qat_asym_op_cookie *cookie,
+	const struct rte_crypto_asym_op *asym_op,
+	const struct rte_crypto_asym_xform *xform)
+{
+	const struct qat_asym_function qat_function =
+		get_sm2_encryption_function();
+	const uint32_t qat_func_alignsize =
+		qat_function.bytesize;
+
+	SET_PKE_LN(asym_op->sm2.k, qat_func_alignsize, 0);
+	SET_PKE_LN(xform->ec.q.x, qat_func_alignsize, 1);
+	SET_PKE_LN(xform->ec.q.y, qat_func_alignsize, 2);
+
+	cookie->alg_bytesize = qat_function.bytesize;
+	cookie->qat_func_alignsize = qat_function.bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
+	qat_req->input_param_count = 3;
+	qat_req->output_param_count = 4;
+
+	HEXDUMP("SM2 K", cookie->input_array[0],
+		qat_func_alignsize);
+	HEXDUMP("SM2 Q.x", cookie->input_array[1],
+		qat_func_alignsize);
+	HEXDUMP("SM2 Q.y", cookie->input_array[2],
+		qat_func_alignsize);
+
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+static uint8_t
+sm2_encryption_collect(struct rte_crypto_asym_op *asym_op,
+		const struct qat_asym_op_cookie *cookie)
+{
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
+	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);
+	asym_op->sm2.c1.x.length = alg_bytesize;
+	asym_op->sm2.c1.y.length = alg_bytesize;
+	asym_op->sm2.kp.x.length = alg_bytesize;
+	asym_op->sm2.kp.y.length = alg_bytesize;
+
+	HEXDUMP("c1[x1]", cookie->output_array[0],
+		alg_bytesize);
+	HEXDUMP("c1[y]", cookie->output_array[1],
+		alg_bytesize);
+	HEXDUMP("kp[x]", cookie->output_array[2],
+		alg_bytesize);
+	HEXDUMP("kp[y]", cookie->output_array[3],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static int
+sm2_decryption_set_input(struct icp_qat_fw_pke_request *qat_req,
+	struct qat_asym_op_cookie *cookie,
+	const struct rte_crypto_asym_op *asym_op,
+	const struct rte_crypto_asym_xform *xform)
+{
+	const struct qat_asym_function qat_function =
+		get_sm2_decryption_function();
+	const uint32_t qat_func_alignsize =
+		qat_function.bytesize;
+
+	SET_PKE_LN(xform->ec.pkey, qat_func_alignsize, 0);
+	SET_PKE_LN(asym_op->sm2.c1.x, qat_func_alignsize, 1);
+	SET_PKE_LN(asym_op->sm2.c1.y, qat_func_alignsize, 2);
+
+	cookie->alg_bytesize = qat_function.bytesize;
+	cookie->qat_func_alignsize = qat_function.bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
+	qat_req->input_param_count = 3;
+	qat_req->output_param_count = 2;
+
+	HEXDUMP("d", cookie->input_array[0],
+		qat_func_alignsize);
+	HEXDUMP("c1[x]", cookie->input_array[1],
+		qat_func_alignsize);
+	HEXDUMP("c1[y]", cookie->input_array[2],
+		qat_func_alignsize);
+
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static uint8_t
+sm2_decryption_collect(struct rte_crypto_asym_op *asym_op,
+		const struct qat_asym_op_cookie *cookie)
+{
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[0], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[1], alg_bytesize);
+	asym_op->sm2.kp.x.length = alg_bytesize;
+	asym_op->sm2.kp.y.length = alg_bytesize;
+
+	HEXDUMP("kp[x]", cookie->output_array[0],
+		alg_bytesize);
+	HEXDUMP("kp[y]", cookie->output_array[1],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
 static int
 asym_set_input(struct icp_qat_fw_pke_request *qat_req,
 		struct qat_asym_op_cookie *cookie,
@@ -1015,14 +1132,20 @@ asym_set_input(struct icp_qat_fw_pke_request *qat_req,
 				asym_op, xform);
 		}
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
-		if (asym_op->sm2.op_type ==
-			RTE_CRYPTO_ASYM_OP_VERIFY) {
+		if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+			return sm2_encryption_set_input(qat_req, cookie,
+				asym_op, xform);
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+			return sm2_decryption_set_input(qat_req, cookie,
+				asym_op, xform);
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
 			return sm2_ecdsa_verify_set_input(qat_req, cookie,
 						asym_op, xform);
-		} else {
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
 			return sm2_ecdsa_sign_set_input(qat_req, cookie,
 					asym_op, xform);
 		}
+		break;
 	default:
 		QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform");
 		return -EINVAL;
@@ -1114,7 +1237,13 @@ qat_asym_collect_response(struct rte_crypto_op *op,
 	case RTE_CRYPTO_ASYM_XFORM_ECDH:
 		return ecdh_collect(asym_op, cookie);
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
-		return sm2_ecdsa_sign_collect(asym_op, cookie);
+		if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT)
+			return sm2_encryption_collect(asym_op, cookie);
+		else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT)
+			return sm2_decryption_collect(asym_op, cookie);
+		else
+			return sm2_ecdsa_sign_collect(asym_op, cookie);
+
 	default:
 		QAT_LOG(ERR, "Not supported xform type");
 		return  RTE_CRYPTO_OP_STATUS_ERROR;
@@ -1423,9 +1552,8 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
 	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
 	case RTE_CRYPTO_ASYM_XFORM_ECPM:
 	case RTE_CRYPTO_ASYM_XFORM_ECDH:
-		ret = session_set_ec(qat_session, xform);
-		break;
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
+		ret = session_set_ec(qat_session, xform);
 		break;
 	default:
 		ret = -ENOTSUP;
-- 
2.34.1



* [PATCH v8 3/3] app/test: add test sm2 C1/Kp test cases
  2024-11-04  9:36 [PATCH v8 0/3] add ec points to sm2 op Arkadiusz Kusztal
  2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
  2024-11-04  9:36 ` [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal
@ 2024-11-04  9:36 ` Arkadiusz Kusztal
  2 siblings, 0 replies; 10+ messages in thread
From: Arkadiusz Kusztal @ 2024-11-04  9:36 UTC (permalink / raw)
  To: dev; +Cc: gakhil, brian.dooley, Arkadiusz Kusztal

This commit adds test cases to be used when the C1 or kP elliptic
curve points need to be computed.

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
---
 app/test/test_cryptodev_asym.c             | 134 +++++++++++++++++++++
 app/test/test_cryptodev_sm2_test_vectors.h | 112 ++++++++++++++++-
 2 files changed, 243 insertions(+), 3 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index e2f74702ad..cd73ec088e 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -3822,6 +3822,132 @@ kat_rsa_decrypt_crt(const void *data)
 	return 0;
 }
 
+static int
+test_sm2_partial_encryption(const void *data)
+{
+	struct rte_crypto_asym_xform xform = { 0 };
+	const uint8_t dev_id = params->valid_devs[0];
+	const struct crypto_testsuite_sm2_params *test_vector = data;
+	uint8_t result_C1_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_C1_y1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 };
+	struct rte_cryptodev_asym_capability_idx idx;
+	const struct rte_cryptodev_asymmetric_xform_capability *capa;
+
+	idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
+	if (capa == NULL)
+		return TEST_SKIPPED;
+	if (!rte_cryptodev_asym_xform_capability_check_opcap(capa,
+			RTE_CRYPTO_ASYM_OP_ENCRYPT, RTE_CRYPTO_SM2_PARTIAL)) {
+		return TEST_SKIPPED;
+	}
+
+	xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	xform.ec.curve_id = RTE_CRYPTO_EC_GROUP_SM2;
+	xform.ec.q = test_vector->pubkey;
+	self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+	self->op->asym->sm2.k = test_vector->k;
+	if (rte_cryptodev_asym_session_create(dev_id, &xform,
+			params->session_mpool, &self->sess) < 0) {
+		RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed",
+			__LINE__);
+		return TEST_FAILED;
+	}
+	rte_crypto_op_attach_asym_session(self->op, self->sess);
+
+	self->op->asym->sm2.c1.x.data = result_C1_x1;
+	self->op->asym->sm2.c1.y.data = result_C1_y1;
+	self->op->asym->sm2.kp.x.data = result_kP_x1;
+	self->op->asym->sm2.kp.y.data = result_kP_y1;
+	TEST_ASSERT_SUCCESS(send_one(),
+		"Failed to process crypto op");
+
+	debug_hexdump(stdout, "C1[x]", self->op->asym->sm2.c1.x.data,
+		self->op->asym->sm2.c1.x.length);
+	debug_hexdump(stdout, "C1[y]", self->op->asym->sm2.c1.y.data,
+		self->op->asym->sm2.c1.y.length);
+	debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data,
+		self->op->asym->sm2.kp.x.length);
+	debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data,
+		self->op->asym->sm2.kp.y.length);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.x.data,
+		self->op->asym->sm2.c1.x.data,
+		test_vector->C1.x.length,
+		"Incorrect value of C1[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.y.data,
+		self->op->asym->sm2.c1.y.data,
+		test_vector->C1.y.length,
+		"Incorrect value of C1[y]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data,
+		self->op->asym->sm2.kp.x.data,
+		test_vector->kP.x.length,
+		"Incorrect value of kP[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data,
+		self->op->asym->sm2.kp.y.data,
+		test_vector->kP.y.length,
+		"Incorrect value of kP[y]\n");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sm2_partial_decryption(const void *data)
+{
+	struct rte_crypto_asym_xform xform = {};
+	const uint8_t dev_id = params->valid_devs[0];
+	const struct crypto_testsuite_sm2_params *test_vector = data;
+	uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 };
+	struct rte_cryptodev_asym_capability_idx idx;
+	const struct rte_cryptodev_asymmetric_xform_capability *capa;
+
+	idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
+	if (capa == NULL)
+		return TEST_SKIPPED;
+	if (!rte_cryptodev_asym_xform_capability_check_opcap(capa,
+			RTE_CRYPTO_ASYM_OP_DECRYPT, RTE_CRYPTO_SM2_PARTIAL)) {
+		return TEST_SKIPPED;
+	}
+
+	xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	xform.ec.pkey = test_vector->pkey;
+	self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+	self->op->asym->sm2.c1 = test_vector->C1;
+
+	if (rte_cryptodev_asym_session_create(dev_id, &xform,
+			params->session_mpool, &self->sess) < 0) {
+		RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed",
+			__LINE__);
+		return TEST_FAILED;
+	}
+	rte_crypto_op_attach_asym_session(self->op, self->sess);
+
+	self->op->asym->sm2.kp.x.data = result_kP_x1;
+	self->op->asym->sm2.kp.y.data = result_kP_y1;
+	TEST_ASSERT_SUCCESS(send_one(),
+		"Failed to process crypto op");
+
+	debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data,
+		self->op->asym->sm2.c1.x.length);
+	debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data,
+		self->op->asym->sm2.c1.y.length);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data,
+		self->op->asym->sm2.kp.x.data,
+		test_vector->kP.x.length,
+		"Incorrect value of kP[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data,
+		self->op->asym->sm2.kp.y.data,
+		test_vector->kP.y.length,
+		"Incorrect value of kP[y]\n");
+
+	return 0;
+}
+
 static struct unit_test_suite cryptodev_openssl_asym_testsuite  = {
 	.suite_name = "Crypto Device OPENSSL ASYM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -3886,6 +4012,14 @@ static struct unit_test_suite cryptodev_qat_asym_testsuite  = {
 	.setup = testsuite_setup,
 	.teardown = testsuite_teardown,
 	.unit_test_cases = {
+		TEST_CASE_NAMED_WITH_DATA(
+			"SM2 encryption - test case 1",
+			ut_setup_asym, ut_teardown_asym,
+			test_sm2_partial_encryption, &sm2_enc_hw_t1),
+		TEST_CASE_NAMED_WITH_DATA(
+			"SM2 decryption - test case 1",
+			ut_setup_asym, ut_teardown_asym,
+			test_sm2_partial_decryption, &sm2_enc_hw_t1),
 		TEST_CASE_NAMED_WITH_DATA(
 			"Modular Exponentiation (mod=128, base=20, exp=3, res=128)",
 			ut_setup_asym, ut_teardown_asym,
diff --git a/app/test/test_cryptodev_sm2_test_vectors.h b/app/test/test_cryptodev_sm2_test_vectors.h
index 41f5f7074a..92f7e77671 100644
--- a/app/test/test_cryptodev_sm2_test_vectors.h
+++ b/app/test/test_cryptodev_sm2_test_vectors.h
@@ -8,19 +8,125 @@
 #include "rte_crypto_asym.h"
 
 struct crypto_testsuite_sm2_params {
-	rte_crypto_param pubkey_qx;
-	rte_crypto_param pubkey_qy;
+	union {
+		struct {
+			rte_crypto_param pubkey_qx;
+			rte_crypto_param pubkey_qy;
+		};
+		struct rte_crypto_ec_point pubkey;
+	};
 	rte_crypto_param pkey;
 	rte_crypto_param k;
 	rte_crypto_param sign_r;
 	rte_crypto_param sign_s;
 	rte_crypto_param id;
-	rte_crypto_param cipher;
+	union {
+		rte_crypto_param cipher;
+		struct {
+			struct rte_crypto_ec_point C1;
+			struct rte_crypto_ec_point kP;
+		};
+	};
 	rte_crypto_param message;
 	rte_crypto_param digest;
 	int curve;
 };
 
+uint8_t sm2_enc_pub_x_t1[] = {
+	0x26, 0xf1, 0xf3, 0xef, 0x12, 0x27, 0x85, 0xd1,
+	0x7d, 0x38, 0x70, 0xc2, 0x43, 0x46, 0x50, 0x36,
+	0x3f, 0xdf, 0x4b, 0x2f, 0x45, 0x0e, 0x8e, 0xd1,
+	0xb6, 0x0f, 0xdc, 0x1f, 0xc6, 0xf0, 0x19, 0xab
+};
+uint8_t sm2_enc_pub_y_t1[] = {
+	0xd9, 0x19, 0x8b, 0xdb, 0xef, 0xa5, 0x84, 0x76,
+	0xec, 0x82, 0x25, 0x12, 0x5b, 0x8c, 0xe3, 0xe1,
+	0x0a, 0x10, 0x0d, 0xc6, 0x97, 0x6c, 0xc1, 0x89,
+	0xd9, 0x6d, 0xa6, 0x88, 0x9e, 0xbc, 0xd3, 0x7a
+};
+uint8_t sm2_k_t1[] = {
+	0x12, 0x34, 0x56, 0x78, 0xB9, 0x6E, 0x5A, 0xF7,
+	0x0B, 0xD4, 0x80, 0xB4, 0x72, 0x40, 0x9A, 0x9A,
+	0x32, 0x72, 0x57, 0xF1, 0xEB, 0xB7, 0x3F, 0x5B,
+	0x07, 0x33, 0x54, 0xB2, 0x48, 0x66, 0x85, 0x63
+};
+
+uint8_t sm2_C1_x_t1[] = {
+	0x15, 0xf6, 0xb7, 0x49, 0x00, 0x39, 0x73, 0x9d,
+	0x5b, 0xb3, 0xd3, 0xe9, 0x1d, 0xe4, 0xc8, 0xbd,
+	0x08, 0xe3, 0x6a, 0x22, 0xff, 0x1a, 0xbf, 0xdc,
+	0x75, 0x6b, 0x12, 0x85, 0x81, 0xc5, 0x8b, 0xcf
+};
+
+uint8_t sm2_C1_y_t1[] = {
+	0x6a, 0x92, 0xd4, 0xd8, 0x13, 0xec, 0x8f, 0x9a,
+	0x9d, 0xbe, 0x51, 0x47, 0x6f, 0x54, 0xc5, 0x41,
+	0x98, 0xf5, 0x5f, 0x83, 0xce, 0x1c, 0x18, 0x1a,
+	0x48, 0xbd, 0xeb, 0x38, 0x13, 0x67, 0x0d, 0x06
+};
+
+uint8_t sm2_kP_x_t1[] = {
+	0x6b, 0xfb, 0x9a, 0xcb, 0xc6, 0xb6, 0x36, 0x31,
+	0x0f, 0xd1, 0xdd, 0x9c, 0x9f, 0x17, 0x5f, 0x3f,
+	0x68, 0x13, 0x96, 0xd2, 0x54, 0x5b, 0xa6, 0x19,
+	0x78, 0x1f, 0x87, 0x3d, 0x81, 0xc3, 0x21, 0x01
+};
+
+uint8_t sm2_kP_y_t1[] = {
+	0xa4, 0x08, 0xf3, 0x74, 0x35, 0x51, 0x8c, 0x81,
+	0x06, 0x4c, 0x8f, 0x31, 0x49, 0xe3, 0x5b, 0x4d,
+	0xfc, 0x3d, 0x19, 0xac, 0x7d, 0x07, 0xd0, 0x9a,
+	0x99, 0x5a, 0x25, 0x16, 0x66, 0xff, 0x41, 0x3c
+};
+
+uint8_t sm2_kP_d_t1[] = {
+	0x6F, 0xCB, 0xA2, 0xEF, 0x9A, 0xE0, 0xAB, 0x90,
+	0x2B, 0xC3, 0xBD, 0xE3, 0xFF, 0x91, 0x5D, 0x44,
+	0xBA, 0x4C, 0xC7, 0x8F, 0x88, 0xE2, 0xF8, 0xE7,
+	0xF8, 0x99, 0x6D, 0x3B, 0x8C, 0xCE, 0xED, 0xEE
+};
+
+struct crypto_testsuite_sm2_params sm2_enc_hw_t1 = {
+	.k = {
+		.data = sm2_k_t1,
+		.length = sizeof(sm2_k_t1)
+	},
+	.pubkey = {
+		.x = {
+			.data = sm2_enc_pub_x_t1,
+			.length = sizeof(sm2_enc_pub_x_t1)
+		},
+		.y = {
+			.data = sm2_enc_pub_y_t1,
+			.length = sizeof(sm2_enc_pub_y_t1)
+		}
+	},
+	.C1 = {
+		.x = {
+			.data = sm2_C1_x_t1,
+			.length = sizeof(sm2_C1_x_t1)
+		},
+		.y = {
+			.data = sm2_C1_y_t1,
+			.length = sizeof(sm2_C1_y_t1)
+		}
+	},
+	.kP = {
+		.x = {
+			.data = sm2_kP_x_t1,
+			.length = sizeof(sm2_kP_x_t1)
+		},
+		.y = {
+			.data = sm2_kP_y_t1,
+			.length = sizeof(sm2_kP_y_t1)
+		}
+	},
+	.pkey = {
+		.data = sm2_kP_d_t1,
+		.length = sizeof(sm2_kP_d_t1)
+	}
+};
+
 static uint8_t fp256_pkey[] = {
 	0x77, 0x84, 0x35, 0x65, 0x4c, 0x7a, 0x6d, 0xb1,
 	0x1e, 0x63, 0x0b, 0x41, 0x97, 0x36, 0x04, 0xf4,
-- 
2.34.1



* RE: [EXTERNAL] [PATCH v8 1/3] cryptodev: add ec points to sm2 op
  2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
@ 2024-11-06 10:08   ` Akhil Goyal
  2024-11-06 15:17     ` Kusztal, ArkadiuszX
  2025-08-22 11:13   ` [dpdk-dev v9 " Kai Ji
  1 sibling, 1 reply; 10+ messages in thread
From: Akhil Goyal @ 2024-11-06 10:08 UTC (permalink / raw)
  To: Arkadiusz Kusztal, dev; +Cc: brian.dooley

> In the case when PMD cannot support the full process of the SM2,
> but elliptic curve computation only, additional fields
> are needed to handle such a case.
> 
> Points C1, kP therefore were added to the SM2 crypto operation struct.
> 
> Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
> ---

Please rebase. CI failed to apply patch.
Please be proactive to fix CI issues if reported.

>  doc/guides/rel_notes/release_24_11.rst |  3 ++
>  lib/cryptodev/rte_crypto_asym.h        | 56 +++++++++++++++++++-------
>  2 files changed, 45 insertions(+), 14 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_24_11.rst
> b/doc/guides/rel_notes/release_24_11.rst
> index 53a5ffebe5..ee9e2cea3c 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -413,6 +413,9 @@ ABI Changes
>    added new structure ``rte_node_xstats`` to ``rte_node_register`` and
>    added ``xstat_off`` to ``rte_node``.
> 
> +* cryptodev: The ``rte_crypto_sm2_op_param`` struct member to hold
> ciphertext
> +  is changed to union data type. This change is to support partial SM2 calculation.
> +
> 
>  Known Issues
>  ------------
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index aeb46e688e..f095cebcd0 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -646,6 +646,8 @@ enum rte_crypto_sm2_op_capa {
>  	/**< Random number generator supported in SM2 ops. */
>  	RTE_CRYPTO_SM2_PH,
>  	/**< Prehash message before crypto op. */
> +	RTE_CRYPTO_SM2_PARTIAL,
> +	/**< Calculate elliptic curve points only. */
>  };
> 
>  /**
> @@ -673,20 +675,46 @@ struct rte_crypto_sm2_op_param {
>  	 * will be overwritten by the PMD with the decrypted length.
>  	 */
> 
> -	rte_crypto_param cipher;
> -	/**<
> -	 * Pointer to input data
> -	 * - to be decrypted for SM2 private decrypt.
> -	 *
> -	 * Pointer to output data
> -	 * - for SM2 public encrypt.
> -	 * In this case the underlying array should have been allocated
> -	 * with enough memory to hold ciphertext output (at least X bytes
> -	 * for prime field curve of N bytes and for message M bytes,
> -	 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
> -	 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
> -	 * be overwritten by the PMD with the encrypted length.
> -	 */
> +	union {
> +		rte_crypto_param cipher;
> +		/**<
> +		 * Pointer to input data
> +		 * - to be decrypted for SM2 private decrypt.
> +		 *
> +		 * Pointer to output data
> +		 * - for SM2 public encrypt.
> +		 * In this case the underlying array should have been allocated
> +		 * with enough memory to hold ciphertext output (at least X
> bytes
> +		 * for prime field curve of N bytes and for message M bytes,
> +		 * where X = (C1 || C2 || C3) and computed based on SM2 RFC
> as
> +		 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
> +		 * be overwritten by the PMD with the encrypted length.
> +		 */
> +		struct {
> +			struct rte_crypto_ec_point c1;
> +			/**<
> +			 * This field is used only when PMD does not support the
> full
> +			 * process of the SM2 encryption/decryption, but the
> elliptic
> +			 * curve part only.
> +			 *
> +			 * In the case of encryption, it is an output - point C1 =
> (x1,y1).
> +			 * In the case of decryption, if is an input - point C1 =
> (x1,y1).
> +			 *
> +			 * Must be used along with the
> RTE_CRYPTO_SM2_PARTIAL flag.
> +			 */
> +			struct rte_crypto_ec_point kp;
> +			/**<
> +			 * This field is used only when PMD does not support the
> full
> +			 * process of the SM2 encryption/decryption, but the
> elliptic
> +			 * curve part only.
> +			 *
> +			 * It is an output in the encryption case, it is a point
> +			 * [k]P = (x2,y2).
> +			 *
> +			 * Must be used along with the
> RTE_CRYPTO_SM2_PARTIAL flag.
> +			 */
> +		};
> +	};
> 
>  	rte_crypto_uint id;
>  	/**< The SM2 id used by signer and verifier. */
> --
> 2.34.1



* RE: [EXTERNAL] [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function
  2024-11-04  9:36 ` [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal
@ 2024-11-06 10:12   ` Akhil Goyal
  0 siblings, 0 replies; 10+ messages in thread
From: Akhil Goyal @ 2024-11-06 10:12 UTC (permalink / raw)
  To: Arkadiusz Kusztal, dev; +Cc: brian.dooley

> This commit adds SM2 elliptic curve based asymmetric
> encryption and decryption to the Intel QuickAssist
> Technology PMD.
> 
> Depends-on: patch-147900 ("[v2] crypto/qat: fix ecdsa session handling")
> 
> Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>

Update qat.ini file also.

> ---
>  doc/guides/rel_notes/release_24_11.rst        |   4 +
>  .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h   |   3 +
>  drivers/common/qat/qat_adf/qat_pke.h          |  20 +++
>  drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  72 ++++++++-
>  drivers/crypto/qat/qat_asym.c                 | 140 +++++++++++++++++-
>  5 files changed, 232 insertions(+), 7 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_24_11.rst
> b/doc/guides/rel_notes/release_24_11.rst
> index ee9e2cea3c..0b2f1e75d3 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -251,6 +251,10 @@ New Features
>    Added ability for node to advertise and update multiple xstat counters,
>    that can be retrieved using ``rte_graph_cluster_stats_get``.
> 
> +* **Updated the QuickAssist Technology (QAT) Crypto PMD.**
> +
> +  * Added SM2 encryption and decryption algorithms.
> +
> 
>  Removed Items
>  -------------
> diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
> b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
> index 630c6e1a9b..aa49612ca1 100644
> --- a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
> +++ b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
> @@ -1542,6 +1542,9 @@ icp_qat_fw_mmp_ecdsa_verify_gfp_521_input::in in
> @endlink
>   * @li no output parameters
>   */
> 
> +#define PKE_ECSM2_ENCRYPTION 0x25221720
> +#define PKE_ECSM2_DECRYPTION 0x201716e6
> +
>  #define PKE_LIVENESS 0x00000001
>  /**< Functionality ID for PKE_LIVENESS
>   * @li 0 input parameter(s)
> diff --git a/drivers/common/qat/qat_adf/qat_pke.h
> b/drivers/common/qat/qat_adf/qat_pke.h
> index f88932a275..ac051e965d 100644
> --- a/drivers/common/qat/qat_adf/qat_pke.h
> +++ b/drivers/common/qat/qat_adf/qat_pke.h
> @@ -334,4 +334,24 @@ get_sm2_ecdsa_verify_function(void)
>  	return qat_function;
>  }
> 
> +static struct qat_asym_function
> +get_sm2_encryption_function(void)
> +{
> +	struct qat_asym_function qat_function = {
> +		PKE_ECSM2_ENCRYPTION, 32
> +	};
> +
> +	return qat_function;
> +}
> +
> +static struct qat_asym_function
> +get_sm2_decryption_function(void)
> +{
> +	struct qat_asym_function qat_function = {
> +		PKE_ECSM2_DECRYPTION, 32
> +	};
> +
> +	return qat_function;
> +}
> +
>  #endif
> diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
> b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
> index 6a5d6e78b9..493f33229e 100644
> --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
> +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
> @@ -115,6 +115,38 @@ static struct rte_cryptodev_capabilities
> qat_sym_crypto_caps_gen4[] = {
>  	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
>  };
> 
> +static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen4[] = {
> +	QAT_ASYM_CAP(MODEX,
> +		0, 1, 512, 1),
> +	QAT_ASYM_CAP(MODINV,
> +		0, 1, 512, 1),
> +	QAT_ASYM_CAP(RSA,
> +			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
> +			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
> +			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
> +			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
> +			64, 512, 64),
> +	{	/* SM2 */
> +		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
> +		{.asym = {
> +			.xform_capa = {
> +				.xform_type =
> RTE_CRYPTO_ASYM_XFORM_SM2,
> +				.op_types =
> +				((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
> +				 (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
> +				 (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
> +				 (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
> +				.op_capa = {
> +					[RTE_CRYPTO_ASYM_OP_ENCRYPT] =
> (1 << RTE_CRYPTO_SM2_PARTIAL),
> +					[RTE_CRYPTO_ASYM_OP_DECRYPT] =
> (1 << RTE_CRYPTO_SM2_PARTIAL),
> +				},
> +			},
> +		}
> +		}
> +	},
> +	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
> +};
> +
>  static int
>  qat_sym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
>  			const char *capa_memz_name,
> @@ -157,6 +189,44 @@ qat_sym_crypto_cap_get_gen4(struct
> qat_cryptodev_private *internals,
>  	return 0;
>  }
> 
> +static int
> +qat_asym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
> +			const char *capa_memz_name,
> +			const uint16_t __rte_unused slice_map)
> +{
> +	const uint32_t size = sizeof(qat_asym_crypto_caps_gen4);
> +	uint32_t i;
> +
> +	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
> +	if (internals->capa_mz == NULL) {
> +		internals->capa_mz =
> rte_memzone_reserve(capa_memz_name,
> +				size, rte_socket_id(), 0);
> +		if (internals->capa_mz == NULL) {
> +			QAT_LOG(DEBUG,
> +				"Error allocating memzone for capabilities");
> +			return -1;
> +		}
> +	}
> +
> +	struct rte_cryptodev_capabilities *addr =
> +			(struct rte_cryptodev_capabilities *)
> +				internals->capa_mz->addr;
> +	const struct rte_cryptodev_capabilities *capabilities =
> +		qat_asym_crypto_caps_gen4;
> +	const uint32_t capa_num =
> +		size / sizeof(struct rte_cryptodev_capabilities);
> +	uint32_t curr_capa = 0;
> +
> +	for (i = 0; i < capa_num; i++) {
> +		memcpy(addr + curr_capa, capabilities + i,
> +			sizeof(struct rte_cryptodev_capabilities));
> +		curr_capa++;
> +	}
> +	internals->qat_dev_capabilities = internals->capa_mz->addr;
> +
> +	return 0;
> +}
> +
>  static __rte_always_inline void
>  enqueue_one_aead_job_gen4(struct qat_sym_session *ctx,
>  	struct icp_qat_fw_la_bulk_req *req,
> @@ -543,7 +613,7 @@ RTE_INIT(qat_asym_crypto_gen4_init)
>  			&qat_asym_crypto_ops_gen1;
>  	qat_asym_gen_dev_ops[QAT_VQAT].get_capabilities =
>  		qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities =
> -			qat_asym_crypto_cap_get_gen1;
> +			qat_asym_crypto_cap_get_gen4;
>  	qat_asym_gen_dev_ops[QAT_VQAT].get_feature_flags =
>  		qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags =
>  			qat_asym_crypto_feature_flags_get_gen1;
> diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
> index dfc52d1286..2459b02215 100644
> --- a/drivers/crypto/qat/qat_asym.c
> +++ b/drivers/crypto/qat/qat_asym.c
> @@ -933,6 +933,15 @@ sm2_ecdsa_sign_set_input(struct
> icp_qat_fw_pke_request *qat_req,
>  	qat_req->input_param_count = 3;
>  	qat_req->output_param_count = 2;
> 
> +	HEXDUMP("SM2 K test", asym_op->sm2.k.data,
> +		cookie->alg_bytesize);
> +	HEXDUMP("SM2 K", cookie->input_array[0],
> +		cookie->alg_bytesize);
> +	HEXDUMP("SM2 msg", cookie->input_array[1],
> +		cookie->alg_bytesize);
> +	HEXDUMP("SM2 pkey", cookie->input_array[2],
> +		cookie->alg_bytesize);
> +
>  	return RTE_CRYPTO_OP_STATUS_SUCCESS;
>  }
> 
> @@ -983,6 +992,114 @@ sm2_ecdsa_sign_collect(struct rte_crypto_asym_op
> *asym_op,
>  	return RTE_CRYPTO_OP_STATUS_SUCCESS;
>  }
> 
> +static int
> +sm2_encryption_set_input(struct icp_qat_fw_pke_request *qat_req,
> +	struct qat_asym_op_cookie *cookie,
> +	const struct rte_crypto_asym_op *asym_op,
> +	const struct rte_crypto_asym_xform *xform)
> +{
> +	const struct qat_asym_function qat_function =
> +		get_sm2_encryption_function();
> +	const uint32_t qat_func_alignsize =
> +		qat_function.bytesize;
> +
> +	SET_PKE_LN(asym_op->sm2.k, qat_func_alignsize, 0);
> +	SET_PKE_LN(xform->ec.q.x, qat_func_alignsize, 1);
> +	SET_PKE_LN(xform->ec.q.y, qat_func_alignsize, 2);
> +
> +	cookie->alg_bytesize = qat_function.bytesize;
> +	cookie->qat_func_alignsize = qat_function.bytesize;
> +	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
> +	qat_req->input_param_count = 3;
> +	qat_req->output_param_count = 4;
> +
> +	HEXDUMP("SM2 K", cookie->input_array[0],
> +		qat_func_alignsize);
> +	HEXDUMP("SM2 Q.x", cookie->input_array[1],
> +		qat_func_alignsize);
> +	HEXDUMP("SM2 Q.y", cookie->input_array[2],
> +		qat_func_alignsize);
> +
> +	return RTE_CRYPTO_OP_STATUS_SUCCESS;
> +}
> +
> +static uint8_t
> +sm2_encryption_collect(struct rte_crypto_asym_op *asym_op,
> +		const struct qat_asym_op_cookie *cookie)
> +{
> +	uint32_t alg_bytesize = cookie->alg_bytesize;
> +
> +	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0],
> alg_bytesize);
> +	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1],
> alg_bytesize);
> +	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2],
> alg_bytesize);
> +	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3],
> alg_bytesize);
> +	asym_op->sm2.c1.x.length = alg_bytesize;
> +	asym_op->sm2.c1.y.length = alg_bytesize;
> +	asym_op->sm2.kp.x.length = alg_bytesize;
> +	asym_op->sm2.kp.y.length = alg_bytesize;
> +
> +	HEXDUMP("c1[x1]", cookie->output_array[0],
> +		alg_bytesize);
> +	HEXDUMP("c1[y]", cookie->output_array[1],
> +		alg_bytesize);
> +	HEXDUMP("kp[x]", cookie->output_array[2],
> +		alg_bytesize);
> +	HEXDUMP("kp[y]", cookie->output_array[3],
> +		alg_bytesize);
> +	return RTE_CRYPTO_OP_STATUS_SUCCESS;
> +}
> +
> +
> +static int
> +sm2_decryption_set_input(struct icp_qat_fw_pke_request *qat_req,
> +	struct qat_asym_op_cookie *cookie,
> +	const struct rte_crypto_asym_op *asym_op,
> +	const struct rte_crypto_asym_xform *xform)
> +{
> +	const struct qat_asym_function qat_function =
> +		get_sm2_decryption_function();
> +	const uint32_t qat_func_alignsize =
> +		qat_function.bytesize;
> +
> +	SET_PKE_LN(xform->ec.pkey, qat_func_alignsize, 0);
> +	SET_PKE_LN(asym_op->sm2.c1.x, qat_func_alignsize, 1);
> +	SET_PKE_LN(asym_op->sm2.c1.y, qat_func_alignsize, 2);
> +
> +	cookie->alg_bytesize = qat_function.bytesize;
> +	cookie->qat_func_alignsize = qat_function.bytesize;
> +	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
> +	qat_req->input_param_count = 3;
> +	qat_req->output_param_count = 2;
> +
> +	HEXDUMP("d", cookie->input_array[0],
> +		qat_func_alignsize);
> +	HEXDUMP("c1[x]", cookie->input_array[1],
> +		qat_func_alignsize);
> +	HEXDUMP("c1[y]", cookie->input_array[2],
> +		qat_func_alignsize);
> +
> +	return RTE_CRYPTO_OP_STATUS_SUCCESS;
> +}
> +
> +
> +static uint8_t
> +sm2_decryption_collect(struct rte_crypto_asym_op *asym_op,
> +		const struct qat_asym_op_cookie *cookie)
> +{
> +	uint32_t alg_bytesize = cookie->alg_bytesize;
> +
> +	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[0],
> alg_bytesize);
> +	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[1],
> alg_bytesize);
> +	asym_op->sm2.kp.x.length = alg_bytesize;
> +	asym_op->sm2.kp.y.length = alg_bytesize;
> +
> +	HEXDUMP("kp[x]", cookie->output_array[0],
> +		alg_bytesize);
> +	HEXDUMP("kp[y]", cookie->output_array[1],
> +		alg_bytesize);
> +	return RTE_CRYPTO_OP_STATUS_SUCCESS;
> +}
> +
>  static int
>  asym_set_input(struct icp_qat_fw_pke_request *qat_req,
>  		struct qat_asym_op_cookie *cookie,
> @@ -1015,14 +1132,20 @@ asym_set_input(struct icp_qat_fw_pke_request
> *qat_req,
>  				asym_op, xform);
>  		}
>  	case RTE_CRYPTO_ASYM_XFORM_SM2:
> -		if (asym_op->sm2.op_type ==
> -			RTE_CRYPTO_ASYM_OP_VERIFY) {
> +		if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_ENCRYPT) {
> +			return sm2_encryption_set_input(qat_req, cookie,
> +				asym_op, xform);
> +		} else if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_DECRYPT) {
> +			return sm2_decryption_set_input(qat_req, cookie,
> +				asym_op, xform);
> +		} else if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_VERIFY) {
>  			return sm2_ecdsa_verify_set_input(qat_req, cookie,
>  						asym_op, xform);
> -		} else {
> +		} else if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_SIGN) {
>  			return sm2_ecdsa_sign_set_input(qat_req, cookie,
>  					asym_op, xform);
>  		}
> +		break;
>  	default:
>  		QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform");
>  		return -EINVAL;
> @@ -1114,7 +1237,13 @@ qat_asym_collect_response(struct rte_crypto_op
> *op,
>  	case RTE_CRYPTO_ASYM_XFORM_ECDH:
>  		return ecdh_collect(asym_op, cookie);
>  	case RTE_CRYPTO_ASYM_XFORM_SM2:
> -		return sm2_ecdsa_sign_collect(asym_op, cookie);
> +		if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_ENCRYPT)
> +			return sm2_encryption_collect(asym_op, cookie);
> +		else if (asym_op->sm2.op_type ==
> RTE_CRYPTO_ASYM_OP_DECRYPT)
> +			return sm2_decryption_collect(asym_op, cookie);
> +		else
> +			return sm2_ecdsa_sign_collect(asym_op, cookie);
> +
>  	default:
>  		QAT_LOG(ERR, "Not supported xform type");
>  		return  RTE_CRYPTO_OP_STATUS_ERROR;
> @@ -1423,9 +1552,8 @@ qat_asym_session_configure(struct rte_cryptodev
> *dev __rte_unused,
>  	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
>  	case RTE_CRYPTO_ASYM_XFORM_ECPM:
>  	case RTE_CRYPTO_ASYM_XFORM_ECDH:
> -		ret = session_set_ec(qat_session, xform);
> -		break;
>  	case RTE_CRYPTO_ASYM_XFORM_SM2:
> +		ret = session_set_ec(qat_session, xform);
>  		break;
>  	default:
>  		ret = -ENOTSUP;
> --
> 2.34.1



* RE: [EXTERNAL] [PATCH v8 1/3] cryptodev: add ec points to sm2 op
  2024-11-06 10:08   ` [EXTERNAL] " Akhil Goyal
@ 2024-11-06 15:17     ` Kusztal, ArkadiuszX
  0 siblings, 0 replies; 10+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-11-06 15:17 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Dooley, Brian

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, November 6, 2024 11:09 AM
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> Cc: Dooley, Brian <brian.dooley@intel.com>
> Subject: RE: [EXTERNAL] [PATCH v8 1/3] cryptodev: add ec points to sm2 op
> 
> > In the case when PMD cannot support the full process of the SM2, but
> > elliptic curve computation only, additional fields are needed to
> > handle such a case.
> >
> > Points C1, kP therefore were added to the SM2 crypto operation struct.
> >
> > Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
> > ---
> 
> Please rebase. CI failed to apply patch.
> Please be proactive to fix CI issues if reported.

I have deferred the whole patchset, no further action is necessary.

> 
> >  doc/guides/rel_notes/release_24_11.rst |  3 ++
> >  lib/cryptodev/rte_crypto_asym.h        | 56 +++++++++++++++++++-------
> >  2 files changed, 45 insertions(+), 14 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/release_24_11.rst
> > b/doc/guides/rel_notes/release_24_11.rst
> > index 53a5ffebe5..ee9e2cea3c 100644
> > --- a/doc/guides/rel_notes/release_24_11.rst
> > +++ b/doc/guides/rel_notes/release_24_11.rst
> > @@ -413,6 +413,9 @@ ABI Changes
> >    added new structure ``rte_node_xstats`` to ``rte_node_register`` and
> >    added ``xstat_off`` to ``rte_node``.
> >
> > +* cryptodev: The ``rte_crypto_sm2_op_param`` struct member to hold
> > ciphertext
> > +  is changed to union data type. This change is to support partial SM2
> calculation.
> > +
> >
> >  Known Issues
> >  ------------
> > diff --git a/lib/cryptodev/rte_crypto_asym.h
> > b/lib/cryptodev/rte_crypto_asym.h index aeb46e688e..f095cebcd0 100644
> > --- a/lib/cryptodev/rte_crypto_asym.h
> > +++ b/lib/cryptodev/rte_crypto_asym.h
> > @@ -646,6 +646,8 @@ enum rte_crypto_sm2_op_capa {
> >  	/**< Random number generator supported in SM2 ops. */
> >  	RTE_CRYPTO_SM2_PH,
> >  	/**< Prehash message before crypto op. */
> > +	RTE_CRYPTO_SM2_PARTIAL,
> > +	/**< Calculate elliptic curve points only. */
> >  };
> >
> >  /**
> > @@ -673,20 +675,46 @@ struct rte_crypto_sm2_op_param {
> >  	 * will be overwritten by the PMD with the decrypted length.
> >  	 */
> >
> > -	rte_crypto_param cipher;
> > -	/**<
> > -	 * Pointer to input data
> > -	 * - to be decrypted for SM2 private decrypt.
> > -	 *
> > -	 * Pointer to output data
> > -	 * - for SM2 public encrypt.
> > -	 * In this case the underlying array should have been allocated
> > -	 * with enough memory to hold ciphertext output (at least X bytes
> > -	 * for prime field curve of N bytes and for message M bytes,
> > -	 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
> > -	 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
> > -	 * be overwritten by the PMD with the encrypted length.
> > -	 */
> > +	union {
> > +		rte_crypto_param cipher;
> > +		/**<
> > +		 * Pointer to input data
> > +		 * - to be decrypted for SM2 private decrypt.
> > +		 *
> > +		 * Pointer to output data
> > +		 * - for SM2 public encrypt.
> > +		 * In this case the underlying array should have been allocated
> > +		 * with enough memory to hold ciphertext output (at least X
> > bytes
> > +		 * for prime field curve of N bytes and for message M bytes,
> > +		 * where X = (C1 || C2 || C3) and computed based on SM2 RFC
> > as
> > +		 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
> > +		 * be overwritten by the PMD with the encrypted length.
> > +		 */
> > +		struct {
> > +			struct rte_crypto_ec_point c1;
> > +			/**<
> > +			 * This field is used only when PMD does not support
> the
> > full
> > +			 * process of the SM2 encryption/decryption, but the
> > elliptic
> > +			 * curve part only.
> > +			 *
> > +			 * In the case of encryption, it is an output - point C1 =
> > (x1,y1).
> > +			 * In the case of decryption, if is an input - point C1 =
> > (x1,y1).
> > +			 *
> > +			 * Must be used along with the
> > RTE_CRYPTO_SM2_PARTIAL flag.
> > +			 */
> > +			struct rte_crypto_ec_point kp;
> > +			/**<
> > +			 * This field is used only when PMD does not support
> the
> > full
> > +			 * process of the SM2 encryption/decryption, but the
> > elliptic
> > +			 * curve part only.
> > +			 *
> > +			 * It is an output in the encryption case, it is a point
> > +			 * [k]P = (x2,y2).
> > +			 *
> > +			 * Must be used along with the
> > RTE_CRYPTO_SM2_PARTIAL flag.
> > +			 */
> > +		};
> > +	};
> >
> >  	rte_crypto_uint id;
> >  	/**< The SM2 id used by signer and verifier. */
> > --
> > 2.34.1



* [dpdk-dev v9 1/3] cryptodev: add ec points to sm2 op
  2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
  2024-11-06 10:08   ` [EXTERNAL] " Akhil Goyal
@ 2025-08-22 11:13   ` Kai Ji
  2025-08-22 11:13     ` [dpdk-dev v9 2/3] crypto/qat: add sm2 encryption/decryption function Kai Ji
  2025-08-22 11:13     ` [dpdk-dev v9 3/3] app/test: add test sm2 C1/Kp test cases Kai Ji
  1 sibling, 2 replies; 10+ messages in thread
From: Kai Ji @ 2025-08-22 11:13 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji, Arkadiusz Kusztal, Akhil Goyal, Fan Zhang

When a PMD cannot support the full SM2 process, but only
the elliptic curve computation, additional fields
are needed to handle such a case.

Points C1 and kP were therefore added to the SM2 crypto operation struct.

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 doc/guides/rel_notes/release_25_11.rst |  2 +
 lib/cryptodev/rte_crypto_asym.h        | 56 +++++++++++++++++++-------
 2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index ccad6d89ff..b15d2e0e8f 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -100,6 +100,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cryptodev: The ``rte_crypto_sm2_op_param`` struct member that holds the ciphertext
+  is changed to a union data type. This change is to support partial SM2 calculation.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..039dcb85a7 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -654,6 +654,8 @@ enum rte_crypto_sm2_op_capa {
 	/**< Random number generator supported in SM2 ops. */
 	RTE_CRYPTO_SM2_PH,
 	/**< Prehash message before crypto op. */
+	RTE_CRYPTO_SM2_PARTIAL,
+	/**< Calculate elliptic curve points only. */
 };
 
 /**
@@ -681,20 +683,46 @@ struct rte_crypto_sm2_op_param {
 	 * will be overwritten by the PMD with the decrypted length.
 	 */
 
-	rte_crypto_param cipher;
-	/**<
-	 * Pointer to input data
-	 * - to be decrypted for SM2 private decrypt.
-	 *
-	 * Pointer to output data
-	 * - for SM2 public encrypt.
-	 * In this case the underlying array should have been allocated
-	 * with enough memory to hold ciphertext output (at least X bytes
-	 * for prime field curve of N bytes and for message M bytes,
-	 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
-	 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
-	 * be overwritten by the PMD with the encrypted length.
-	 */
+	union {
+		rte_crypto_param cipher;
+		/**<
+		 * Pointer to input data
+		 * - to be decrypted for SM2 private decrypt.
+		 *
+		 * Pointer to output data
+		 * - for SM2 public encrypt.
+		 * In this case the underlying array should have been allocated
+		 * with enough memory to hold ciphertext output (at least X bytes
+		 * for prime field curve of N bytes and for message M bytes,
+		 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
+		 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
+		 * be overwritten by the PMD with the encrypted length.
+		 */
+		struct {
+			struct rte_crypto_ec_point c1;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * In the case of encryption, it is an output - point C1 = (x1,y1).
+			 * In the case of decryption, it is an input - point C1 = (x1,y1).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+			struct rte_crypto_ec_point kp;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * It is an output in the encryption case, it is a point
+			 * [k]P = (x2,y2).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+		};
+	};
 
 	rte_crypto_uint id;
 	/**< The SM2 id used by signer and verifier. */
-- 
2.43.0



* [dpdk-dev v9 2/3] crypto/qat: add sm2 encryption/decryption function
  2025-08-22 11:13   ` [dpdk-dev v9 " Kai Ji
@ 2025-08-22 11:13     ` Kai Ji
  2025-08-22 11:13     ` [dpdk-dev v9 3/3] app/test: add test sm2 C1/Kp test cases Kai Ji
  1 sibling, 0 replies; 10+ messages in thread
From: Kai Ji @ 2025-08-22 11:13 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji, Arkadiusz Kusztal

This commit adds SM2 elliptic curve based asymmetric
encryption and decryption to the Intel QuickAssist
Technology PMD.

Depends-on: patch-147900 ("[v2] crypto/qat: fix ecdsa session handling")

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 doc/guides/rel_notes/release_25_11.rst        |   3 +
 .../common/qat/qat_adf/icp_qat_fw_mmp_ids.h   |   3 +
 drivers/common/qat/qat_adf/qat_pke.h          |  20 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  72 ++++++++-
 drivers/crypto/qat/qat_asym.c                 | 140 +++++++++++++++++-
 5 files changed, 231 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index b15d2e0e8f..b31d9c0fb2 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -55,6 +55,9 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated the QuickAssist Technology (QAT) Crypto PMD.**
+
+  * Added SM2 encryption and decryption algorithms.
 
 Removed Items
 -------------
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
index 630c6e1a9b..aa49612ca1 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_mmp_ids.h
@@ -1542,6 +1542,9 @@ icp_qat_fw_mmp_ecdsa_verify_gfp_521_input::in in @endlink
  * @li no output parameters
  */
 
+#define PKE_ECSM2_ENCRYPTION 0x25221720
+#define PKE_ECSM2_DECRYPTION 0x201716e6
+
 #define PKE_LIVENESS 0x00000001
 /**< Functionality ID for PKE_LIVENESS
  * @li 0 input parameter(s)
diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h
index f88932a275..ac051e965d 100644
--- a/drivers/common/qat/qat_adf/qat_pke.h
+++ b/drivers/common/qat/qat_adf/qat_pke.h
@@ -334,4 +334,24 @@ get_sm2_ecdsa_verify_function(void)
 	return qat_function;
 }
 
+static struct qat_asym_function
+get_sm2_encryption_function(void)
+{
+	struct qat_asym_function qat_function = {
+		PKE_ECSM2_ENCRYPTION, 32
+	};
+
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_sm2_decryption_function(void)
+{
+	struct qat_asym_function qat_function = {
+		PKE_ECSM2_DECRYPTION, 32
+	};
+
+	return qat_function;
+}
+
 #endif
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index 638da1a173..843580af72 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -115,6 +115,38 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen4[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	{	/* SM2 */
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
+		{.asym = {
+			.xform_capa = {
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2,
+				.op_types =
+				((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+				 (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+				 (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+				 (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+				.op_capa = {
+					[RTE_CRYPTO_ASYM_OP_ENCRYPT] = (1 << RTE_CRYPTO_SM2_PARTIAL),
+					[RTE_CRYPTO_ASYM_OP_DECRYPT] = (1 << RTE_CRYPTO_SM2_PARTIAL),
+				},
+			},
+		}
+		}
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
 static int
 qat_sym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
 			const char *capa_memz_name,
@@ -157,6 +189,44 @@ qat_sym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
 	return 0;
 }
 
+static int
+qat_asym_crypto_cap_get_gen4(struct qat_cryptodev_private *internals,
+			const char *capa_memz_name,
+			const uint16_t __rte_unused slice_map)
+{
+	const uint32_t size = sizeof(qat_asym_crypto_caps_gen4);
+	uint32_t i;
+
+	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
+	if (internals->capa_mz == NULL) {
+		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+				size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities");
+			return -1;
+		}
+	}
+
+	struct rte_cryptodev_capabilities *addr =
+			(struct rte_cryptodev_capabilities *)
+				internals->capa_mz->addr;
+	const struct rte_cryptodev_capabilities *capabilities =
+		qat_asym_crypto_caps_gen4;
+	const uint32_t capa_num =
+		size / sizeof(struct rte_cryptodev_capabilities);
+	uint32_t curr_capa = 0;
+
+	for (i = 0; i < capa_num; i++) {
+		memcpy(addr + curr_capa, capabilities + i,
+			sizeof(struct rte_cryptodev_capabilities));
+		curr_capa++;
+	}
+	internals->qat_dev_capabilities = internals->capa_mz->addr;
+
+	return 0;
+}
+
 static __rte_always_inline void
 enqueue_one_aead_job_gen4(struct qat_sym_session *ctx,
 	struct icp_qat_fw_la_bulk_req *req,
@@ -546,7 +616,7 @@ RTE_INIT(qat_asym_crypto_gen4_init)
 			&qat_asym_crypto_ops_gen1;
 	qat_asym_gen_dev_ops[QAT_VQAT].get_capabilities =
 		qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities =
-			qat_asym_crypto_cap_get_gen1;
+			qat_asym_crypto_cap_get_gen4;
 	qat_asym_gen_dev_ops[QAT_VQAT].get_feature_flags =
 		qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags =
 			qat_asym_crypto_feature_flags_get_gen1;
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index d8a1406819..51e461c12f 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -932,6 +932,15 @@ sm2_ecdsa_sign_set_input(struct icp_qat_fw_pke_request *qat_req,
 	qat_req->input_param_count = 3;
 	qat_req->output_param_count = 2;
 
+	HEXDUMP("SM2 K test", asym_op->sm2.k.data,
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 K", cookie->input_array[0],
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 msg", cookie->input_array[1],
+		cookie->alg_bytesize);
+	HEXDUMP("SM2 pkey", cookie->input_array[2],
+		cookie->alg_bytesize);
+
 	return RTE_CRYPTO_OP_STATUS_SUCCESS;
 }
 
@@ -982,6 +991,114 @@ sm2_ecdsa_sign_collect(struct rte_crypto_asym_op *asym_op,
 	return RTE_CRYPTO_OP_STATUS_SUCCESS;
 }
 
+static int
+sm2_encryption_set_input(struct icp_qat_fw_pke_request *qat_req,
+	struct qat_asym_op_cookie *cookie,
+	const struct rte_crypto_asym_op *asym_op,
+	const struct rte_crypto_asym_xform *xform)
+{
+	const struct qat_asym_function qat_function =
+		get_sm2_encryption_function();
+	const uint32_t qat_func_alignsize =
+		qat_function.bytesize;
+
+	SET_PKE_LN(asym_op->sm2.k, qat_func_alignsize, 0);
+	SET_PKE_LN(xform->ec.q.x, qat_func_alignsize, 1);
+	SET_PKE_LN(xform->ec.q.y, qat_func_alignsize, 2);
+
+	cookie->alg_bytesize = qat_function.bytesize;
+	cookie->qat_func_alignsize = qat_function.bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
+	qat_req->input_param_count = 3;
+	qat_req->output_param_count = 4;
+
+	HEXDUMP("SM2 K", cookie->input_array[0],
+		qat_func_alignsize);
+	HEXDUMP("SM2 Q.x", cookie->input_array[1],
+		qat_func_alignsize);
+	HEXDUMP("SM2 Q.y", cookie->input_array[2],
+		qat_func_alignsize);
+
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+static uint8_t
+sm2_encryption_collect(struct rte_crypto_asym_op *asym_op,
+		const struct qat_asym_op_cookie *cookie)
+{
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
+	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);
+	asym_op->sm2.c1.x.length = alg_bytesize;
+	asym_op->sm2.c1.y.length = alg_bytesize;
+	asym_op->sm2.kp.x.length = alg_bytesize;
+	asym_op->sm2.kp.y.length = alg_bytesize;
+
+	HEXDUMP("c1[x]", cookie->output_array[0],
+		alg_bytesize);
+	HEXDUMP("c1[y]", cookie->output_array[1],
+		alg_bytesize);
+	HEXDUMP("kp[x]", cookie->output_array[2],
+		alg_bytesize);
+	HEXDUMP("kp[y]", cookie->output_array[3],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static int
+sm2_decryption_set_input(struct icp_qat_fw_pke_request *qat_req,
+	struct qat_asym_op_cookie *cookie,
+	const struct rte_crypto_asym_op *asym_op,
+	const struct rte_crypto_asym_xform *xform)
+{
+	const struct qat_asym_function qat_function =
+		get_sm2_decryption_function();
+	const uint32_t qat_func_alignsize =
+		qat_function.bytesize;
+
+	SET_PKE_LN(xform->ec.pkey, qat_func_alignsize, 0);
+	SET_PKE_LN(asym_op->sm2.c1.x, qat_func_alignsize, 1);
+	SET_PKE_LN(asym_op->sm2.c1.y, qat_func_alignsize, 2);
+
+	cookie->alg_bytesize = qat_function.bytesize;
+	cookie->qat_func_alignsize = qat_function.bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = qat_function.func_id;
+	qat_req->input_param_count = 3;
+	qat_req->output_param_count = 2;
+
+	HEXDUMP("d", cookie->input_array[0],
+		qat_func_alignsize);
+	HEXDUMP("c1[x]", cookie->input_array[1],
+		qat_func_alignsize);
+	HEXDUMP("c1[y]", cookie->input_array[2],
+		qat_func_alignsize);
+
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static uint8_t
+sm2_decryption_collect(struct rte_crypto_asym_op *asym_op,
+		const struct qat_asym_op_cookie *cookie)
+{
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[0], alg_bytesize);
+	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[1], alg_bytesize);
+	asym_op->sm2.kp.x.length = alg_bytesize;
+	asym_op->sm2.kp.y.length = alg_bytesize;
+
+	HEXDUMP("kp[x]", cookie->output_array[0],
+		alg_bytesize);
+	HEXDUMP("kp[y]", cookie->output_array[1],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
 static int
 asym_set_input(struct icp_qat_fw_pke_request *qat_req,
 		struct qat_asym_op_cookie *cookie,
@@ -1014,14 +1131,20 @@ asym_set_input(struct icp_qat_fw_pke_request *qat_req,
 				asym_op, xform);
 		}
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
-		if (asym_op->sm2.op_type ==
-			RTE_CRYPTO_ASYM_OP_VERIFY) {
+		if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+			return sm2_encryption_set_input(qat_req, cookie,
+				asym_op, xform);
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+			return sm2_decryption_set_input(qat_req, cookie,
+				asym_op, xform);
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
 			return sm2_ecdsa_verify_set_input(qat_req, cookie,
 						asym_op, xform);
-		} else {
+		} else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
 			return sm2_ecdsa_sign_set_input(qat_req, cookie,
 					asym_op, xform);
 		}
+		break;
 	default:
 		QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform");
 		return -EINVAL;
@@ -1113,7 +1236,13 @@ qat_asym_collect_response(struct rte_crypto_op *op,
 	case RTE_CRYPTO_ASYM_XFORM_ECDH:
 		return ecdh_collect(asym_op, cookie);
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
-		return sm2_ecdsa_sign_collect(asym_op, cookie);
+		if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT)
+			return sm2_encryption_collect(asym_op, cookie);
+		else if (asym_op->sm2.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT)
+			return sm2_decryption_collect(asym_op, cookie);
+		else
+			return sm2_ecdsa_sign_collect(asym_op, cookie);
+
 	default:
 		QAT_LOG(ERR, "Not supported xform type");
 		return  RTE_CRYPTO_OP_STATUS_ERROR;
@@ -1422,9 +1551,8 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
 	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
 	case RTE_CRYPTO_ASYM_XFORM_ECPM:
 	case RTE_CRYPTO_ASYM_XFORM_ECDH:
-		ret = session_set_ec(qat_session, xform);
-		break;
 	case RTE_CRYPTO_ASYM_XFORM_SM2:
+		ret = session_set_ec(qat_session, xform);
 		break;
 	default:
 		ret = -ENOTSUP;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 10+ messages in thread
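
A note on what "partial" means here in practice: as the collect routines in
this patch show, with RTE_CRYPTO_SM2_PARTIAL the PMD offloads only the
elliptic curve half of SM2 encryption, returning C1 = [k]G and [k]P = (x2, y2);
deriving C2 and C3 stays with the application. Below is a rough host-side
sketch of that remaining step, illustrative only and not part of this patch.
It assumes a 256-bit curve and an OpenSSL build (1.1.1 or later) that provides
SM3 via EVP_sm3(); sm2_kdf_sm3() and sm2_encrypt_finish() are made-up helper
names, and the KDF/C3 construction follows the SM2 standard (GB/T 32918.4).

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>

/* SM2 KDF: t = KDF(x2 || y2, klen), SM3 over the input with a 32-bit counter. */
static int
sm2_kdf_sm3(const uint8_t x2[32], const uint8_t y2[32], uint8_t *t, size_t klen)
{
	uint8_t in[68], md[32];
	unsigned int mdlen;
	uint32_t ct = 1;
	size_t off = 0, n;

	memcpy(in, x2, 32);
	memcpy(in + 32, y2, 32);
	while (off < klen) {
		in[64] = ct >> 24;
		in[65] = ct >> 16;
		in[66] = ct >> 8;
		in[67] = ct;
		if (!EVP_Digest(in, sizeof(in), md, &mdlen, EVP_sm3(), NULL))
			return -1;
		n = (klen - off) < mdlen ? (klen - off) : mdlen;
		memcpy(t + off, md, n);
		off += n;
		ct++;
	}
	return 0;
}

/* C2 = M xor KDF(x2 || y2, mlen); C3 = SM3(x2 || M || y2). */
static int
sm2_encrypt_finish(const uint8_t x2[32], const uint8_t y2[32],
		const uint8_t *msg, size_t mlen, uint8_t *c2, uint8_t c3[32])
{
	EVP_MD_CTX *ctx;
	unsigned int c3len;
	uint8_t *t = malloc(mlen);
	size_t i;

	if (t == NULL || sm2_kdf_sm3(x2, y2, t, mlen) < 0) {
		free(t);
		return -1;
	}
	for (i = 0; i < mlen; i++)
		c2[i] = msg[i] ^ t[i];
	free(t);

	ctx = EVP_MD_CTX_new();
	if (ctx == NULL)
		return -1;
	if (!EVP_DigestInit_ex(ctx, EVP_sm3(), NULL) ||
			!EVP_DigestUpdate(ctx, x2, 32) ||
			!EVP_DigestUpdate(ctx, msg, mlen) ||
			!EVP_DigestUpdate(ctx, y2, 32) ||
			!EVP_DigestFinal_ex(ctx, c3, &c3len)) {
		EVP_MD_CTX_free(ctx);
		return -1;
	}
	EVP_MD_CTX_free(ctx);
	return 0;
}

The decrypt direction is symmetric: the application submits the received C1
in c1, gets (x2, y2) back in kp, recovers M as C2 xor KDF(x2 || y2, len), and
recomputes C3 to validate the ciphertext.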

* [dpdk-dev v9 3/3] app/test: add test sm2 C1/Kp test cases
  2025-08-22 11:13   ` [dpdk-dev v9 " Kai Ji
  2025-08-22 11:13     ` [dpdk-dev v9 2/3] crypto/qat: add sm2 encryption/decryption function Kai Ji
@ 2025-08-22 11:13     ` Kai Ji
  1 sibling, 0 replies; 10+ messages in thread
From: Kai Ji @ 2025-08-22 11:13 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji, Arkadiusz Kusztal, Akhil Goyal, Fan Zhang

This commit adds test cases to be used when the C1 or kP elliptic
curve points need to be computed.

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 app/test/test_cryptodev_asym.c             | 134 +++++++++++++++++++++
 app/test/test_cryptodev_sm2_test_vectors.h | 112 ++++++++++++++++-
 2 files changed, 243 insertions(+), 3 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 20afb5e98b..6014863907 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -3924,6 +3924,132 @@ kat_rsa_decrypt_crt(const void *data)
 	return 0;
 }
 
+static int
+test_sm2_partial_encryption(const void *data)
+{
+	struct rte_crypto_asym_xform xform = { 0 };
+	const uint8_t dev_id = params->valid_devs[0];
+	const struct crypto_testsuite_sm2_params *test_vector = data;
+	uint8_t result_C1_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_C1_y1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 };
+	struct rte_cryptodev_asym_capability_idx idx;
+	const struct rte_cryptodev_asymmetric_xform_capability *capa;
+
+	idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
+	if (capa == NULL)
+		return TEST_SKIPPED;
+	if (!rte_cryptodev_asym_xform_capability_check_opcap(capa,
+			RTE_CRYPTO_ASYM_OP_ENCRYPT, RTE_CRYPTO_SM2_PARTIAL)) {
+		return TEST_SKIPPED;
+	}
+
+	xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	xform.ec.curve_id = RTE_CRYPTO_EC_GROUP_SM2;
+	xform.ec.q = test_vector->pubkey;
+	self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+	self->op->asym->sm2.k = test_vector->k;
+	if (rte_cryptodev_asym_session_create(dev_id, &xform,
+			params->session_mpool, &self->sess) < 0) {
+		RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed",
+			__LINE__);
+		return TEST_FAILED;
+	}
+	rte_crypto_op_attach_asym_session(self->op, self->sess);
+
+	self->op->asym->sm2.c1.x.data = result_C1_x1;
+	self->op->asym->sm2.c1.y.data = result_C1_y1;
+	self->op->asym->sm2.kp.x.data = result_kP_x1;
+	self->op->asym->sm2.kp.y.data = result_kP_y1;
+	TEST_ASSERT_SUCCESS(send_one(),
+		"Failed to process crypto op");
+
+	debug_hexdump(stdout, "C1[x]", self->op->asym->sm2.c1.x.data,
+		self->op->asym->sm2.c1.x.length);
+	debug_hexdump(stdout, "C1[y]", self->op->asym->sm2.c1.y.data,
+		self->op->asym->sm2.c1.y.length);
+	debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data,
+		self->op->asym->sm2.kp.x.length);
+	debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data,
+		self->op->asym->sm2.kp.y.length);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.x.data,
+		self->op->asym->sm2.c1.x.data,
+		test_vector->C1.x.length,
+		"Incorrect value of C1[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->C1.y.data,
+		self->op->asym->sm2.c1.y.data,
+		test_vector->C1.y.length,
+		"Incorrect value of C1[y]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data,
+		self->op->asym->sm2.kp.x.data,
+		test_vector->kP.x.length,
+		"Incorrect value of kP[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data,
+		self->op->asym->sm2.kp.y.data,
+		test_vector->kP.y.length,
+		"Incorrect value of kP[y]\n");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sm2_partial_decryption(const void *data)
+{
+	struct rte_crypto_asym_xform xform = {};
+	const uint8_t dev_id = params->valid_devs[0];
+	const struct crypto_testsuite_sm2_params *test_vector = data;
+	uint8_t result_kP_x1[TEST_DATA_SIZE] = { 0 };
+	uint8_t result_kP_y1[TEST_DATA_SIZE] = { 0 };
+	struct rte_cryptodev_asym_capability_idx idx;
+	const struct rte_cryptodev_asymmetric_xform_capability *capa;
+
+	idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
+	if (capa == NULL)
+		return TEST_SKIPPED;
+	if (!rte_cryptodev_asym_xform_capability_check_opcap(capa,
+			RTE_CRYPTO_ASYM_OP_DECRYPT, RTE_CRYPTO_SM2_PARTIAL)) {
+		return TEST_SKIPPED;
+	}
+
+	xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
+	xform.ec.pkey = test_vector->pkey;
+	self->op->asym->sm2.op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+	self->op->asym->sm2.c1 = test_vector->C1;
+
+	if (rte_cryptodev_asym_session_create(dev_id, &xform,
+			params->session_mpool, &self->sess) < 0) {
+		RTE_LOG(ERR, USER1, "line %u FAILED: Session creation failed",
+			__LINE__);
+		return TEST_FAILED;
+	}
+	rte_crypto_op_attach_asym_session(self->op, self->sess);
+
+	self->op->asym->sm2.kp.x.data = result_kP_x1;
+	self->op->asym->sm2.kp.y.data = result_kP_y1;
+	TEST_ASSERT_SUCCESS(send_one(),
+		"Failed to process crypto op");
+
+	debug_hexdump(stdout, "kP[x]", self->op->asym->sm2.kp.x.data,
+		self->op->asym->sm2.kp.x.length);
+	debug_hexdump(stdout, "kP[y]", self->op->asym->sm2.kp.y.data,
+		self->op->asym->sm2.kp.y.length);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.x.data,
+		self->op->asym->sm2.kp.x.data,
+		test_vector->kP.x.length,
+		"Incorrect value of kP[x]\n");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(test_vector->kP.y.data,
+		self->op->asym->sm2.kp.y.data,
+		test_vector->kP.y.length,
+		"Incorrect value of kP[y]\n");
+
+	return 0;
+}
+
 static struct unit_test_suite cryptodev_openssl_asym_testsuite  = {
 	.suite_name = "Crypto Device OPENSSL ASYM Unit Test Suite",
 	.setup = testsuite_setup,
@@ -3996,6 +4122,14 @@ static struct unit_test_suite cryptodev_qat_asym_testsuite  = {
 	.setup = testsuite_setup,
 	.teardown = testsuite_teardown,
 	.unit_test_cases = {
+		TEST_CASE_NAMED_WITH_DATA(
+			"SM2 encryption - test case 1",
+			ut_setup_asym, ut_teardown_asym,
+			test_sm2_partial_encryption, &sm2_enc_hw_t1),
+		TEST_CASE_NAMED_WITH_DATA(
+			"SM2 decryption - test case 1",
+			ut_setup_asym, ut_teardown_asym,
+			test_sm2_partial_decryption, &sm2_enc_hw_t1),
 		TEST_CASE_NAMED_WITH_DATA(
 			"Modular Exponentiation (mod=128, base=20, exp=3, res=128)",
 			ut_setup_asym, ut_teardown_asym,
diff --git a/app/test/test_cryptodev_sm2_test_vectors.h b/app/test/test_cryptodev_sm2_test_vectors.h
index 41f5f7074a..92f7e77671 100644
--- a/app/test/test_cryptodev_sm2_test_vectors.h
+++ b/app/test/test_cryptodev_sm2_test_vectors.h
@@ -8,19 +8,125 @@
 #include "rte_crypto_asym.h"
 
 struct crypto_testsuite_sm2_params {
-	rte_crypto_param pubkey_qx;
-	rte_crypto_param pubkey_qy;
+	union {
+		struct {
+			rte_crypto_param pubkey_qx;
+			rte_crypto_param pubkey_qy;
+		};
+		struct rte_crypto_ec_point pubkey;
+	};
 	rte_crypto_param pkey;
 	rte_crypto_param k;
 	rte_crypto_param sign_r;
 	rte_crypto_param sign_s;
 	rte_crypto_param id;
-	rte_crypto_param cipher;
+	union {
+		rte_crypto_param cipher;
+		struct {
+			struct rte_crypto_ec_point C1;
+			struct rte_crypto_ec_point kP;
+		};
+	};
 	rte_crypto_param message;
 	rte_crypto_param digest;
 	int curve;
 };
 
+uint8_t sm2_enc_pub_x_t1[] = {
+	0x26, 0xf1, 0xf3, 0xef, 0x12, 0x27, 0x85, 0xd1,
+	0x7d, 0x38, 0x70, 0xc2, 0x43, 0x46, 0x50, 0x36,
+	0x3f, 0xdf, 0x4b, 0x2f, 0x45, 0x0e, 0x8e, 0xd1,
+	0xb6, 0x0f, 0xdc, 0x1f, 0xc6, 0xf0, 0x19, 0xab
+};
+uint8_t sm2_enc_pub_y_t1[] = {
+	0xd9, 0x19, 0x8b, 0xdb, 0xef, 0xa5, 0x84, 0x76,
+	0xec, 0x82, 0x25, 0x12, 0x5b, 0x8c, 0xe3, 0xe1,
+	0x0a, 0x10, 0x0d, 0xc6, 0x97, 0x6c, 0xc1, 0x89,
+	0xd9, 0x6d, 0xa6, 0x88, 0x9e, 0xbc, 0xd3, 0x7a
+};
+uint8_t sm2_k_t1[] = {
+	0x12, 0x34, 0x56, 0x78, 0xB9, 0x6E, 0x5A, 0xF7,
+	0x0B, 0xD4, 0x80, 0xB4, 0x72, 0x40, 0x9A, 0x9A,
+	0x32, 0x72, 0x57, 0xF1, 0xEB, 0xB7, 0x3F, 0x5B,
+	0x07, 0x33, 0x54, 0xB2, 0x48, 0x66, 0x85, 0x63
+};
+
+uint8_t sm2_C1_x_t1[] = {
+	0x15, 0xf6, 0xb7, 0x49, 0x00, 0x39, 0x73, 0x9d,
+	0x5b, 0xb3, 0xd3, 0xe9, 0x1d, 0xe4, 0xc8, 0xbd,
+	0x08, 0xe3, 0x6a, 0x22, 0xff, 0x1a, 0xbf, 0xdc,
+	0x75, 0x6b, 0x12, 0x85, 0x81, 0xc5, 0x8b, 0xcf
+};
+
+uint8_t sm2_C1_y_t1[] = {
+	0x6a, 0x92, 0xd4, 0xd8, 0x13, 0xec, 0x8f, 0x9a,
+	0x9d, 0xbe, 0x51, 0x47, 0x6f, 0x54, 0xc5, 0x41,
+	0x98, 0xf5, 0x5f, 0x83, 0xce, 0x1c, 0x18, 0x1a,
+	0x48, 0xbd, 0xeb, 0x38, 0x13, 0x67, 0x0d, 0x06
+};
+
+uint8_t sm2_kP_x_t1[] = {
+	0x6b, 0xfb, 0x9a, 0xcb, 0xc6, 0xb6, 0x36, 0x31,
+	0x0f, 0xd1, 0xdd, 0x9c, 0x9f, 0x17, 0x5f, 0x3f,
+	0x68, 0x13, 0x96, 0xd2, 0x54, 0x5b, 0xa6, 0x19,
+	0x78, 0x1f, 0x87, 0x3d, 0x81, 0xc3, 0x21, 0x01
+};
+
+uint8_t sm2_kP_y_t1[] = {
+	0xa4, 0x08, 0xf3, 0x74, 0x35, 0x51, 0x8c, 0x81,
+	0x06, 0x4c, 0x8f, 0x31, 0x49, 0xe3, 0x5b, 0x4d,
+	0xfc, 0x3d, 0x19, 0xac, 0x7d, 0x07, 0xd0, 0x9a,
+	0x99, 0x5a, 0x25, 0x16, 0x66, 0xff, 0x41, 0x3c
+};
+
+uint8_t sm2_kP_d_t1[] = {
+	0x6F, 0xCB, 0xA2, 0xEF, 0x9A, 0xE0, 0xAB, 0x90,
+	0x2B, 0xC3, 0xBD, 0xE3, 0xFF, 0x91, 0x5D, 0x44,
+	0xBA, 0x4C, 0xC7, 0x8F, 0x88, 0xE2, 0xF8, 0xE7,
+	0xF8, 0x99, 0x6D, 0x3B, 0x8C, 0xCE, 0xED, 0xEE
+};
+
+struct crypto_testsuite_sm2_params sm2_enc_hw_t1 = {
+	.k = {
+		.data = sm2_k_t1,
+		.length = sizeof(sm2_k_t1)
+	},
+	.pubkey = {
+		.x = {
+			.data = sm2_enc_pub_x_t1,
+			.length = sizeof(sm2_enc_pub_x_t1)
+		},
+		.y = {
+			.data = sm2_enc_pub_y_t1,
+			.length = sizeof(sm2_enc_pub_y_t1)
+		}
+	},
+	.C1 = {
+		.x = {
+			.data = sm2_C1_x_t1,
+			.length = sizeof(sm2_C1_x_t1)
+		},
+		.y = {
+			.data = sm2_C1_y_t1,
+			.length = sizeof(sm2_C1_y_t1)
+		}
+	},
+	.kP = {
+		.x = {
+			.data = sm2_kP_x_t1,
+			.length = sizeof(sm2_kP_x_t1)
+		},
+		.y = {
+			.data = sm2_kP_y_t1,
+			.length = sizeof(sm2_kP_y_t1)
+		}
+	},
+	.pkey = {
+		.data = sm2_kP_d_t1,
+		.length = sizeof(sm2_kP_d_t1)
+	}
+};
+
 static uint8_t fp256_pkey[] = {
 	0x77, 0x84, 0x35, 0x65, 0x4c, 0x7a, 0x6d, 0xb1,
 	0x1e, 0x63, 0x0b, 0x41, 0x97, 0x36, 0x04, 0xf4,
-- 
2.43.0


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2025-08-22 11:13 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-04  9:36 [PATCH v8 0/3] add ec points to sm2 op Arkadiusz Kusztal
2024-11-04  9:36 ` [PATCH v8 1/3] cryptodev: " Arkadiusz Kusztal
2024-11-06 10:08   ` [EXTERNAL] " Akhil Goyal
2024-11-06 15:17     ` Kusztal, ArkadiuszX
2025-08-22 11:13   ` [dpdk-dev v9 " Kai Ji
2025-08-22 11:13     ` [dpdk-dev v9 2/3] crypto/qat: add sm2 encryption/decryption function Kai Ji
2025-08-22 11:13     ` [dpdk-dev v9 3/3] app/test: add test sm2 C1/Kp test cases Kai Ji
2024-11-04  9:36 ` [PATCH v8 2/3] crypto/qat: add sm2 encryption/decryption function Arkadiusz Kusztal
2024-11-06 10:12   ` [EXTERNAL] " Akhil Goyal
2024-11-04  9:36 ` [PATCH v8 3/3] app/test: add test sm2 C1/Kp test cases Arkadiusz Kusztal
