DPDK patches and discussions
* [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode
@ 2020-01-15 18:28 Marcin Smoczynski
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
                   ` (6 more replies)
  0 siblings, 7 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Originally, both SW and HW crypto PMDs use an rte_crypto_op based API to
process the crypto workload asynchronously. This approach provides
uniformity across both PMD types, but it also introduces an unnecessary
performance penalty for SW PMDs, which have to "simulate" HW async
behavior (crypto-op enqueue/dequeue, HW address computation,
storing/dereferencing user-provided data (mbuf) for each crypto-op,
etc.).

The aim is to introduce a new optional API for SW crypto-devices
to perform crypto processing in a synchronous manner.
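
For illustration only (not a part of these patches), a rough sketch of
the difference between the two datapaths; 'dev_id', 'qp_id', 'ops',
'ses', 'ofs' and 'vec' are assumed to be set up by the caller:

	/* async path: even a SW PMD has to emulate HW-style queueing */
	n = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, num);
	/* ... wait for completions ... */
	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, n);

	/* sync path proposed by this series: runs on the calling lcore */
	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, ses, ofs, &vec);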

Marcin Smoczynski (6):
  cryptodev: introduce cpu crypto support API
  crypto/aesni_gcm: cpu crypto support
  security: add cpu crypto action type
  ipsec: introduce support for cpu crypto mode
  examples/ipsec-secgw: cpu crypto support
  examples/ipsec-secgw: cpu crypto testing

 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 ++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 +-
 examples/ipsec-secgw/ipsec.c                  |  12 +-
 examples/ipsec-secgw/ipsec_process.c          | 134 +++++++++------
 examples/ipsec-secgw/sa.c                     |  33 +++-
 examples/ipsec-secgw/test/common_defs.sh      |  21 +++
 examples/ipsec-secgw/test/linux_test4.sh      |  11 +-
 examples/ipsec-secgw/test/linux_test6.sh      |  11 +-
 .../test/trs_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/trs_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/trs_aesctr_sha1_common_defs.sh       |   8 +-
 .../test/tun_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/tun_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/tun_aesctr_sha1_common_defs.sh       |   8 +-
 lib/librte_cryptodev/rte_crypto_sym.h         |  62 ++++++-
 lib/librte_cryptodev/rte_cryptodev.c          |  30 ++++
 lib/librte_cryptodev/rte_cryptodev.h          |  20 +++
 lib/librte_cryptodev/rte_cryptodev_pmd.h      |  19 +++
 .../rte_cryptodev_version.map                 |   1 +
 lib/librte_ipsec/esp_inb.c                    | 154 +++++++++++++++---
 lib/librte_ipsec/esp_outb.c                   | 134 +++++++++++++--
 lib/librte_ipsec/misc.h                       | 118 ++++++++++++++
 lib/librte_ipsec/rte_ipsec.h                  |  18 +-
 lib/librte_ipsec/sa.c                         | 126 +++++++++++---
 lib/librte_ipsec/sa.h                         |  17 ++
 lib/librte_ipsec/ses.c                        |   3 +-
 lib/librte_security/rte_security.h            |   6 +-
 29 files changed, 990 insertions(+), 167 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-15 23:20   ` Ananyev, Konstantin
  2020-01-16 10:11   ` Zhang, Roy Fan
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Add a new API that allows processing crypto operations in a synchronous
manner. Operations are performed on a set of SG arrays.

Sync mode is selected by setting an appropriate flag in the xform type
value. Cryptodevs that support the CPU crypto operation mode have to
advertise the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
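
For illustration only (not a part of this patch), a minimal sketch of
processing a single contiguous buffer with the new API; 'sess' is
assumed to have been created on a device advertising the capability
above:

	static int
	cpu_crypto_one_buf(uint8_t dev_id,
		struct rte_cryptodev_sym_session *sess, void *buf,
		rte_iova_t buf_iova, uint32_t len, void *iv, void *aad,
		void *digest)
	{
		struct rte_crypto_vec seg = {
			.base = buf, .iova = buf_iova, .len = len };
		struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
		int32_t st = 0;
		/* no extra head/tail offsets for auth/cipher */
		union rte_crypto_sym_ofs ofs = { .raw = 0 };
		struct rte_crypto_sym_vec vec = {
			.sgl = &sgl, .iv = &iv, .aad = &aad,
			.digest = &digest, .status = &st, .num = 1,
		};

		if (rte_cryptodev_sym_cpu_crypto_process(dev_id, sess,
				ofs, &vec) != 1)
			return -st; /* status holds errno on failure */
		return 0;
	}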

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_cryptodev/rte_crypto_sym.h         | 62 ++++++++++++++++++-
 lib/librte_cryptodev/rte_cryptodev.c          | 30 +++++++++
 lib/librte_cryptodev/rte_cryptodev.h          | 20 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h      | 19 ++++++
 .../rte_cryptodev_version.map                 |  1 +
 5 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index ffa038dc4..f5dd05ab0 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -25,6 +25,59 @@ extern "C" {
 #include <rte_mempool.h>
 #include <rte_common.h>
 
+/**
+ * Crypto IO Vector (in analogy with struct iovec)
+ * Supposed to be used to pass input/output data buffers for crypto
+ * data-path functions.
+ */
+struct rte_crypto_vec {
+	/** virtual address of the data buffer */
+	void *base;
+	/** IOVA of the data buffer */
+	rte_iova_t iova;
+	/** length of the data buffer */
+	uint32_t len;
+};
+
+struct rte_crypto_sgl {
+	/** start of an array of vectors */
+	struct rte_crypto_vec *vec;
+	/** size of an array of vectors */
+	uint32_t num;
+};
+
+struct rte_crypto_sym_vec {
+	/** array of SGL vectors */
+	struct rte_crypto_sgl *sgl;
+	/** array of pointers to IV */
+	void **iv;
+	/** array of pointers to AAD */
+	void **aad;
+	/** array of pointers to digest */
+	void **digest;
+	/**
+	 * array of statuses for each operation:
+	 *  - 0 on success
+	 *  - errno on error
+	 */
+	int32_t *status;
+	/** number of operations to perform */
+	uint32_t num;
+};
+
+/**
+ * Used by rte_cryptodev_sym_cpu_crypto_process() to specify head/tail
+ * offsets for auth/cipher processing.
+ */
+union rte_crypto_sym_ofs {
+	uint64_t raw;
+	struct {
+		struct {
+			uint16_t head;
+			uint16_t tail;
+		} auth, cipher;
+	} ofs;
+};
 
 /** Symmetric Cipher Algorithms */
 enum rte_crypto_cipher_algorithm {
@@ -425,7 +478,14 @@ enum rte_crypto_sym_xform_type {
 	RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
 	RTE_CRYPTO_SYM_XFORM_AUTH,		/**< Authentication xform */
 	RTE_CRYPTO_SYM_XFORM_CIPHER,		/**< Cipher xform  */
-	RTE_CRYPTO_SYM_XFORM_AEAD		/**< AEAD xform  */
+	RTE_CRYPTO_SYM_XFORM_AEAD,		/**< AEAD xform  */
+
+	RTE_CRYPTO_SYM_XFORM_TYPE_MASK = 0xFFFF,
+	/**< xform type mask value */
+	RTE_CRYPTO_SYM_XFORM_FLAG_MASK = 0xFFFF0000,
+	/**< xform flag mask value */
+	RTE_CRYPTO_SYM_CPU_CRYPTO = 0x80000000,
+	/**< xform flag for cpu-crypto */
 };
 
 /**
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 89aa2ed3e..157fda890 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1616,6 +1616,36 @@ rte_cryptodev_sym_session_get_user_data(
 	return (void *)(sess->sess_data + sess->nb_drivers);
 }
 
+static inline void
+sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		sym_crypto_fill_status(vec, EINVAL);
+		return 0;
+	}
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (*dev->dev_ops->sym_cpu_process == NULL) {
+		sym_crypto_fill_status(vec, ENOTSUP);
+		return 0;
+	}
+
+	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
+}
+
 /** Initialise rte_crypto_op mempool element */
 static void
 rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index c6ffa3b35..8786dfb90 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support encrypted-digest operations where digest is appended to data */
 #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS		(1ULL << 20)
 /**< Support asymmetric session-less operations */
+#define	RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO			(1ULL << 21)
+/**< Support symmetric cpu-crypto processing */
 
 
 /**
@@ -1274,6 +1276,24 @@ void *
 rte_cryptodev_sym_session_get_user_data(
 					struct rte_cryptodev_sym_session *sess);
 
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev_id	The device identifier.
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index fba14f2fa..5d9ee5fef 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
  */
 typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
 		struct rte_cryptodev_asym_session *sess);
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ *
+ */
+typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
+	(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
+	union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+
 
 /** Crypto device operations function pointer table */
 struct rte_cryptodev_ops {
@@ -342,6 +359,8 @@ struct rte_cryptodev_ops {
 	/**< Clear a Crypto sessions private data. */
 	cryptodev_asym_free_session_t asym_session_clear;
 	/**< Clear a Crypto sessions private data. */
+	cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+	/**< process input data synchronously (cpu-crypto). */
 };
 
 
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 1dd1e259a..6e41b4be5 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -71,6 +71,7 @@ EXPERIMENTAL {
 	rte_cryptodev_asym_session_init;
 	rte_cryptodev_asym_xform_capability_check_modlen;
 	rte_cryptodev_asym_xform_capability_check_optype;
+	rte_cryptodev_sym_cpu_crypto_process;
 	rte_cryptodev_sym_get_existing_header_session_size;
 	rte_cryptodev_sym_session_get_user_data;
 	rte_cryptodev_sym_session_pool_create;
-- 
2.17.1



* [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-15 23:16   ` Ananyev, Konstantin
                     ` (2 more replies)
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type Marcin Smoczynski
                   ` (4 subsequent siblings)
  6 siblings, 3 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Add support for CPU crypto mode by introducing the required handler.
The crypto mode (sync/async) is chosen during sym session creation,
depending on whether the appropriate flag is set in the xform type
value.

Authenticated encryption and decryption are supported with tag
generation/verification.
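
For illustration only (not a part of this patch), sync mode would be
requested at session setup time roughly as below; 'key', 'keylen' and
IV_OFFSET are placeholders:

	struct rte_crypto_sym_xform xf;

	memset(&xf, 0, sizeof(xf));
	/* OR the cpu-crypto flag into the xform type to select sync mode */
	xf.type = RTE_CRYPTO_SYM_XFORM_AEAD | RTE_CRYPTO_SYM_CPU_CRYPTO;
	xf.aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
	xf.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
	xf.aead.key.data = key;
	xf.aead.key.length = keylen;
	xf.aead.iv.offset = IV_OFFSET;
	xf.aead.iv.length = 12;
	xf.aead.digest_length = 16;
	xf.aead.aad_length = 8;

	rte_cryptodev_sym_session_init(dev_id, sess, &xf, sess_priv_pool);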

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 +++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
 4 files changed, 169 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index e272f1067..404c0adff 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -65,4 +65,13 @@ struct aesni_gcm_ops {
 	aesni_gcm_finalize_t finalize_dec;
 };
 
+/** GCM per-session operation handlers */
+struct aesni_gcm_session_ops {
+	aesni_gcm_t cipher;
+	aesni_gcm_pre_t pre;
+	aesni_gcm_init_t init;
+	aesni_gcm_update_t update;
+	aesni_gcm_finalize_t finalize;
+};
+
 #endif /* _AESNI_GCM_OPS_H_ */
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1a03be31d..860e9b369 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -25,9 +25,16 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 	const struct rte_crypto_sym_xform *aead_xform;
 	uint8_t key_length;
 	const uint8_t *key;
+	uint32_t xform_type;
+
+	/* check for CPU-crypto mode */
+	xform_type = xform->type;
+	sess->mode = (xform_type & RTE_CRYPTO_SYM_CPU_CRYPTO) ?
+		AESNI_GCM_MODE_SYNC : AESNI_GCM_MODE_ASYNC;
+	xform_type &= RTE_CRYPTO_SYM_XFORM_TYPE_MASK;
 
 	/* AES-GMAC */
-	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
 		auth_xform = xform;
 		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
 			AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
@@ -49,7 +56,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		sess->req_digest_length = auth_xform->auth.digest_length;
 
 	/* AES-GCM */
-	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+	} else if (xform_type == RTE_CRYPTO_SYM_XFORM_AEAD) {
 		aead_xform = xform;
 
 		if (aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_GCM) {
@@ -62,11 +69,24 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		sess->iv.offset = aead_xform->aead.iv.offset;
 		sess->iv.length = aead_xform->aead.iv.length;
 
+		/* setup session handlers */
+		sess->ops.pre = gcm_ops->pre;
+		sess->ops.init = gcm_ops->init;
+
 		/* Select Crypto operation */
-		if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT)
+		if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION;
-		else
+			sess->ops.cipher = gcm_ops->enc;
+			sess->ops.update = gcm_ops->update_enc;
+			sess->ops.finalize = gcm_ops->finalize_enc;
+		}
+		/* op == RTE_CRYPTO_AEAD_OP_DECRYPT */
+		else {
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION;
+			sess->ops.cipher = gcm_ops->dec;
+			sess->ops.update = gcm_ops->update_dec;
+			sess->ops.finalize = gcm_ops->finalize_dec;
+		}
 
 		key_length = aead_xform->aead.key.length;
 		key = aead_xform->aead.key.data;
@@ -78,7 +98,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -ENOTSUP;
 	}
 
-
 	/* IV check */
 	if (sess->iv.length != 16 && sess->iv.length != 12 &&
 			sess->iv.length != 0) {
@@ -356,6 +375,122 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
 	return 0;
 }
 
+static inline void
+aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_encryption(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	if (s->req_digest_length != s->gen_digest_length) {
+		uint8_t tmpdigest[s->gen_digest_length];
+
+		s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+			s->gen_digest_length);
+		memcpy(digest, tmpdigest, s->req_digest_length);
+	} else {
+		s->ops.finalize(&s->gdata_key, gdata_ctx, digest,
+			s->gen_digest_length);
+	}
+
+	return 0;
+}
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_decryption(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	uint8_t tmpdigest[s->gen_digest_length];
+
+	s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+		s->gen_digest_length);
+
+	return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 :
+		EBADMSG;
+}
+
+static inline void
+aesni_gcm_process_gcm_sgl_op(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv, void *aad)
+{
+	uint32_t i;
+
+	/* init crypto operation */
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, aad,
+		(uint64_t)s->aad_length);
+
+	/* update with sgl data */
+	for (i = 0; i < sgl->num; i++) {
+		struct rte_crypto_vec *vec = &sgl->vec[i];
+
+		s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base,
+			vec->len);
+	}
+}
+
+/** Process CPU crypto bulk operations */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess,
+	__rte_unused union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	void *sess_priv;
+	struct aesni_gcm_session *s;
+	uint32_t processed;
+	uint32_t i;
+
+	sess_priv = get_sym_session_private_data(sess, dev->driver_id);
+	if (unlikely(sess_priv == NULL)) {
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+
+	s = sess_priv;
+	if (unlikely(s->mode != AESNI_GCM_MODE_SYNC)) {
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		struct gcm_context_data gdata_ctx;
+		int32_t status;
+
+		aesni_gcm_process_gcm_sgl_op(s, &gdata_ctx, &vec->sgl[i],
+			vec->iv[i], vec->aad[i]);
+
+		switch (s->op) {
+		case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+			status = aesni_gcm_sgl_op_finalize_encryption(s,
+				&gdata_ctx, vec->digest[i]);
+			break;
+
+		case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+			status = aesni_gcm_sgl_op_finalize_decryption(s,
+				&gdata_ctx, vec->digest[i]);
+			break;
+
+		default:
+			status = EINVAL;
+		}
+
+		vec->status[i] = status;
+		if (status == 0)
+			processed++;
+	}
+
+	return processed;
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -527,7 +662,8 @@ aesni_gcm_create(const char *name,
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO;
 
 	/* Check CPU for support for AES instruction set */
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
@@ -672,7 +808,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
 RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver,
 		cryptodev_driver_id);
 
-
 RTE_INIT(aesni_gcm_init_log)
 {
 	aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm");
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 2f66c7c58..5228d98b1 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -331,9 +331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
 		.queue_pair_release	= aesni_gcm_pmd_qp_release,
 		.queue_pair_count	= aesni_gcm_pmd_qp_count,
 
+		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
+
 		.sym_session_get_size	= aesni_gcm_pmd_sym_session_get_size,
 		.sym_session_configure	= aesni_gcm_pmd_sym_session_configure,
 		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
 };
 
 struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops;
+
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2039adb53..dc8d3c653 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -73,6 +73,11 @@ enum aesni_gcm_operation {
 	AESNI_GMAC_OP_VERIFY
 };
 
+enum aesni_gcm_mode {
+	AESNI_GCM_MODE_ASYNC,
+	AESNI_GCM_MODE_SYNC
+};
+
 /** AESNI GCM private session structure */
 struct aesni_gcm_session {
 	struct {
@@ -90,8 +95,12 @@ struct aesni_gcm_session {
 	/**< GCM operation type */
 	enum aesni_gcm_key key;
 	/**< GCM key type */
+	enum aesni_gcm_mode mode;
+	/**< Sync/async mode */
 	struct gcm_key_data gdata_key;
 	/**< GCM parameters */
+	struct aesni_gcm_session_ops ops;
+	/**< Session handlers */
 };
 
 
@@ -109,10 +118,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops,
 		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform);
 
-
-/**
- * Device specific operations function pointer structure */
+/* Device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops;
 
+/** CPU crypto bulk process handler */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
 
 #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-15 22:49   ` Ananyev, Konstantin
  2020-01-16 10:01   ` Zhang, Roy Fan
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Introduce a CPU crypto action type that allows differentiating between
regular asynchronous 'none security' sessions and synchronous, CPU
crypto accelerated sessions.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_security/rte_security.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 546779df2..309f7311c 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -307,10 +307,14 @@ enum rte_security_session_action_type {
 	/**< All security protocol processing is performed inline during
 	 * transmission
 	 */
-	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
 	/**< All security protocol processing including crypto is performed
 	 * on a lookaside accelerator
 	 */
+	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+	/**< Crypto processing for security protocol is processed by CPU
+	 * synchronously
+	 */
 };
 
 /** Security session protocol definition */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
                   ` (2 preceding siblings ...)
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-16 10:53   ` Zhang, Roy Fan
  2020-01-16 10:53   ` Zhang, Roy Fan
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Update the library to handle the CPU crypto security mode, which
utilizes cryptodev's synchronous, CPU accelerated crypto operations.
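
For illustration only (not a part of this patch), the resulting sync
datapath, assuming 'ss' is a session of the CPU_CRYPTO action type:

	/* crypto/auth is done inline on the calling lcore ... */
	k = rte_ipsec_pkt_cpu_prepare(ss, mb, num);

	/* ... so the packets can be finalized straight away */
	k = rte_ipsec_pkt_process(ss, mb, k);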

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_ipsec/esp_inb.c   | 154 ++++++++++++++++++++++++++++++-----
 lib/librte_ipsec/esp_outb.c  | 134 +++++++++++++++++++++++++++---
 lib/librte_ipsec/misc.h      | 118 +++++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec.h |  18 +++-
 lib/librte_ipsec/sa.c        | 126 ++++++++++++++++++++++------
 lib/librte_ipsec/sa.h        |  17 ++++
 lib/librte_ipsec/ses.c       |   3 +-
 7 files changed, 515 insertions(+), 55 deletions(-)

diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 5c653dd39..58b3dec1b 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop,
 	}
 }
 
+static inline uint32_t
+inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *pofs, uint32_t plen, void *iv)
+{
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint64_t *ivp;
+	uint32_t clen;
+
+	ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+		*pofs + sizeof(struct rte_esp_hdr));
+	clen = 0;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = (struct aead_gcm_iv *)iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CBC:
+	case ALGO_TYPE_3DES_CBC:
+		copy_iv(iv, ivp, sa->iv_len);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = (struct aesctr_cnt_blk *)iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen - sa->ctp.auth.length;
+	return clen;
+}
+
 /*
  * Helper function for prepare() to deal with situation when
  * ICV is spread by two segments. Tries to move ICV completely into the
@@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	}
 }
 
-/*
- * setup/update packet data and metadata for ESP inbound tunnel case.
- */
-static inline int32_t
-inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
-	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+static inline int
+inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc)
 {
 	int32_t rc;
 	uint64_t sqn;
-	uint32_t clen, icv_len, icv_ofs, plen;
-	struct rte_mbuf *ml;
 	struct rte_esp_hdr *esph;
 
 	esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen);
@@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	sqn = rte_be_to_cpu_32(esph->seq);
 	if (IS_ESN(sa))
 		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+	*sqc = rte_cpu_to_be_64(sqn);
 
+	/* check IPsec window */
 	rc = esn_inb_check_sqn(rsn, sa, sqn);
-	if (rc != 0)
-		return rc;
 
-	sqn = rte_cpu_to_be_64(sqn);
+	return rc;
+}
+
+/* prepare packet for upcoming processing */
+static inline int32_t
+inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	uint32_t clen, icv_len, icv_ofs, plen;
+	struct rte_mbuf *ml;
 
 	/* start packet manipulation */
 	plen = mb->pkt_len;
@@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 
 	icv_ofs += sa->sqh_len;
 
-	/* we have to allocate space for AAD somewhere,
+	/*
+	 * we have to allocate space for AAD somewhere,
 	 * right now - just use free trailing space at the last segment.
 	 * Would probably be more convenient to reserve space for AAD
 	 * inside rte_crypto_op itself
@@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	mb->pkt_len += sa->sqh_len;
 	ml->data_len += sa->sqh_len;
 
-	inb_pkt_xprepare(sa, sqn, icv);
 	return plen;
 }
 
+static inline int32_t
+inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+{
+	int rc;
+	rte_be64_t sqn;
+
+	rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn);
+	if (rc != 0)
+		return rc;
+
+	rc = inb_prepare(sa, mb, hlen, icv);
+	if (rc < 0)
+		return rc;
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return rc;
+}
+
 /*
  * setup/update packets and crypto ops for ESP inbound case.
  */
@@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc);
 			k++;
-		} else
+		} else {
 			dr[i - k] = i;
+			rte_errno = -rc;
+		}
 	}
 
 	rsn_release(sa, rsn);
 
 	/* copy not prepared mbufs beyond good ones */
-	if (k != num && k != 0) {
+	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
-		rte_errno = EBADMSG;
-	}
 
 	return k;
 }
@@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return k;
 }
 
-
 /*
  * *process* function for tunnel packets
  */
@@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
 
-	/* update SQN and replay winow */
+	/* update SQN and replay window */
 	n = esp_inb_rsn_update(sa, sqn, dr, k);
 
 	/* handle mbufs with wrong SQN */
@@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return n;
 }
 
+/*
+ * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO
+ * (synchronous mode).
+ */
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	/* grab rsn lock */
+	rsn = rsn_acquire(sa);
+
+	/* do preparation for all packets */
+	for (i = 0, k = 0; i != num; i++) {
+
+		/* calculate ESP header offset */
+		l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len;
+
+		/* prepare ESP packet for processing */
+		rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv);
+		if (rc >= 0) {
+			/* get encrypted data offset and length */
+			clen[k] = inb_cpu_crypto_prepare(sa, mb[i],
+				l4ofs + k, rc, ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* release rsn lock */
+	rsn_release(sa, rsn);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
 /*
  * process group of ESP inbound tunnel packets.
  */
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index e983b25a3..faac831d2 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -15,6 +15,9 @@
 #include "misc.h"
 #include "pad.h"
 
+typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv, uint8_t sqh_len);
 
 /*
  * helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = sa->proto;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 static inline int32_t
 outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	uint32_t l2len, uint32_t l3len, union sym_op_data *icv,
-	uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len)
 {
 	uint8_t np;
 	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	struct rte_esp_tail *espt;
 	char *ph, *pt;
 	uint64_t *iv;
+	uint32_t l2len, l3len;
+
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
 
 	uhlen = l2len + l3len;
 	plen = mb->pkt_len - uhlen;
@@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = np;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv,
-					  sa->sqh_len);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
+				  sa->sqh_len);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return k;
 }
 
+
+static inline uint32_t
+outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
+	uint32_t plen, void *iv)
+{
+	uint64_t *ivp = iv;
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint32_t clen;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen + sa->ctp.auth.length;
+	return clen;
+}
+
+static uint16_t
+cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num,
+		esp_outb_prepare_t prepare, uint32_t cofs_mask)
+{
+	int32_t rc;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	uint32_t i, k, n;
+	uint32_t l2, l3;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	for (i = 0, k = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		/* calculate ESP header offset */
+		l4ofs[k] = (l2 + l3) & cofs_mask;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(ivbuf[k], sqc);
+
+		/* try to update the packet itself */
+		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+
+		/* success, proceed with preparations */
+		if (rc >= 0) {
+
+			outb_pkt_xprepare(sa, sqc, &icv);
+
+			/* get encrypted data offset and length */
+			clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc,
+				ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		move_bad_mbufs(mb, dr, n, n - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0);
+}
+
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare,
+		UINT32_MAX);
+}
+
 /*
  * process outbound packets for SA with ESN support,
  * for algorithms that require SQN.hibits to be implictly included
@@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n, l2, l3;
+	uint32_t i, k, n;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
@@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	k = 0;
 	for (i = 0; i != n; i++) {
 
-		l2 = mb[i]->l2_len;
-		l3 = mb[i]->l3_len;
-
 		sqc = rte_cpu_to_be_64(sqn + i);
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
-				l2, l3, &icv, 0);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
 
 		k += (rc >= 0);
 
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index fe4641bfc..6443e0d23 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -105,4 +105,122 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 	mb->pkt_len -= len;
 }
 
+static inline int
+mbuf_to_cryptovec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len,
+	struct rte_crypto_vec vec[], uint32_t num)
+{
+	uint32_t i;
+	struct rte_mbuf *nseg;
+	uint32_t left;
+	uint32_t seglen;
+
+	/* assuming that requested data starts in the first segment */
+	RTE_ASSERT(mb->data_len > ofs);
+
+	if (mb->nb_segs > num)
+		return -mb->nb_segs;
+
+	vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs);
+
+	/* whole data lies in the first segment */
+	seglen = mb->data_len - ofs;
+	if (data_len <= seglen) {
+		vec[0].len = data_len;
+		return 1;
+	}
+
+	/* data spread across segments */
+	vec[0].len = seglen;
+	left = data_len - seglen;
+	for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
+		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
+
+		seglen = nseg->data_len;
+		if (left <= seglen) {
+			/* whole requested data is completed */
+			vec[i].len = left;
+			left = 0;
+			break;
+		}
+
+		/* use whole segment */
+		vec[i].len = seglen;
+		left -= seglen;
+	}
+
+	RTE_ASSERT(left == 0);
+	return i + 1;
+}
+
+/*
+ * process packets using sync crypto engine
+ */
+static inline void
+cpu_crypto_bulk(const struct rte_ipsec_session *ss,
+	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
+	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	uint32_t clen[], uint32_t num)
+{
+	uint32_t i, j, n;
+	int32_t vcnt, vofs;
+	int32_t st[num];
+	struct rte_crypto_sgl vecpkt[num];
+	struct rte_crypto_vec vec[UINT8_MAX];
+	struct rte_crypto_sym_vec symvec;
+
+	const uint32_t vnum = RTE_DIM(vec);
+
+	j = 0, n = 0;
+	vofs = 0;
+	for (i = 0; i != num; i++) {
+
+		vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], &vec[vofs],
+			vnum - vofs);
+
+		/* not enough space in vec[] to hold all segments */
+		if (vcnt < 0) {
+			/* fill the request structure */
+			symvec.sgl = &vecpkt[j];
+			symvec.iv = &iv[j];
+			symvec.aad = &aad[j];
+			symvec.digest = &dgst[j];
+			symvec.status = &st[j];
+			symvec.num = i - j;
+
+			/* flush vec array and try again */
+			n += rte_cryptodev_sym_cpu_crypto_process(
+				ss->crypto.dev_id, ss->crypto.ses, ofs,
+				&symvec);
+			vofs = 0;
+			vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], vec,
+				vnum);
+			RTE_ASSERT(vcnt > 0);
+			j = i;
+		}
+
+		vecpkt[i].vec = &vec[vofs];
+		vecpkt[i].num = vcnt;
+		vofs += vcnt;
+	}
+
+	/* fill the request structure */
+	symvec.sgl = &vecpkt[j];
+	symvec.iv = &iv[j];
+	symvec.aad = &aad[j];
+	symvec.digest = &dgst[j];
+	symvec.status = &st[j];
+	symvec.num = i - j;
+
+	n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id,
+		ss->crypto.ses, ofs, &symvec);
+
+	j = num - n;
+	for (i = 0; j != 0 && i != num; i++) {
+		if (st[i] != 0) {
+			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			j--;
+		}
+	}
+}
+
 #endif /* _MISC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index f3b1f936b..fd685887c 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -33,10 +33,15 @@ struct rte_ipsec_session;
  *   (see rte_ipsec_pkt_process for more details).
  */
 struct rte_ipsec_sa_pkt_func {
-	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+	union {
+		uint16_t (*async)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				struct rte_crypto_op *cop[],
 				uint16_t num);
+		uint16_t (*sync)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+	} prepare;
 	uint16_t (*process)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				uint16_t num);
@@ -62,6 +67,7 @@ struct rte_ipsec_session {
 	union {
 		struct {
 			struct rte_cryptodev_sym_session *ses;
+			uint8_t dev_id;
 		} crypto;
 		struct {
 			struct rte_security_session *ses;
@@ -114,7 +120,15 @@ static inline uint16_t
 rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
 {
-	return ss->pkt_func.prepare(ss, mb, cop, num);
+	return ss->pkt_func.prepare.async(ss, mb, cop, num);
+}
+
+__rte_experimental
+static inline uint16_t
+rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	return ss->pkt_func.prepare.sync(ss, mb, num);
 }
 
 /**
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6f1d92c3c..de6ab46dd 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -33,17 +33,21 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
 	const struct rte_ipsec_sa_prm *prm)
 {
 	struct rte_crypto_sym_xform *xf, *xfn;
+	uint32_t xftype, xfntype;
 
 	memset(xform, 0, sizeof(*xform));
 
 	xf = prm->crypto_xform;
 	if (xf == NULL)
 		return -EINVAL;
+	xftype = xf->type & RTE_CRYPTO_SYM_XFORM_TYPE_MASK;
 
 	xfn = xf->next;
+	xfntype = xfn != NULL ? xfn->type & RTE_CRYPTO_SYM_XFORM_TYPE_MASK :
+		RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED;
 
 	/* for AEAD just one xform required */
-	if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+	if (xftype == RTE_CRYPTO_SYM_XFORM_AEAD) {
 		if (xfn != NULL)
 			return -EINVAL;
 		xform->aead = &xf->aead;
@@ -56,8 +60,8 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
 	} else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
 
 		/* wrong order or no cipher */
-		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
-				xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+		if (xfn == NULL || xftype != RTE_CRYPTO_SYM_XFORM_AUTH ||
+				xfntype != RTE_CRYPTO_SYM_XFORM_CIPHER)
 			return -EINVAL;
 
 		xform->auth = &xf->auth;
@@ -66,8 +70,8 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
 	} else {
 
 		/* wrong order or no auth */
-		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
-				xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+		if (xfn == NULL || xftype != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+				xfntype != RTE_CRYPTO_SYM_XFORM_AUTH)
 			return -EINVAL;
 
 		xform->cipher = &xf->cipher;
@@ -243,10 +247,26 @@ static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
 	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = 0;
-	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
 	sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len;
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = 0;
+		sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+		sa->cofs.ofs.cipher.tail = sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
 }
 
 /*
@@ -269,13 +289,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 
 	sa->sqn.outb.raw = 1;
 
-	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = hlen;
-	sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
-		sa->iv_len + sa->sqh_len;
-
 	algo_type = sa->algo_type;
 
+	/*
+	 * Setup auth and cipher length and offset.
+	 * these params may differ with new algorithms support
+	 */
+
 	switch (algo_type) {
 	case ALGO_TYPE_AES_GCM:
 	case ALGO_TYPE_AES_CTR:
@@ -286,11 +306,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 		break;
 	case ALGO_TYPE_AES_CBC:
 	case ALGO_TYPE_3DES_CBC:
-		sa->ctp.cipher.offset = sa->hdr_len +
-			sizeof(struct rte_esp_hdr);
+		sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
 		sa->ctp.cipher.length = sa->iv_len;
 		break;
 	}
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = hlen;
+		sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
+			sa->iv_len + sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
+	sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
+			(sa->ctp.cipher.offset + sa->ctp.cipher.length);
 }
 
 /*
@@ -544,9 +583,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss,
  * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
  * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
  */
-static uint16_t
-pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
-	uint16_t num)
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
 {
 	uint32_t i, k;
 	uint32_t dr[num];
@@ -588,21 +627,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
 	switch (sa->type & msk) {
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_tun_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_trs_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_outb_tun_prepare;
+		pf->prepare.async = esp_outb_tun_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_outb_trs_prepare;
+		pf->prepare.async = esp_outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_outb_tun_pkt_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_outb_trs_pkt_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
@@ -660,7 +737,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	int32_t rc;
 
 	rc = 0;
-	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { {0} };
 
 	switch (ss->type) {
 	case RTE_SECURITY_ACTION_TYPE_NONE:
@@ -677,9 +754,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 			pf->process = inline_proto_outb_pkt_process;
 		break;
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
-		pf->prepare = lksd_proto_prepare;
+		pf->prepare.async = lksd_proto_prepare;
 		pf->process = pkt_flag_process;
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		rc = cpu_crypto_pkt_func_select(sa, pf);
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 51e69ad05..a16238301 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -88,6 +88,8 @@ struct rte_ipsec_sa {
 		union sym_op_ofslen cipher;
 		union sym_op_ofslen auth;
 	} ctp;
+	/* cpu-crypto offsets */
+	union rte_crypto_sym_ofs cofs;
 	/* tx_offload template for tunnel mbuf */
 	struct {
 		uint64_t msk;
@@ -156,6 +158,10 @@ uint16_t
 inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 /* outbound processing */
 
 uint16_t
@@ -170,6 +176,10 @@ uint16_t
 esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num);
 
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num);
+
 uint16_t
 inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
@@ -182,4 +192,11 @@ uint16_t
 inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
index 82c765a33..7a123e2d9 100644
--- a/lib/librte_ipsec/ses.c
+++ b/lib/librte_ipsec/ses.c
@@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss)
 	if (ss == NULL || ss->sa == NULL)
 		return -EINVAL;
 
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+		ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		if (ss->crypto.ses == NULL)
 			return -EINVAL;
 	} else {
-- 
2.17.1



* [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
                   ` (3 preceding siblings ...)
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-16 10:54   ` Zhang, Roy Fan
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
  6 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has
been introduced in the configuration, allowing use of the
aforementioned acceleration.

Legacy mode is not currently supported.
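
For illustration only (not a part of this patch), an SA rule selecting
the new type might look as below (hypothetical SPI, key and addresses):

	sa out 5 aead_algo aes-128-gcm \
	aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
	mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 type cpu-crypto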

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 examples/ipsec-secgw/ipsec.c         |  12 ++-
 examples/ipsec-secgw/ipsec_process.c | 134 +++++++++++++++++----------
 examples/ipsec-secgw/sa.c            |  33 +++++--
 3 files changed, 123 insertions(+), 56 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index d4b57121a..55b83bb8d 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -10,6 +10,7 @@
 #include <rte_crypto.h>
 #include <rte_security.h>
 #include <rte_cryptodev.h>
+#include <rte_ipsec.h>
 #include <rte_ethdev.h>
 #include <rte_mbuf.h>
 #include <rte_hash.h>
@@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			ipsec_ctx->tbl[cdev_id_qp].id,
 			ipsec_ctx->tbl[cdev_id_qp].qp);
 
-	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE &&
+		ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		struct rte_security_session_conf sess_conf = {
 			.action_type = ips->type,
 			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
@@ -126,6 +128,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			return -1;
 		}
 	} else {
+		ips->crypto.dev_id = ipsec_ctx->tbl[cdev_id_qp].id;
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
@@ -476,6 +479,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			rte_security_attach_session(&priv->cop,
 				ips->security.ses);
 			break;
+
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by the"
+					" legacy mode.");
+			rte_pktmbuf_free(pkts[i]);
+			continue;
+
 		case RTE_SECURITY_ACTION_TYPE_NONE:
 
 			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 2eb5c8b34..576a9fa8a 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx,
 	int32_t rc;
 
 	/* setup crypto section */
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+			ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		RTE_ASSERT(ss->crypto.ses == NULL);
 		rc = create_lookaside_session(ctx, sa, ss);
 		if (rc != 0)
@@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 	return k;
 }
 
+/*
+ * helper routine for inline and cpu(synchronous) processing
+ * this is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt().
+ * Should be removed in future.
+ */
+static inline void
+prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint32_t j;
+	struct ipsec_mbuf_metadata *priv;
+
+	for (j = 0; j != cnt; j++) {
+		priv = get_priv(mb[j]);
+		priv->sa = sa;
+	}
+}
+
+/*
+ * finish processing of packets successfully decrypted by an inline processor
+ */
+static uint32_t
+ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_process(ips, mb, cnt);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
+/*
+ * process packets synchronously
+ */
+static uint32_t
+ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt);
+	k = rte_ipsec_pkt_process(ips, mb, k);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
 /*
  * Process ipsec packets.
  * If packet belong to SA that is subject of inline-crypto,
@@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 void
 ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 {
-	uint64_t satp;
-	uint32_t i, j, k, n;
+	uint32_t i, k, n;
 	struct ipsec_sa *sa;
-	struct ipsec_mbuf_metadata *priv;
 	struct rte_ipsec_group *pg;
 	struct rte_ipsec_session *ips;
 	struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 	n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num);
 
 	for (i = 0; i != n; i++) {
+
 		pg = grp + i;
 		sa = ipsec_mask_saptr(pg->id.ptr);
 
-		ips = ipsec_get_primary_session(sa);
+		/* fallback to cryptodev with RX packets which inline
+		 * processor was unable to process
+		 */
+		if (sa != NULL)
+			ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ?
+				ipsec_get_fallback_session(sa) :
+				ipsec_get_primary_session(sa);
 
 		/* no valid HW session for that SA, try to create one */
 		if (sa == NULL || (ips->crypto.ses == NULL &&
@@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 			k = 0;
 
 		/* process packets inline */
-		else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
-				ips->type ==
-				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) {
-
-			/* get SA type */
-			satp = rte_ipsec_sa_type(ips->sa);
-
-			/*
-			 * This is just to satisfy inbound_sa_check()
-			 * and get_hop_for_offload_pkt().
-			 * Should be removed in future.
-			 */
-			for (j = 0; j != pg->cnt; j++) {
-				priv = get_priv(pg->m[j]);
-				priv->sa = sa;
+		else {
+			switch (ips->type) {
+			/* enqueue packets to crypto dev */
+			case RTE_SECURITY_ACTION_TYPE_NONE:
+			case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+				k = ipsec_prepare_crypto_group(ctx, sa, ips,
+					pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+			case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+				k = ipsec_process_inline_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+				k = ipsec_process_cpu_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			default:
+				k = 0;
 			}
-
-			/* fallback to cryptodev with RX packets which inline
-			 * processor was unable to process
-			 */
-			if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) {
-				/* offload packets to cryptodev */
-				struct rte_ipsec_session *fallback;
-
-				fallback = ipsec_get_fallback_session(sa);
-				if (fallback->crypto.ses == NULL &&
-					fill_ipsec_session(fallback, ctx, sa)
-					!= 0)
-					k = 0;
-				else
-					k = ipsec_prepare_crypto_group(ctx, sa,
-						fallback, pg->m, pg->cnt);
-			} else {
-				/* finish processing of packets successfully
-				 * decrypted by an inline processor
-				 */
-				k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt);
-				copy_to_trf(trf, satp, pg->m, k);
-
-			}
-		/* enqueue packets to crypto dev */
-		} else {
-			k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m,
-				pg->cnt);
 		}
 
 		/* drop packets that cannot be enqueued/processed */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 7f046e3ed..cfca4fe8f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -577,6 +577,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 				RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
 			else if (strcmp(tokens[ti], "no-offload") == 0)
 				ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
+			else if (strcmp(tokens[ti], "cpu-crypto") == 0)
+				ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
 			else {
 				APP_CHECK(0, status, "Invalid input \"%s\"",
 						tokens[ti]);
@@ -670,10 +672,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 	if (status->status < 0)
 		return;
 
-	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0))
 		printf("Missing portid option, falling back to non-offload\n");
 
-	if (!type_p || !portid_p) {
+	if (!type_p || (!portid_p && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) {
 		ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		rule->portid = -1;
 	}
@@ -759,15 +763,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
 		printf("lookaside-protocol-offload ");
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		printf("cpu-crypto-accelerated");
+		break;
 	}
 
 	fallback_ips = &sa->sessions[IPSEC_SESSION_FALLBACK];
 	if (fallback_ips != NULL && sa->fallback_sessions > 0) {
 		printf("inline fallback: ");
-		if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		switch (fallback_ips->type) {
+		case RTE_SECURITY_ACTION_TYPE_NONE:
 			printf("lookaside-none");
-		else
+			break;
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			printf("cpu-crypto-accelerated");
+			break;
+		default:
 			printf("invalid");
+			break;
+		}
 	}
 	printf("\n");
 }
@@ -966,7 +980,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 		}
 
-
 		switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) {
 		case IP4_TUNNEL:
 			sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -1017,7 +1030,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					return -EINVAL;
 				}
 			}
-			print_one_sa_rule(sa, inbound);
 		} else {
 			switch (sa->cipher_algo) {
 			case RTE_CRYPTO_CIPHER_NULL:
@@ -1082,9 +1094,16 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
 			sa_ctx->xf[idx].b.next = NULL;
 			sa->xforms = &sa_ctx->xf[idx].a;
+		}
 
-			print_one_sa_rule(sa, inbound);
+		if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+			sa_ctx->xf[idx].a.type |= RTE_CRYPTO_SYM_CPU_CRYPTO;
+			if (sa_ctx->xf[idx].b.type != 0)
+				sa_ctx->xf[idx].b.type |=
+					RTE_CRYPTO_SYM_CPU_CRYPTO;
 		}
+
+		print_one_sa_rule(sa, inbound);
 	}
 
 	return 0;
-- 
2.17.1



* [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
                   ` (4 preceding siblings ...)
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-01-15 18:28 ` Marcin Smoczynski
  2020-01-16 10:54   ` Zhang, Roy Fan
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
  6 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-15 18:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau
  Cc: dev, Marcin Smoczynski

Enable cpu-crypto mode testing by adding a dedicated environment
variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows
running the test scenarios with cpu crypto acceleration.
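
A usage note: exporting CRYPTO_PRIM_TYPE='type cpu-crypto' before running
linux_test4.sh or linux_test6.sh makes select_mode() below append that
value to SGW_CFG_XPRM, so every 'sa' line in the *_common_defs.sh configs
effectively gains a trailing 'type cpu-crypto'. Note that select_mode()
applies it only when SGW_CMD_XPRM is also non-empty.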

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 examples/ipsec-secgw/test/common_defs.sh      | 21 +++++++++++++++++++
 examples/ipsec-secgw/test/linux_test4.sh      | 11 +---------
 examples/ipsec-secgw/test/linux_test6.sh      | 11 +---------
 .../test/trs_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/trs_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/trs_aesctr_sha1_common_defs.sh       |  8 +++----
 .../test/tun_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/tun_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/tun_aesctr_sha1_common_defs.sh       |  8 +++----
 9 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh
index 4aac4981a..6b6ae06f3 100644
--- a/examples/ipsec-secgw/test/common_defs.sh
+++ b/examples/ipsec-secgw/test/common_defs.sh
@@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc}
 DEF_MTU_LEN=1400
 DEF_PING_LEN=1200
 
+#update operation mode based on env var values
+select_mode()
+{
+	# select sync/async mode
+	if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "${CRYPTO_PRIM_TYPE} is enabled"
+		SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}"
+	fi
+
+	#make linux generate fragmented packets
+	if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "multi-segment test is enabled"
+		SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
+		PING_LEN=5000
+		MTU_LEN=1500
+	else
+		PING_LEN=${DEF_PING_LEN}
+		MTU_LEN=${DEF_MTU_LEN}
+	fi
+}
+
 #setup mtu on local iface
 set_local_mtu()
 {
diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh
index 760451000..fb8ae1023 100644
--- a/examples/ipsec-secgw/test/linux_test4.sh
+++ b/examples/ipsec-secgw/test/linux_test4.sh
@@ -45,16 +45,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh
index 479f29be3..dbcca7936 100644
--- a/examples/ipsec-secgw/test/linux_test6.sh
+++ b/examples/ipsec-secgw/test/linux_test6.sh
@@ -46,16 +46,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
index 3c5c18afd..62118bb3f 100644
--- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,7 +48,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo 3des-cbc \
@@ -56,7 +56,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
index 9dbdd1765..7ddeb2b5a 100644
--- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
index 6aba680f9..f0178355a 100644
--- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
index 7c3226f84..d8869fad0 100644
--- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,14 +48,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
index bdf5938a0..2616926b2 100644
--- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
index 06f2ef0c6..06b561fd7 100644
--- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
-- 
2.17.1



* Re: [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-15 22:49   ` Ananyev, Konstantin
  2020-01-16 10:01   ` Zhang, Roy Fan
  1 sibling, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-15 22:49 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu
  Cc: dev


> 
> Introduce a CPU crypto action type, allowing differentiation between
> regular async 'none security' and synchronous, CPU crypto accelerated
> sessions.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_security/rte_security.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 546779df2..309f7311c 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -307,10 +307,14 @@ enum rte_security_session_action_type {
>  	/**< All security protocol processing is performed inline during
>  	 * transmission
>  	 */
> -	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> +	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
>  	/**< All security protocol processing including crypto is performed
>  	 * on a lookaside accelerator
>  	 */
> +	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> +	/**< Crypto processing for security protocol is processed by CPU
> +	 * synchronously
> +	 */
>  };
> 
>  /** Security session protocol definition */
> --
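
For reference, combined with the librte_ipsec changes later in this series,
an application selects the new action type roughly as in the sketch below
('sa' is a prepared rte_ipsec_sa, 'crypto_ses' an already created cryptodev
sym session; see rte_ipsec.h for the session layout):

	struct rte_ipsec_session ss;

	memset(&ss, 0, sizeof(ss));
	ss.sa = sa;
	ss.type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
	ss.crypto.ses = crypto_ses;
	rte_ipsec_session_prepare(&ss);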

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1



* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-15 23:16   ` Ananyev, Konstantin
  2020-01-16 10:00   ` Zhang, Roy Fan
  2020-01-21 13:53   ` De Lara Guarch, Pablo
  2 siblings, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-15 23:16 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu
  Cc: dev



> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an
> appropriate flag is set in an xform type number.
> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1



* Re: [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-01-15 23:20   ` Ananyev, Konstantin
  2020-01-16 10:11   ` Zhang, Roy Fan
  1 sibling, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-15 23:20 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu
  Cc: dev



Hi Marcin,
 
> Add a new API allowing crypto operations to be processed in a synchronous
> manner. Operations are performed on a set of SG arrays.
> 
> Sync mode is selected by setting an appropriate flag in an xform
> type number. Cryptodevs which allow the CPU crypto operation mode have to
> use the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
> 
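
Just to illustrate the intended usage for a single flat buffer and one AEAD
operation, a caller-side sketch could look like the code below (error
handling omitted; 'buf', 'len', 'iv_ptr', 'aad_ptr', 'tag_ptr', 'dev_id'
and 'sess' are assumed to exist):

	struct rte_crypto_vec vec[1];
	struct rte_crypto_sgl sgl;
	struct rte_crypto_sym_vec symvec;
	union rte_crypto_sym_ofs ofs;
	void *iv[1], *aad[1], *dig[1];
	int32_t st[1];
	uint32_t n;

	/* one flat data buffer */
	vec[0].base = buf;
	vec[0].len = len;
	sgl.vec = vec;
	sgl.num = 1;

	/* per-op metadata arrays, one entry each */
	iv[0] = iv_ptr;
	aad[0] = aad_ptr;
	dig[0] = tag_ptr;

	symvec.sgl = &sgl;
	symvec.iv = iv;
	symvec.aad = aad;
	symvec.digest = dig;
	symvec.status = st;
	symvec.num = 1;

	/* no extra head/tail bytes to skip */
	ofs.raw = 0;

	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &symvec);
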
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_cryptodev/rte_crypto_sym.h         | 62 ++++++++++++++++++-
>  lib/librte_cryptodev/rte_cryptodev.c          | 30 +++++++++
>  lib/librte_cryptodev/rte_cryptodev.h          | 20 ++++++
>  lib/librte_cryptodev/rte_cryptodev_pmd.h      | 19 ++++++
>  .../rte_cryptodev_version.map                 |  1 +
>  5 files changed, 131 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
> index ffa038dc4..f5dd05ab0 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -25,6 +25,59 @@ extern "C" {
>  #include <rte_mempool.h>
>  #include <rte_common.h>
> 
> +/**
> + * Crypto IO Vector (in analogy with struct iovec)
> + * Supposed to be used to pass input/output data buffers for crypto data-path
> + * functions.
> + */
> +struct rte_crypto_vec {
> +	/** virtual address of the data buffer */
> +	void *base;
> +	/** IOVA of the data buffer */
> +	rte_iova_t *iova;
> +	/** length of the data buffer */
> +	uint32_t len;
> +};
> +
> +struct rte_crypto_sgl {
> +	/** start of an array of vectors */
> +	struct rte_crypto_vec *vec;
> +	/** size of an array of vectors */
> +	uint32_t num;
> +};
> +
> +struct rte_crypto_sym_vec {
> +	/** array of SGL vectors */
> +	struct rte_crypto_sgl *sgl;
> +	/** array of pointers to IV */
> +	void **iv;
> +	/** array of pointers to AAD */
> +	void **aad;
> +	/** array of pointers to digest */
> +	void **digest;
> +	/**
> +	 * array of statuses for each operation:
> +	 *  - 0 on success
> +	 *  - errno on error
> +	 */
> +	int32_t *status;
> +	/** number of operations to perform */
> +	uint32_t num;
> +};
> +
> +/**
> + * Used by rte_cryptodev_sym_cpu_crypto_process() to specify head/tail offsets
> + * for auth/cipher processing.
> + */
> +union rte_crypto_sym_ofs {
> +	uint64_t raw;
> +	struct {
> +		struct {
> +			uint16_t head;
> +			uint16_t tail;
> +		} auth, cipher;
> +	} ofs;
> +};
> 
>  /** Symmetric Cipher Algorithms */
>  enum rte_crypto_cipher_algorithm {
> @@ -425,7 +478,14 @@ enum rte_crypto_sym_xform_type {
>  	RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
>  	RTE_CRYPTO_SYM_XFORM_AUTH,		/**< Authentication xform */
>  	RTE_CRYPTO_SYM_XFORM_CIPHER,		/**< Cipher xform  */
> -	RTE_CRYPTO_SYM_XFORM_AEAD		/**< AEAD xform  */
> +	RTE_CRYPTO_SYM_XFORM_AEAD,		/**< AEAD xform  */
> +
> +	RTE_CRYPTO_SYM_XFORM_TYPE_MASK = 0xFFFF,
> +	/**< xform type mask value */
> +	RTE_CRYPTO_SYM_XFORM_FLAG_MASK = 0xFFFF0000,
> +	/**< xform flag mask value */
> +	RTE_CRYPTO_SYM_CPU_CRYPTO = 0x80000000,
> +	/**< xform flag for cpu-crypto */

We can probably avoid that.
Instead, just expect each PMD that provides the FF_SYM_CPU_CRYPTO capability
to always create a session capable of working in both modes (sync/async).
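
I.e. on the app side, something like the rough sketch below instead of the
xform flag ('dev_id', 'sess', 'ofs' and 'vec' assumed to be set up as for
the new API):

	struct rte_cryptodev_info info;
	uint32_t n;

	rte_cryptodev_info_get(dev_id, &info);

	if ((info.feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) != 0) {
		/* same session, processed via the new synchronous path */
		n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sess,
			ofs, &vec);
	} else {
		/* otherwise the usual enqueue/dequeue path */
	}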
Apart from that - LGTM.
Konstantin

>  };
> 
>  /**
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> index 89aa2ed3e..157fda890 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -1616,6 +1616,36 @@ rte_cryptodev_sym_session_get_user_data(
>  	return (void *)(sess->sess_data + sess->nb_drivers);
>  }
> 
> +static inline void
> +sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
> +{
> +	uint32_t i;
> +	for (i = 0; i < vec->num; i++)
> +		vec->status[i] = errnum;
> +}
> +
> +uint32_t
> +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
> +	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
> +	struct rte_crypto_sym_vec *vec)
> +{
> +	struct rte_cryptodev *dev;
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		sym_crypto_fill_status(vec, EINVAL);
> +		return 0;
> +	}
> +
> +	dev = rte_cryptodev_pmd_get_dev(dev_id);
> +
> +	if (*dev->dev_ops->sym_cpu_process == NULL) {
> +		sym_crypto_fill_status(vec, ENOTSUP);
> +		return 0;
> +	}
> +
> +	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
> +}
> +
>  /** Initialise rte_crypto_op mempool element */
>  static void
>  rte_crypto_op_init(struct rte_mempool *mempool,
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index c6ffa3b35..8786dfb90 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
>  /**< Support encrypted-digest operations where digest is appended to data */
>  #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS		(1ULL << 20)
>  /**< Support asymmetric session-less operations */
> +#define	RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO			(1ULL << 21)
> +/**< Support symmetric cpu-crypto processing */
> 
> 
>  /**
> @@ -1274,6 +1276,24 @@ void *
>  rte_cryptodev_sym_session_get_user_data(
>  					struct rte_cryptodev_sym_session *sess);
> 
> +/**
> + * Perform actual crypto processing (encrypt/digest or auth/decrypt)
> + * on user provided data.
> + *
> + * @param	dev_id	The device identifier.
> + * @param	sess	Cryptodev session structure
> + * @param	ofs	Start and stop offsets for auth and cipher operations
> + * @param	vec	Vectorized operation descriptor
> + *
> + * @return
> + *  - Returns number of successfully processed packets.
> + */
> +__rte_experimental
> +uint32_t
> +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
> +	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
> +	struct rte_crypto_sym_vec *vec);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> index fba14f2fa..5d9ee5fef 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> @@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
>   */
>  typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
>  		struct rte_cryptodev_asym_session *sess);
> +/**
> + * Perform actual crypto processing (encrypt/digest or auth/decrypt)
> + * on user provided data.
> + *
> + * @param	dev	Crypto device pointer
> + * @param	sess	Cryptodev session structure
> + * @param	ofs	Start and stop offsets for auth and cipher operations
> + * @param	vec	Vectorized operation descriptor
> + *
> + * @return
> + *  - Returns number of successfully processed packets.
> + *
> + */
> +typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
> +	(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
> +	union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
> +
> 
>  /** Crypto device operations function pointer table */
>  struct rte_cryptodev_ops {
> @@ -342,6 +359,8 @@ struct rte_cryptodev_ops {
>  	/**< Clear a Crypto sessions private data. */
>  	cryptodev_asym_free_session_t asym_session_clear;
>  	/**< Clear a Crypto sessions private data. */
> +	cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
> +	/**< process input data synchronously (cpu-crypto). */
>  };
> 
> 
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 1dd1e259a..6e41b4be5 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -71,6 +71,7 @@ EXPERIMENTAL {
>  	rte_cryptodev_asym_session_init;
>  	rte_cryptodev_asym_xform_capability_check_modlen;
>  	rte_cryptodev_asym_xform_capability_check_optype;
> +	rte_cryptodev_sym_cpu_crypto_process;
>  	rte_cryptodev_sym_get_existing_header_session_size;
>  	rte_cryptodev_sym_session_get_user_data;
>  	rte_cryptodev_sym_session_pool_create;
> --
> 2.17.1



* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
  2020-01-15 23:16   ` Ananyev, Konstantin
@ 2020-01-16 10:00   ` Zhang, Roy Fan
  2020-01-21 13:53   ` De Lara Guarch, Pablo
  2 siblings, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:00 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:28 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an
> appropriate flag is set in an xform type number.
> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---

Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


* Re: [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type Marcin Smoczynski
  2020-01-15 22:49   ` Ananyev, Konstantin
@ 2020-01-16 10:01   ` Zhang, Roy Fan
  1 sibling, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:01 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:28 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 3/6] security: add cpu crypto action type
> 
> Introduce a CPU crypto action type, allowing differentiation between regular
> async 'none security' and synchronous, CPU crypto accelerated sessions.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


* Re: [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-01-15 23:20   ` Ananyev, Konstantin
@ 2020-01-16 10:11   ` Zhang, Roy Fan
  1 sibling, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:11 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

Hi Marcin,

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:28 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 1/6] cryptodev: introduce cpu crypto support API
> 
> Add a new API allowing crypto operations to be processed in a synchronous manner.
> Operations are performed on a set of SG arrays.
> 
> Sync mode is selected by setting an appropriate flag in an xform type number.
> Cryptodevs which allow the CPU crypto operation mode have to use the
> RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_cryptodev/rte_crypto_sym.h         | 62 ++++++++++++++++++-
>  lib/librte_cryptodev/rte_cryptodev.c          | 30 +++++++++
>  lib/librte_cryptodev/rte_cryptodev.h          | 20 ++++++
>  lib/librte_cryptodev/rte_cryptodev_pmd.h      | 19 ++++++
>  .../rte_cryptodev_version.map                 |  1 +
>  5 files changed, 131 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> b/lib/librte_cryptodev/rte_crypto_sym.h
> index ffa038dc4..f5dd05ab0 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -25,6 +25,59 @@ extern "C" {
>  #include <rte_mempool.h>
>  #include <rte_common.h>
> 
> +/**
> + * Crypto IO Vector (in analogy with struct iovec)
> + * Supposed to be used to pass input/output data buffers for crypto
> +data-path
> + * functions.
> + */
> +struct rte_crypto_vec {
> +	/** virtual address of the data buffer */
> +	void *base;
> +	/** IOVA of the data buffer */
> +	rte_iova_t *iova;
> +	/** length of the data buffer */
> +	uint32_t len;
> +};
> +
> +struct rte_crypto_sgl {
> +	/** start of an array of vectors */
> +	struct rte_crypto_vec *vec;
> +	/** size of an array of vectors */
> +	uint32_t num;
> +};
> +
> +struct rte_crypto_sym_vec {
> +	/** array of SGL vectors */
> +	struct rte_crypto_sgl *sgl;
> +	/** array of pointers to IV */
> +	void **iv;
> +	/** array of pointers to AAD */
> +	void **aad;
> +	/** array of pointers to digest */
> +	void **digest;
> +	/**
> +	 * array of statuses for each operation:
> +	 *  - 0 on success
> +	 *  - errno on error
> +	 */
> +	int32_t *status;
> +	/** number of operations to perform */
> +	uint32_t num;
> +};
> +
> +/**
> + * Used by rte_cryptodev_sym_cpu_crypto_process() to specify head/tail offsets
> + * for auth/cipher processing.
> + */
> +union rte_crypto_sym_ofs {
> +	uint64_t raw;
> +	struct {
> +		struct {
> +			uint16_t head;
> +			uint16_t tail;
> +		} auth, cipher;
> +	} ofs;
> +};
> 
>  /** Symmetric Cipher Algorithms */
>  enum rte_crypto_cipher_algorithm {
> @@ -425,7 +478,14 @@ enum rte_crypto_sym_xform_type {
>  	RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED = 0,	/**< No
> xform specified */
>  	RTE_CRYPTO_SYM_XFORM_AUTH,		/**< Authentication
> xform */
>  	RTE_CRYPTO_SYM_XFORM_CIPHER,		/**< Cipher xform  */
> -	RTE_CRYPTO_SYM_XFORM_AEAD		/**< AEAD xform  */
> +	RTE_CRYPTO_SYM_XFORM_AEAD,		/**< AEAD xform  */
> +
> +	RTE_CRYPTO_SYM_XFORM_TYPE_MASK = 0xFFFF,
> +	/**< xform type mask value */
> +	RTE_CRYPTO_SYM_XFORM_FLAG_MASK = 0xFFFF0000,
> +	/**< xform flag mask value */
> +	RTE_CRYPTO_SYM_CPU_CRYPTO = 0x80000000,
> +	/**< xform flag for cpu-crypto */
>  };
> 
Fan: I believe RTE_CRYPTO_SYM_XFORM_TYPE_MASK and RTE_CRYPTO_SYM_XFORM_FLAG_MASK should be defined outside the enum, as macro defines. Also, I think we missed a doc update patch in the patchset.
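
E.g. something along these lines (illustrative placement only, values taken
from this patch):

#define RTE_CRYPTO_SYM_XFORM_TYPE_MASK	0xFFFF
/**< xform type mask value */
#define RTE_CRYPTO_SYM_XFORM_FLAG_MASK	0xFFFF0000
/**< xform flag mask value */
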
Other than that everything looks great.



* Re: [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-01-16 10:53   ` Zhang, Roy Fan
  2020-01-16 10:53   ` Zhang, Roy Fan
  1 sibling, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:53 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:29 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode
> 
> Update the library to handle the CPU crypto security mode, which utilizes cryptodev's
> synchronous, CPU accelerated crypto operations.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


* Re: [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-01-16 10:54   ` Zhang, Roy Fan
  0 siblings, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:54 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:29 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support
> 
> Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has been
> introduced in the configuration, allowing use of the abovementioned
> acceleration.
> 
> Legacy mode is not currently supported.
> 
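An SA entry using the new type would then look like e.g. (one of the test
definitions from the next patch with 'type cpu-crypto' appended):

sa out 7 cipher_algo aes-128-cbc \
cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
auth_algo sha1-hmac \
auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode transport type cpu-crypto
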
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


* Re: [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
@ 2020-01-16 10:54   ` Zhang, Roy Fan
  0 siblings, 0 replies; 77+ messages in thread
From: Zhang, Roy Fan @ 2020-01-16 10:54 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Doherty,
	Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Wednesday, January 15, 2020 6:29 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing
> 
> Enable cpu-crypto mode testing by adding a dedicated environment variable
> CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows running the test
> scenarios with cpu crypto acceleration.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
  2020-01-15 23:16   ` Ananyev, Konstantin
  2020-01-16 10:00   ` Zhang, Roy Fan
@ 2020-01-21 13:53   ` De Lara Guarch, Pablo
  2020-01-21 14:29     ` Ananyev, Konstantin
  2 siblings, 1 reply; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-21 13:53 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX

Hi Marcin,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Marcin Smoczynski
> Sent: Wednesday, January 15, 2020 6:28 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an appropriate
> flag is set in an xform type number.
> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
>  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 +++++++++++++++++-
>  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
>  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
>  4 files changed, 169 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> index e272f1067..404c0adff 100644

...

> --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
> +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
> @@ -25,9 +25,16 @@ aesni_gcm_set_session_parameters(const struct
> aesni_gcm_ops *gcm_ops,
>  	const struct rte_crypto_sym_xform *aead_xform;
>  	uint8_t key_length;
>  	const uint8_t *key;
> +	uint32_t xform_type;
> +
> +	/* check for CPU-crypto mode */
> +	xform_type = xform->type;
> +	sess->mode = xform_type | RTE_CRYPTO_SYM_CPU_CRYPTO ?
> +	sess->mode = (xform_type & RTE_CRYPTO_SYM_CPU_CRYPTO) ?
> +	xform_type &= RTE_CRYPTO_SYM_XFORM_TYPE_MASK;
> 
>  	/* AES-GMAC */
> -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
>  		auth_xform = xform;
>  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {

Could you add support for AES-GMAC, so all algorithms supported by this PMD support this new API?

>  			AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as
> an "
> @@ -49,7 +56,7 @@ aesni_gcm_set_session_parameters(const struct
> aesni_gcm_ops *gcm_ops,
>  		sess->req_digest_length = auth_xform->auth.digest_length;

...

> --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> @@ -331,9 +331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
>  		.queue_pair_release	= aesni_gcm_pmd_qp_release,
>  		.queue_pair_count	= aesni_gcm_pmd_qp_count,
> 
> +		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
> +
>  		.sym_session_get_size	=
> aesni_gcm_pmd_sym_session_get_size,
>  		.sym_session_configure	=
> aesni_gcm_pmd_sym_session_configure,
>  		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
>  };
> 
>  struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops;
> +

Remove this extra line.

Thanks!
Pablo


* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-21 13:53   ` De Lara Guarch, Pablo
@ 2020-01-21 14:29     ` Ananyev, Konstantin
  2020-01-21 14:51       ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-21 14:29 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Smoczynski, MarcinX, akhil.goyal, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX



Hi Pablo,

> > Add support for CPU crypto mode by introducing required handler.
> > Crypto mode (sync/async) is chosen during sym session create if an appropriate
> > flag is set in an xform type number.
> >
> > Authenticated encryption and decryption are supported with tag
> > generation/verification.
> >
> > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > ---
> >  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
> >  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 +++++++++++++++++-
> >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
> >  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
> >  4 files changed, 169 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > index e272f1067..404c0adff 100644
> 
> ...
> 
> > --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
> > +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
> > @@ -25,9 +25,16 @@ aesni_gcm_set_session_parameters(const struct
> > aesni_gcm_ops *gcm_ops,
> >  	const struct rte_crypto_sym_xform *aead_xform;
> >  	uint8_t key_length;
> >  	const uint8_t *key;
> > +	uint32_t xform_type;
> > +
> > +	/* check for CPU-crypto mode */
> > +	xform_type = xform->type;
> > +	sess->mode = (xform_type & RTE_CRYPTO_SYM_CPU_CRYPTO) ?
> > +		AESNI_GCM_MODE_SYNC : AESNI_GCM_MODE_ASYNC;
> > +	xform_type &= RTE_CRYPTO_SYM_XFORM_TYPE_MASK;
> >
> >  	/* AES-GMAC */
> > -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> >  		auth_xform = xform;
> >  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> 
> Could you add support for AES-GMAC, so all algorithms supported by this PMD support this new API?

Not sure I get you here...
This code is present in current version of the driver too, no addition/deletions as I can see:

/* AES-GMAC */
        if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
                auth_xform = xform;
                if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
                        AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
                                "authentication only algorithm");
                        return -ENOTSUP;
                }


The only thing is changed: xform type calculation.
Konstantin

> 
> >  			AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as
> > an "
> > @@ -49,7 +56,7 @@ aesni_gcm_set_session_parameters(const struct
> > aesni_gcm_ops *gcm_ops,
> >  		sess->req_digest_length = auth_xform->auth.digest_length;
> 
> ...
> 
> > --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > @@ -331,9 +331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
> >  		.queue_pair_release	= aesni_gcm_pmd_qp_release,
> >  		.queue_pair_count	= aesni_gcm_pmd_qp_count,
> >
> > +		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
> > +
> >  		.sym_session_get_size	=
> > aesni_gcm_pmd_sym_session_get_size,
> >  		.sym_session_configure	=
> > aesni_gcm_pmd_sym_session_configure,
> >  		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
> >  };
> >
> >  struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops;
> > +
> 
> Remove this extra line.
> 
> Thanks!
> Pablo


* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-21 14:29     ` Ananyev, Konstantin
@ 2020-01-21 14:51       ` De Lara Guarch, Pablo
  2020-01-21 15:23         ` Ananyev, Konstantin
  0 siblings, 1 reply; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-21 14:51 UTC (permalink / raw)
  To: Ananyev, Konstantin, Smoczynski, MarcinX, akhil.goyal, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX



> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Tuesday, January 21, 2020 2:29 PM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Smoczynski,
> MarcinX <marcinx.smoczynski@intel.com>; akhil.goyal@nxp.com; Zhang, Roy
> Fan <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
> 
> 

Hi Konstantin,

> 
> Hi Pablo,
> 
> > > Add support for CPU crypto mode by introducing required handler.
> > > Crypto mode (sync/async) is chosen during sym session create if an
> > > appropriate flag is set in an xform type number.
> > >
> > > Authenticated encryption and decryption are supported with tag
> > > generation/verification.
> > >
> > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > ---
> > >  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
> > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 +++++++++++++++++-
> > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
> > >  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
> > >  4 files changed, 169 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > index e272f1067..404c0adff 100644
> >
> > ...
> > >
> > >  	/* AES-GMAC */
> > > -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > >  		auth_xform = xform;
> > >  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> >
> > Could you add support for AES-GMAC, so all algorithms supported by this PMD
> support this new API?
> 
> Not sure I get you here...
> This code is present in current version of the driver too, no addition/deletions as
> I can see:
> 
> /* AES-GMAC */
>         if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
>                 auth_xform = xform;
>                 if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
>                         AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
>                                 "authentication only algorithm");
>                         return -ENOTSUP;
>                 }
> 
> 
> The only thing is changed: xform type calculation.

From what I can see, sess->op is not set for AES-GMAC.
In aesni_gcm_pmd_cpu_crypto_process(), this value is checked
in the switch at line 471.

Looks like in this case, this would return an invalid status.

Am I getting this wrong?

Thanks!
Pablo

> Konstantin



* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-21 14:51       ` De Lara Guarch, Pablo
@ 2020-01-21 15:23         ` Ananyev, Konstantin
  2020-01-21 22:33           ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-21 15:23 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Smoczynski, MarcinX, akhil.goyal, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX



> > > > Add support for CPU crypto mode by introducing required handler.
> > > > Crypto mode (sync/async) is chosen during sym session create if an
> > > > appropriate flag is set in an xform type number.
> > > >
> > > > Authenticated encryption and decryption are supported with tag
> > > > generation/verification.
> > > >
> > > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > > ---
> > > >  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
> > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149 +++++++++++++++++-
> > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
> > > >  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
> > > >  4 files changed, 169 insertions(+), 10 deletions(-)
> > > >
> > > > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > index e272f1067..404c0adff 100644
> > >
> > > ...
> > > >
> > > >  	/* AES-GMAC */
> > > > -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > > +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > >  		auth_xform = xform;
> > > >  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> > >
> > > Could you add support for AES-GMAC, so all algorithms supported by this PMD
> > support this new API?
> >
> > Not sure I get you here...
> > This code is present in current version of the driver too, no addition/deletions as
> > I can see:
> >
> > /* AES-GMAC */
> >         if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> >                 auth_xform = xform;
> >                 if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> >                         AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
> >                                 "authentication only algorithm");
> >                         return -ENOTSUP;
> >                 }
> >
> >
> > The only thing is changed: xform type calculation.
> 
> From what I can see, sess->op is not set for AES-GMAC.
> In aesni_gcm_pmd_cpu_crypto_process(), this value is checked
> in the switch at line 471.

Ah, you mean sess->ops.* are not initialized properly for GMAC, right?

> 
> Looks like in this case, this would return an invalid status.
> 
> Am I getting this wrong?
> 
> Thanks!
> Pablo
> 
> > Konstantin



* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-21 15:23         ` Ananyev, Konstantin
@ 2020-01-21 22:33           ` De Lara Guarch, Pablo
  2020-01-22 12:43             ` Ananyev, Konstantin
  0 siblings, 1 reply; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-21 22:33 UTC (permalink / raw)
  To: Ananyev, Konstantin, Smoczynski, MarcinX, akhil.goyal, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX



> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Tuesday, January 21, 2020 3:23 PM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Smoczynski,
> MarcinX <marcinx.smoczynski@intel.com>; akhil.goyal@nxp.com; Zhang, Roy
> Fan <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
> 
> 
> 
> > > > > Add support for CPU crypto mode by introducing required handler.
> > > > > Crypto mode (sync/async) is chosen during sym session create if
> > > > > an appropriate flag is set in an xform type number.
> > > > >
> > > > > Authenticated encryption and decryption are supported with tag
> > > > > generation/verification.
> > > > >
> > > > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > > > ---
> > > > >  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
> > > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149
> +++++++++++++++++-
> > > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
> > > > >  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
> > > > >  4 files changed, 169 insertions(+), 10 deletions(-)
> > > > >
> > > > > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > > b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > > index e272f1067..404c0adff 100644
> > > >
> > > > ...
> > > > >
> > > > >  	/* AES-GMAC */
> > > > > -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > > > +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > > >  		auth_xform = xform;
> > > > >  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> > > >
> > > > Could you add support for AES-GMAC, so all algorithms supported by
> > > > this PMD
> > > support this new API?
> > >
> > > Not sure I get you here...
> > > This code is present in current version of the driver too, no
> > > addition/deletions as I can see:
> > >
> > > /* AES-GMAC */
> > >         if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > >                 auth_xform = xform;
> > >                 if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> > >                         AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
> > >                                 "authentication only algorithm");
> > >                         return -ENOTSUP;
> > >                 }
> > >
> > >
> > > The only thing is changed: xform type calculation.
> >
> > From what I can see, sess->op is not set for AES-GMAC.
> > In aesni_gcm_pmd_cpu_crypto_process(), this value is checked, In the
> > switch that is in line 471.
> 
> Ah, you mean sess->ops.* are not initialized properly for GMAC, right?

Correct. Actually, sess->op is set, but in the switch in line 471, only
AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION and
AESNI_GCM_OP_AUTHENTICATED_DECRYPTION are handled,
which I would think means that any GMAC operation would fail.

Pablo
> 
> >
> > Looks like in this case, this would return an invalid status.
> >
> > Am I getting this wrong?
> >
> > Thanks!
> > Pablo
> >
> > > Konstantin


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support
  2020-01-21 22:33           ` De Lara Guarch, Pablo
@ 2020-01-22 12:43             ` Ananyev, Konstantin
  0 siblings, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-22 12:43 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Smoczynski, MarcinX, akhil.goyal, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev, Smoczynski, MarcinX


> > > > > > Add support for CPU crypto mode by introducing required handler.
> > > > > > Crypto mode (sync/async) is chosen during sym session create if
> > > > > > an appropriate flag is set in an xform type number.
> > > > > >
> > > > > > Authenticated encryption and decryption are supported with tag
> > > > > > generation/verification.
> > > > > >
> > > > > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > > > > ---
> > > > > >  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 ++
> > > > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 149
> > +++++++++++++++++-
> > > > > >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
> > > > > >  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  18 ++-
> > > > > >  4 files changed, 169 insertions(+), 10 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > > > b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
> > > > > > index e272f1067..404c0adff 100644
> > > > >
> > > > > ...
> > > > > >
> > > > > >  	/* AES-GMAC */
> > > > > > -	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > > > > +	if (xform_type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > > > >  		auth_xform = xform;
> > > > > >  		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> > > > >
> > > > > Could you add support for AES-GMAC, so all algorithms supported by
> > > > > this PMD
> > > > support this new API?
> > > >
> > > > Not sure I get you here...
> > > > This code is present in current version of the driver too, no
> > > > addition/deletions as I can see:
> > > >
> > > > /* AES-GMAC */
> > > >         if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > >                 auth_xform = xform;
> > > >                 if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
> > > >                         AESNI_GCM_LOG(ERR, "Only AES GMAC is supported as an "
> > > >                                 "authentication only algorithm");
> > > >                         return -ENOTSUP;
> > > >                 }
> > > >
> > > >
> > > > The only thing is changed: xform type calculation.
> > >
> > > From what I can see, sess->op is not set for AES-GMAC.
> > > In aesni_gcm_pmd_cpu_crypto_process(), this value is checked, In the
> > > switch that is in line 471.
> >
> > Ah, you mean sess->ops.* are not initialized properly for GMAC, right?
> 
> Correct. Actually, sess->op is set, but in the switch in line 471, only
> AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION and
> AESNI_GCM_OP_AUTHENTICATED_DECRYPTION are handled,
> which I would think means that any GMAC operation would fail.

Yep, you're right, good catch.
Will try to address in v4.
Thanks
Konstantin

> 
> Pablo
> >
> > >
> > > Looks like in this case, this would return an invalid status.
> > >
> > > Am I getting this wrong?
> > >
> > > Thanks!
> > > Pablo
> > >
> > > > Konstantin


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode
  2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
                   ` (5 preceding siblings ...)
  2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
@ 2020-01-28  3:16 ` Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
                     ` (8 more replies)
  6 siblings, 9 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Originally both SW and HW crypto PMDs use the rte_crypto_op based API to
process the crypto workload asynchronously. This approach provides
uniformity to both PMD types, but also introduces an unnecessary
performance penalty to SW PMDs that have to "simulate" HW async behavior
(crypto-ops enqueue/dequeue, HW address computations,
storing/dereferencing user provided data (mbuf) for each crypto-op,
etc).

The aim is to introduce a new optional API for SW crypto-devices
to perform crypto processing in a synchronous manner.

v3 to v4 changes:
 - add feature discovery in the ipsec example application when
   using cpu-crypto (see the sketch after this list)
 - add gmac in aesni-gcm
 - add tests for aesni-gcm/cpu crypto mode
 - add documentation: pg and rel notes
 - remove xform flags as no longer needed
 - add some extra API comments
 - remove compilation error from v3
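
A minimal sketch of the feature discovery mentioned in the first item
above (the exact ipsec-secgw code is not shown here, so "dev_id" and
"use_async_path" are hypothetical names):

	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(dev_id, &info);
	if ((info.feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) == 0)
		/* device cannot do cpu-crypto, keep the async path */
		use_async_path = 1;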

Marcin Smoczynski (8):
  cryptodev: introduce cpu crypto support API
  crypto/aesni_gcm: cpu crypto support
  test/crypto: add CPU crypto tests
  security: add cpu crypto action type
  ipsec: introduce support for cpu crypto mode
  examples/ipsec-secgw: cpu crypto support
  examples/ipsec-secgw: cpu crypto testing
  doc: add cpu crypto related documentation

 app/test/Makefile                             |   1 +
 app/test/cpu_crypto_all_gcm_perf_test_cases.h |  11 +
 app/test/cpu_crypto_all_gcm_unit_test_cases.h |  49 +
 .../cpu_crypto_all_gmac_unit_test_cases.h     |   7 +
 app/test/meson.build                          |   1 +
 app/test/test_cryptodev_cpu_crypto.c          | 930 ++++++++++++++++++
 doc/guides/cryptodevs/aesni_gcm.rst           |   5 +
 doc/guides/prog_guide/cryptodev_lib.rst       |  31 +
 doc/guides/prog_guide/ipsec_lib.rst           |   8 +
 doc/guides/prog_guide/rte_security.rst        |  15 +-
 doc/guides/rel_notes/release_20_02.rst        |   8 +
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 220 ++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  11 +-
 examples/ipsec-secgw/ipsec.c                  |  23 +-
 examples/ipsec-secgw/ipsec_process.c          | 134 ++-
 examples/ipsec-secgw/sa.c                     |  28 +-
 examples/ipsec-secgw/test/common_defs.sh      |  21 +
 examples/ipsec-secgw/test/linux_test4.sh      |  11 +-
 examples/ipsec-secgw/test/linux_test6.sh      |  11 +-
 .../test/trs_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/trs_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/trs_aesctr_sha1_common_defs.sh       |   8 +-
 .../test/tun_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/tun_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/tun_aesctr_sha1_common_defs.sh       |   8 +-
 lib/librte_cryptodev/rte_crypto_sym.h         |  61 ++
 lib/librte_cryptodev/rte_cryptodev.c          |  33 +
 lib/librte_cryptodev/rte_cryptodev.h          |  20 +
 lib/librte_cryptodev/rte_cryptodev_pmd.h      |  19 +
 .../rte_cryptodev_version.map                 |   1 +
 lib/librte_ipsec/esp_inb.c                    | 154 ++-
 lib/librte_ipsec/esp_outb.c                   | 134 ++-
 lib/librte_ipsec/misc.h                       | 118 +++
 lib/librte_ipsec/rte_ipsec.h                  |  18 +-
 lib/librte_ipsec/sa.c                         | 112 ++-
 lib/librte_ipsec/sa.h                         |  17 +
 lib/librte_ipsec/ses.c                        |   3 +-
 lib/librte_security/rte_security.h            |   6 +-
 40 files changed, 2119 insertions(+), 162 deletions(-)
 create mode 100644 app/test/cpu_crypto_all_gcm_perf_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gcm_unit_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gmac_unit_test_cases.h
 create mode 100644 app/test/test_cryptodev_cpu_crypto.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v4 1/8] cryptodev: introduce cpu crypto support API
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add a new API that allows processing crypto operations in a synchronous
manner. Operations are performed on a set of SG arrays.

Sync mode is selected by setting the appropriate flag in the xform
type number. Cryptodevs which allow the CPU crypto operation mode have to
advertise the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
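
A minimal caller-side sketch of the new API, assuming a session already
configured for AES-GCM and a single flat buffer per operation ("dev_id",
"sess", "buf", "buf_len", "iv", "aad" and "tag" are hypothetical
application data; the iova field is left unset since the CPU path works
on virtual addresses):

	struct rte_crypto_vec v = { .base = buf, .len = buf_len };
	struct rte_crypto_sgl sgl = { .vec = &v, .num = 1 };
	void *iv_p = iv, *aad_p = aad, *dig_p = tag;
	int32_t status;
	union rte_crypto_sym_ofs ofs = { .raw = 0 }; /* no head/tail bytes */
	struct rte_crypto_sym_vec symvec = {
		.sgl = &sgl, .iv = &iv_p, .aad = &aad_p,
		.digest = &dig_p, .status = &status, .num = 1,
	};
	uint32_t done;

	done = rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs,
			&symvec);
	/* done == 1 and status == 0 on success, status holds an errno
	 * value otherwise */

Where cipher and auth should cover different spans (e.g. the ESP cases
in the later ipsec patches), the ofs union expresses that compactly,
e.g. ofs.ofs.cipher.head = hdr_len; note the aesni_gcm handler in this
set ignores ofs, so this only matters for PMDs that honor it.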

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_cryptodev/rte_crypto_sym.h         | 61 +++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c          | 33 ++++++++++
 lib/librte_cryptodev/rte_cryptodev.h          | 20 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h      | 19 ++++++
 .../rte_cryptodev_version.map                 |  1 +
 5 files changed, 134 insertions(+)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index bc356f6ff..da1530093 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -25,6 +25,67 @@ extern "C" {
 #include <rte_mempool.h>
 #include <rte_common.h>
 
+/**
+ * Crypto IO Vector (in analogy with struct iovec)
+ * Supposed to be used to pass input/output data buffers for crypto
+ * data-path functions.
+ */
+struct rte_crypto_vec {
+	/** virtual address of the data buffer */
+	void *base;
+	/** IOVA of the data buffer */
+	rte_iova_t *iova;
+	/** length of the data buffer */
+	uint32_t len;
+};
+
+/**
+ * Crypto scatter-gather list descriptor. Consists of a pointer to an array
+ * of Crypto IO vectors and the size of that array.
+ */
+struct rte_crypto_sgl {
+	/** start of an array of vectors */
+	struct rte_crypto_vec *vec;
+	/** size of an array of vectors */
+	uint32_t num;
+};
+
+/**
+ * Synchronous operation descriptor.
+ * Supposed to be used with CPU crypto API call.
+ */
+struct rte_crypto_sym_vec {
+	/** array of SGL vectors */
+	struct rte_crypto_sgl *sgl;
+	/** array of pointers to IV */
+	void **iv;
+	/** array of pointers to AAD */
+	void **aad;
+	/** array of pointers to digest */
+	void **digest;
+	/**
+	 * array of statuses for each operation:
+	 *  - 0 on success
+	 *  - errno on error
+	 */
+	int32_t *status;
+	/** number of operations to perform */
+	uint32_t num;
+};
+
+/**
+ * Used with rte_cryptodev_sym_cpu_crypto_process() to specify head/tail
+ * offsets for auth/cipher processing.
+ */
+union rte_crypto_sym_ofs {
+	uint64_t raw;
+	struct {
+		struct {
+			uint16_t head;
+			uint16_t tail;
+		} auth, cipher;
+	} ofs;
+};
 
 /** Symmetric Cipher Algorithms */
 enum rte_crypto_cipher_algorithm {
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 5c6359b5c..410b22867 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -494,6 +494,8 @@ rte_cryptodev_get_feature_name(uint64_t flag)
 		return "RSA_PRIV_OP_KEY_QT";
 	case RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED:
 		return "DIGEST_ENCRYPTED";
+	case RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO:
+		return "SYM_CPU_CRYPTO";
 	default:
 		return NULL;
 	}
@@ -1619,6 +1621,37 @@ rte_cryptodev_sym_session_get_user_data(
 	return (void *)(sess->sess_data + sess->nb_drivers);
 }
 
+static inline void
+sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		sym_crypto_fill_status(vec, EINVAL);
+		return 0;
+	}
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (*dev->dev_ops->sym_cpu_process == NULL ||
+		!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO)) {
+		sym_crypto_fill_status(vec, ENOTSUP);
+		return 0;
+	}
+
+	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
+}
+
 /** Initialise rte_crypto_op mempool element */
 static void
 rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index c6ffa3b35..8786dfb90 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support encrypted-digest operations where digest is appended to data */
 #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS		(1ULL << 20)
 /**< Support asymmetric session-less operations */
+#define	RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO			(1ULL << 21)
+/**< Support symmetric cpu-crypto processing */
 
 
 /**
@@ -1274,6 +1276,24 @@ void *
 rte_cryptodev_sym_session_get_user_data(
 					struct rte_cryptodev_sym_session *sess);
 
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev_id	The device identifier.
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index fba14f2fa..5d9ee5fef 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
  */
 typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
 		struct rte_cryptodev_asym_session *sess);
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ *
+ */
+typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
+	(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
+	union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+
 
 /** Crypto device operations function pointer table */
 struct rte_cryptodev_ops {
@@ -342,6 +359,8 @@ struct rte_cryptodev_ops {
 	/**< Clear a Crypto sessions private data. */
 	cryptodev_asym_free_session_t asym_session_clear;
 	/**< Clear a Crypto sessions private data. */
+	cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+	/**< process input data synchronously (cpu-crypto). */
 };
 
 
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 1dd1e259a..6e41b4be5 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -71,6 +71,7 @@ EXPERIMENTAL {
 	rte_cryptodev_asym_session_init;
 	rte_cryptodev_asym_xform_capability_check_modlen;
 	rte_cryptodev_asym_xform_capability_check_optype;
+	rte_cryptodev_sym_cpu_crypto_process;
 	rte_cryptodev_sym_get_existing_header_session_size;
 	rte_cryptodev_sym_session_get_user_data;
 	rte_cryptodev_sym_session_pool_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28 10:49     ` De Lara Guarch, Pablo
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
                     ` (6 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU crypto mode by introducing the required handler.
Crypto mode (sync/async) is chosen during sym session creation if the
appropriate flag is set in the xform type number.

Authenticated encryption and decryption are supported with tag
generation/verification.
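
Given the per-op status array from patch 1/8, an application can tell
which operations in a bulk call failed; a rough sketch (reusing the
"symvec"/"ofs" shapes from the patch 1/8 example; "handle_failure" is a
hypothetical callback):

	uint32_t i, done;

	done = rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs,
			&symvec);
	if (done != symvec.num) {
		for (i = 0; i != symvec.num; i++)
			if (symvec.status[i] != 0)
				/* e.g. EBADMSG on a GCM tag mismatch or
				 * ENOTSUP for multi-segment GMAC input */
				handle_failure(i, symvec.status[i]);
	}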

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |   9 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 220 +++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   3 +
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  11 +-
 4 files changed, 237 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index e272f1067..404c0adff 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -65,4 +65,13 @@ struct aesni_gcm_ops {
 	aesni_gcm_finalize_t finalize_dec;
 };
 
+/** GCM per-session operation handlers */
+struct aesni_gcm_session_ops {
+	aesni_gcm_t cipher;
+	aesni_gcm_pre_t pre;
+	aesni_gcm_init_t init;
+	aesni_gcm_update_t update;
+	aesni_gcm_finalize_t finalize;
+};
+
 #endif /* _AESNI_GCM_OPS_H_ */
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1a03be31d..9901c811b 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -15,6 +15,31 @@
 
 static uint8_t cryptodev_driver_id;
 
+/* setup session handlers */
+static void
+set_func_ops(struct aesni_gcm_session *s, const struct aesni_gcm_ops *gcm_ops)
+{
+	s->ops.pre = gcm_ops->pre;
+	s->ops.init = gcm_ops->init;
+
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		s->ops.cipher = gcm_ops->enc;
+		s->ops.update = gcm_ops->update_enc;
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		s->ops.cipher = gcm_ops->dec;
+		s->ops.update = gcm_ops->update_dec;
+		s->ops.finalize = gcm_ops->finalize_dec;
+		break;
+	case AESNI_GMAC_OP_GENERATE:
+	case AESNI_GMAC_OP_VERIFY:
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	}
+}
+
 /** Parse crypto xform chain and set private session parameters */
 int
 aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
@@ -65,6 +90,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		/* Select Crypto operation */
 		if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT)
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION;
+		/* op == RTE_CRYPTO_AEAD_OP_DECRYPT */
 		else
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION;
 
@@ -78,7 +104,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -ENOTSUP;
 	}
 
-
 	/* IV check */
 	if (sess->iv.length != 16 && sess->iv.length != 12 &&
 			sess->iv.length != 0) {
@@ -102,6 +127,10 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -EINVAL;
 	}
 
+	/* setup session handlers */
+	set_func_ops(sess, &gcm_ops[sess->key]);
+
+	/* pre-generate key */
 	gcm_ops[sess->key].pre(key, &sess->gdata_key);
 
 	/* Digest check */
@@ -356,6 +385,191 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
 	return 0;
 }
 
+static inline void
+aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_encryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	if (s->req_digest_length != s->gen_digest_length) {
+		uint8_t tmpdigest[s->gen_digest_length];
+
+		s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+			s->gen_digest_length);
+		memcpy(digest, tmpdigest, s->req_digest_length);
+	} else {
+		s->ops.finalize(&s->gdata_key, gdata_ctx, digest,
+			s->gen_digest_length);
+	}
+
+	return 0;
+}
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_decryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	uint8_t tmpdigest[s->gen_digest_length];
+
+	s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+		s->gen_digest_length);
+
+	return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 :
+		EBADMSG;
+}
+
+static inline void
+aesni_gcm_process_gcm_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv, void *aad)
+{
+	uint32_t i;
+
+	/* init crypto operation */
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, aad,
+		(uint64_t)s->aad_length);
+
+	/* update with sgl data */
+	for (i = 0; i < sgl->num; i++) {
+		struct rte_crypto_vec *vec = &sgl->vec[i];
+
+		s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base,
+			vec->len);
+	}
+}
+
+static inline void
+aesni_gcm_process_gmac_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv)
+{
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, sgl->vec[0].base,
+		sgl->vec[0].len);
+}
+
+static inline uint32_t
+aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+/** Process CPU crypto bulk operations */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess,
+	__rte_unused union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	void *sess_priv;
+	struct aesni_gcm_session *s;
+	struct gcm_context_data gdata_ctx;
+
+	sess_priv = get_sym_session_private_data(sess, dev->driver_id);
+	if (unlikely(sess_priv == NULL)) {
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+
+	s = sess_priv;
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		return aesni_gcm_sgl_encrypt(s, &gdata_ctx, vec);
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		return aesni_gcm_sgl_decrypt(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_GENERATE:
+		return aesni_gmac_sgl_generate(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_VERIFY:
+		return aesni_gmac_sgl_verify(s, &gdata_ctx, vec);
+	default:
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -527,7 +741,8 @@ aesni_gcm_create(const char *name,
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO;
 
 	/* Check CPU for support for AES instruction set */
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
@@ -672,7 +887,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
 RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver,
 		cryptodev_driver_id);
 
-
 RTE_INIT(aesni_gcm_init_log)
 {
 	aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm");
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 2f66c7c58..5228d98b1 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -331,9 +331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
 		.queue_pair_release	= aesni_gcm_pmd_qp_release,
 		.queue_pair_count	= aesni_gcm_pmd_qp_count,
 
+		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
+
 		.sym_session_get_size	= aesni_gcm_pmd_sym_session_get_size,
 		.sym_session_configure	= aesni_gcm_pmd_sym_session_configure,
 		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
 };
 
 struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops;
+
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2039adb53..1823a9997 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -92,6 +92,8 @@ struct aesni_gcm_session {
 	/**< GCM key type */
 	struct gcm_key_data gdata_key;
 	/**< GCM parameters */
+	struct aesni_gcm_session_ops ops;
+	/**< Session handlers */
 };
 
 
@@ -109,10 +111,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops,
 		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform);
 
-
-/**
- * Device specific operations function pointer structure */
+/* Device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops;
 
+/** CPU crypto bulk process handler */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
 
 #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28  9:31     ` De Lara Guarch, Pablo
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type Marcin Smoczynski
                     ` (5 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add unit and performance tests for CPU crypto mode currently implemented
by the AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC test vectors.
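
Assuming the test application is built, the new suites can be run from
the dpdk-test prompt via the commands registered at the bottom of the
new file:

	RTE>> cpu_crypto_aesni_gcm_autotest
	RTE>> cpu_crypto_aesni_gcm_perftest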

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 app/test/Makefile                             |   1 +
 app/test/cpu_crypto_all_gcm_perf_test_cases.h |  11 +
 app/test/cpu_crypto_all_gcm_unit_test_cases.h |  49 +
 .../cpu_crypto_all_gmac_unit_test_cases.h     |   7 +
 app/test/meson.build                          |   1 +
 app/test/test_cryptodev_cpu_crypto.c          | 930 ++++++++++++++++++
 6 files changed, 999 insertions(+)
 create mode 100644 app/test/cpu_crypto_all_gcm_perf_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gcm_unit_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gmac_unit_test_cases.h
 create mode 100644 app/test/test_cryptodev_cpu_crypto.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 57930c00b..b8f0169ef 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -203,6 +203,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_asym.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_cpu_crypto.c
 SRCS-$(CONFIG_RTE_LIBRTE_SECURITY) += test_cryptodev_security_pdcp.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_METRICS) += test_metrics.c
diff --git a/app/test/cpu_crypto_all_gcm_perf_test_cases.h b/app/test/cpu_crypto_all_gcm_perf_test_cases.h
new file mode 100644
index 000000000..ee9545abc
--- /dev/null
+++ b/app/test/cpu_crypto_all_gcm_perf_test_cases.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+TEST_EXPAND(_128, 16, SGL_ONE_SEG)
+TEST_EXPAND(_192, 24, SGL_ONE_SEG)
+TEST_EXPAND(_256, 32, SGL_ONE_SEG)
+
+TEST_EXPAND(_128, 16, SGL_MAX_SEG)
+TEST_EXPAND(_192, 24, SGL_MAX_SEG)
+TEST_EXPAND(_256, 32, SGL_MAX_SEG)
diff --git a/app/test/cpu_crypto_all_gcm_unit_test_cases.h b/app/test/cpu_crypto_all_gcm_unit_test_cases.h
new file mode 100644
index 000000000..ed40c1632
--- /dev/null
+++ b/app/test/cpu_crypto_all_gcm_unit_test_cases.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+TEST_EXPAND(gcm_test_case_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_7, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_8, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_7, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_7, SGL_ONE_SEG)
+
+TEST_EXPAND(gcm_test_case_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_7, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_8, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_7, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_7, SGL_MAX_SEG)
diff --git a/app/test/cpu_crypto_all_gmac_unit_test_cases.h b/app/test/cpu_crypto_all_gmac_unit_test_cases.h
new file mode 100644
index 000000000..b6ebce936
--- /dev/null
+++ b/app/test/cpu_crypto_all_gmac_unit_test_cases.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+TEST_EXPAND(gmac_test_case_1, SGL_ONE_SEG)
+TEST_EXPAND(gmac_test_case_2, SGL_ONE_SEG)
+TEST_EXPAND(gmac_test_case_3, SGL_ONE_SEG)
diff --git a/app/test/meson.build b/app/test/meson.build
index 22b0cefaa..5a218affe 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -30,6 +30,7 @@ test_sources = files('commands.c',
 	'test_cryptodev.c',
 	'test_cryptodev_asym.c',
 	'test_cryptodev_blockcipher.c',
+	'test_cryptodev_cpu_crypto.c',
 	'test_cryptodev_security_pdcp.c',
 	'test_cycles.c',
 	'test_debug.c',
diff --git a/app/test/test_cryptodev_cpu_crypto.c b/app/test/test_cryptodev_cpu_crypto.c
new file mode 100644
index 000000000..7d91b970f
--- /dev/null
+++ b/app/test/test_cryptodev_cpu_crypto.c
@@ -0,0 +1,930 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_random.h>
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+#include "test_cryptodev_blockcipher.h"
+#include "test_cryptodev_aes_test_vectors.h"
+#include "test_cryptodev_aead_test_vectors.h"
+#include "test_cryptodev_des_test_vectors.h"
+#include "test_cryptodev_hash_test_vectors.h"
+
+#define CPU_CRYPTO_TEST_MAX_AAD_LENGTH	16
+#define MAX_NB_SEGMENTS			4
+#define CACHE_WARM_ITER			2048
+#define MAX_SEG_SIZE			2048
+
+#define TOP_ENC		BLOCKCIPHER_TEST_OP_ENCRYPT
+#define TOP_DEC		BLOCKCIPHER_TEST_OP_DECRYPT
+#define TOP_AUTH_GEN	BLOCKCIPHER_TEST_OP_AUTH_GEN
+#define TOP_AUTH_VER	BLOCKCIPHER_TEST_OP_AUTH_VERIFY
+#define TOP_ENC_AUTH	BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN
+#define TOP_AUTH_DEC	BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC
+
+enum buffer_assemble_option {
+	SGL_MAX_SEG,
+	SGL_ONE_SEG,
+};
+
+struct cpu_crypto_test_case {
+	struct {
+		uint8_t seg[MAX_SEG_SIZE];
+		uint32_t seg_len;
+	} seg_buf[MAX_NB_SEGMENTS];
+	uint8_t iv[MAXIMUM_IV_LENGTH * 2];
+	uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH * 4];
+	uint8_t digest[DIGEST_BYTE_LENGTH_SHA512];
+} __rte_cache_aligned;
+
+struct cpu_crypto_test_obj {
+	struct rte_crypto_vec vec[MAX_NUM_OPS_INFLIGHT][MAX_NB_SEGMENTS];
+	struct rte_crypto_sgl sec_buf[MAX_NUM_OPS_INFLIGHT];
+	void *iv[MAX_NUM_OPS_INFLIGHT];
+	void *digest[MAX_NUM_OPS_INFLIGHT];
+	void *aad[MAX_NUM_OPS_INFLIGHT];
+	int status[MAX_NUM_OPS_INFLIGHT];
+};
+
+struct cpu_crypto_testsuite_params {
+	struct rte_mempool *buf_pool;
+	struct rte_mempool *session_priv_mpool;
+};
+
+struct cpu_crypto_unittest_params {
+	struct rte_cryptodev_sym_session *sess;
+	void *test_datas[MAX_NUM_OPS_INFLIGHT];
+	struct cpu_crypto_test_obj test_obj;
+	uint32_t nb_bufs;
+};
+
+static struct cpu_crypto_testsuite_params testsuite_params;
+static struct cpu_crypto_unittest_params unittest_params;
+
+static int gbl_driver_id;
+
+static uint32_t valid_dev;
+
+static int
+testsuite_setup(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	uint32_t i, nb_devs;
+	size_t sess_sz;
+	struct rte_cryptodev_info info;
+
+	const char * const pool_name = "CPU_CRYPTO_MBUFPOOL";
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->buf_pool = rte_mempool_lookup(pool_name);
+	if (ts_params->buf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->buf_pool = rte_pktmbuf_pool_create(pool_name,
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0,
+				sizeof(struct cpu_crypto_test_case),
+				rte_socket_id());
+		if (ts_params->buf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create %s\n", pool_name);
+			return TEST_FAILED;
+		}
+	}
+
+	/* Create an AESNI GCM device if required */
+	if (gbl_driver_id == rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+		nb_devs = rte_cryptodev_device_count_by_driver(
+				rte_cryptodev_driver_id_get(
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
+		if (nb_devs < 1) {
+			TEST_ASSERT_SUCCESS(rte_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL),
+				"Failed to create instance of"
+				" pmd : %s",
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	/* get first valid crypto dev */
+	valid_dev = UINT32_MAX;
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.driver_id == gbl_driver_id &&
+				(info.feature_flags &
+				RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) != 0) {
+			valid_dev = i;
+			break;
+		}
+	}
+
+	if (valid_dev == UINT32_MAX) {
+		RTE_LOG(ERR, USER1,
+			"No crypto devices that support CPU mode\n");
+		return TEST_FAILED;
+	}
+
+	RTE_LOG(INFO, USER1, "Crypto device %u selected for CPU mode test\n",
+		valid_dev);
+
+	/* get session size */
+	sess_sz = rte_cryptodev_sym_get_private_session_size(valid_dev);
+
+	ts_params->session_priv_mpool = rte_cryptodev_sym_session_pool_create(
+		"CRYPTO_SESPOOL", 2, sess_sz, 0, 0, SOCKET_ID_ANY);
+	if (!ts_params->session_priv_mpool) {
+		RTE_LOG(ERR, USER1, "Not enough memory\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->buf_pool)
+		rte_mempool_free(ts_params->buf_pool);
+
+	if (ts_params->session_priv_mpool)
+		rte_mempool_free(ts_params->session_priv_mpool);
+}
+
+static int
+ut_setup(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	ut_params->sess = rte_cryptodev_sym_session_create(
+		ts_params->session_priv_mpool);
+
+	return ut_params->sess ? TEST_SUCCESS : TEST_FAILED;
+}
+
+static void
+ut_teardown(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+
+	if (ut_params->sess) {
+		rte_cryptodev_sym_session_clear(valid_dev, ut_params->sess);
+		rte_cryptodev_sym_session_free(ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	if (ut_params->nb_bufs) {
+		uint32_t i;
+
+		for (i = 0; i < ut_params->nb_bufs; i++)
+			memset(ut_params->test_datas[i], 0,
+				sizeof(struct cpu_crypto_test_case));
+
+		rte_mempool_put_bulk(ts_params->buf_pool, ut_params->test_datas,
+				ut_params->nb_bufs);
+	}
+}
+
+static int
+allocate_buf(uint32_t n)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	int ret;
+
+	ret = rte_mempool_get_bulk(ts_params->buf_pool, ut_params->test_datas,
+			n);
+
+	if (ret == 0)
+		ut_params->nb_bufs = n;
+
+	return ret;
+}
+
+static int
+check_status(struct cpu_crypto_test_obj *obj, uint32_t n)
+{
+	uint32_t i;
+
+	for (i = 0; i < n; i++)
+		if (obj->status[i] != 0)
+			return -1;
+
+	return 0;
+}
+
+static inline int
+init_aead_session(struct rte_cryptodev_sym_session *ses,
+		struct rte_mempool *sess_mp,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *test_data,
+		uint32_t is_unit_test)
+{
+	struct rte_crypto_sym_xform xform = {0};
+
+	if (is_unit_test)
+		debug_hexdump(stdout, "key:", test_data->key.data,
+				test_data->key.len);
+
+	/* Setup AEAD Parameters */
+	xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
+	xform.next = NULL;
+	xform.aead.algo = test_data->algo;
+	xform.aead.op = op;
+	xform.aead.key.data = test_data->key.data;
+	xform.aead.key.length = test_data->key.len;
+	xform.aead.iv.offset = 0;
+	xform.aead.iv.length = test_data->iv.len;
+	xform.aead.digest_length = test_data->auth_tag.len;
+	xform.aead.aad_length = test_data->aad.len;
+
+	return rte_cryptodev_sym_session_init(valid_dev, ses, &xform,
+		sess_mp);
+}
+
+static inline int
+init_gmac_session(struct rte_cryptodev_sym_session *ses,
+		struct rte_mempool *sess_mp,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *test_data,
+		uint32_t is_unit_test)
+{
+	struct rte_crypto_sym_xform xform = {0};
+
+	if (is_unit_test)
+		debug_hexdump(stdout, "key:", test_data->key.data,
+				test_data->key.len);
+
+	/* Setup Auth (AES-GMAC) Parameters */
+	xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	xform.next = NULL;
+	xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
+	xform.auth.op = op;
+	xform.auth.digest_length = test_data->gmac_tag.len;
+	xform.auth.key.length = test_data->key.len;
+	xform.auth.key.data = test_data->key.data;
+	xform.auth.iv.length = test_data->iv.len;
+	xform.auth.iv.offset = 0;
+
+	return rte_cryptodev_sym_session_init(valid_dev, ses, &xform, sess_mp);
+}
+
+
+static inline int
+prepare_sgl(struct cpu_crypto_test_case *data,
+	struct cpu_crypto_test_obj *obj,
+	uint32_t obj_idx,
+	enum buffer_assemble_option sgl_option,
+	const uint8_t *src,
+	uint32_t src_len)
+{
+	uint32_t seg_idx;
+	uint32_t bytes_per_seg;
+	uint32_t left;
+
+	switch (sgl_option) {
+	case SGL_MAX_SEG:
+		seg_idx = 0;
+		bytes_per_seg = src_len / MAX_NB_SEGMENTS + 1;
+		left = src_len;
+
+		if (bytes_per_seg > MAX_SEG_SIZE)
+			return -ENOMEM;
+
+		while (left) {
+			uint32_t cp_len = RTE_MIN(left, bytes_per_seg);
+			memcpy(data->seg_buf[seg_idx].seg, src, cp_len);
+			data->seg_buf[seg_idx].seg_len = cp_len;
+			obj->vec[obj_idx][seg_idx].base =
+					(void *)data->seg_buf[seg_idx].seg;
+			obj->vec[obj_idx][seg_idx].len = cp_len;
+			src += cp_len;
+			left -= cp_len;
+			seg_idx++;
+		}
+
+		if (left)
+			return -ENOMEM;
+
+		obj->sec_buf[obj_idx].vec = obj->vec[obj_idx];
+		obj->sec_buf[obj_idx].num = seg_idx;
+
+		break;
+	case SGL_ONE_SEG:
+		memcpy(data->seg_buf[0].seg, src, src_len);
+		data->seg_buf[0].seg_len = src_len;
+		obj->vec[obj_idx][0].base =
+				(void *)data->seg_buf[0].seg;
+		obj->vec[obj_idx][0].len = src_len;
+
+		obj->sec_buf[obj_idx].vec = obj->vec[obj_idx];
+		obj->sec_buf[obj_idx].num = 1;
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
+
+static inline int
+assemble_aead_buf(struct cpu_crypto_test_case *data,
+		struct cpu_crypto_test_obj *obj,
+		uint32_t obj_idx,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *test_data,
+		enum buffer_assemble_option sgl_option,
+		uint32_t is_unit_test)
+{
+	const uint8_t *src;
+	uint32_t src_len;
+	int ret;
+
+	if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		if (is_unit_test)
+			debug_hexdump(stdout, "plaintext:", src, src_len);
+	} else {
+		src = test_data->ciphertext.data;
+		src_len = test_data->ciphertext.len;
+		memcpy(data->digest, test_data->auth_tag.data,
+				test_data->auth_tag.len);
+		if (is_unit_test) {
+			debug_hexdump(stdout, "ciphertext:", src, src_len);
+			debug_hexdump(stdout, "digest:",
+					test_data->auth_tag.data,
+					test_data->auth_tag.len);
+		}
+	}
+
+	if (src_len > MAX_SEG_SIZE)
+		return -ENOMEM;
+
+	ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len);
+	if (ret < 0)
+		return ret;
+
+	memcpy(data->iv, test_data->iv.data, test_data->iv.len);
+	memcpy(data->aad, test_data->aad.data, test_data->aad.len);
+
+	if (is_unit_test) {
+		debug_hexdump(stdout, "iv:", test_data->iv.data,
+				test_data->iv.len);
+		debug_hexdump(stdout, "aad:", test_data->aad.data,
+				test_data->aad.len);
+	}
+
+	obj->iv[obj_idx] = (void *)data->iv;
+	obj->digest[obj_idx] = (void *)data->digest;
+	obj->aad[obj_idx] = (void *)data->aad;
+
+	return 0;
+}
+
+static inline int
+assemble_gmac_buf(struct cpu_crypto_test_case *data,
+		struct cpu_crypto_test_obj *obj,
+		uint32_t obj_idx,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *test_data,
+		enum buffer_assemble_option sgl_option,
+		uint32_t is_unit_test)
+{
+	const uint8_t *src;
+	uint32_t src_len;
+	int ret;
+
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		if (is_unit_test)
+			debug_hexdump(stdout, "plaintext:", src, src_len);
+	} else {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		memcpy(data->digest, test_data->gmac_tag.data,
+			test_data->gmac_tag.len);
+		if (is_unit_test)
+			debug_hexdump(stdout, "gmac_tag:",
+					test_data->gmac_tag.data,
+					test_data->gmac_tag.len);
+	}
+
+	if (src_len > MAX_SEG_SIZE)
+		return -ENOMEM;
+
+	ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len);
+	if (ret < 0)
+		return ret;
+
+	memcpy(data->iv, test_data->iv.data, test_data->iv.len);
+
+	if (is_unit_test) {
+		debug_hexdump(stdout, "iv:", test_data->iv.data,
+				test_data->iv.len);
+	}
+
+	obj->iv[obj_idx] = (void *)data->iv;
+	obj->digest[obj_idx] = (void *)data->digest;
+
+	return 0;
+}
+
+#define CPU_CRYPTO_ERR_EXP_CT	"expect ciphertext:"
+#define CPU_CRYPTO_ERR_GEN_CT	"gen ciphertext:"
+#define CPU_CRYPTO_ERR_EXP_PT	"expect plaintext:"
+#define CPU_CRYPTO_ERR_GEN_PT	"gen plaintext:"
+
+static int
+check_aead_result(struct cpu_crypto_test_case *tcase,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *tdata)
+{
+	const char *err_msg1, *err_msg2;
+	const uint8_t *src_pt_ct;
+	const uint8_t *tmp_src;
+	uint32_t src_len;
+	uint32_t left;
+	uint32_t i = 0;
+	int ret;
+
+	if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
+		err_msg1 = CPU_CRYPTO_ERR_EXP_CT;
+		err_msg2 = CPU_CRYPTO_ERR_GEN_CT;
+
+		src_pt_ct = tdata->ciphertext.data;
+		src_len = tdata->ciphertext.len;
+
+		ret = memcmp(tcase->digest, tdata->auth_tag.data,
+				tdata->auth_tag.len);
+		if (ret != 0) {
+			debug_hexdump(stdout, "expect digest:",
+					tdata->auth_tag.data,
+					tdata->auth_tag.len);
+			debug_hexdump(stdout, "gen digest:",
+					tcase->digest,
+					tdata->auth_tag.len);
+			return -1;
+		}
+	} else {
+		src_pt_ct = tdata->plaintext.data;
+		src_len = tdata->plaintext.len;
+		err_msg1 = CPU_CRYPTO_ERR_EXP_PT;
+		err_msg2 = CPU_CRYPTO_ERR_GEN_PT;
+	}
+
+	tmp_src = src_pt_ct;
+	left = src_len;
+
+	while (left && i < MAX_NB_SEGMENTS) {
+		ret = memcmp(tcase->seg_buf[i].seg, tmp_src,
+				tcase->seg_buf[i].seg_len);
+		if (ret != 0)
+			goto sgl_err_dump;
+		tmp_src += tcase->seg_buf[i].seg_len;
+		left -= tcase->seg_buf[i].seg_len;
+		i++;
+	}
+
+	if (left) {
+		ret = -ENOMEM;
+		goto sgl_err_dump;
+	}
+
+	return 0;
+
+sgl_err_dump:
+	left = src_len;
+	i = 0;
+
+	debug_hexdump(stdout, err_msg1,
+			tdata->ciphertext.data,
+			tdata->ciphertext.len);
+
+	while (left && i < MAX_NB_SEGMENTS) {
+		debug_hexdump(stdout, err_msg2,
+				tcase->seg_buf[i].seg,
+				tcase->seg_buf[i].seg_len);
+		left -= tcase->seg_buf[i].seg_len;
+		i++;
+	}
+	return ret;
+}
+
+static int
+check_gmac_result(struct cpu_crypto_test_case *tcase,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *tdata)
+{
+	int ret;
+
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+		ret = memcmp(tcase->digest, tdata->gmac_tag.data,
+				tdata->gmac_tag.len);
+		if (ret != 0) {
+			debug_hexdump(stdout, "expect digest:",
+					tdata->gmac_tag.data,
+					tdata->gmac_tag.len);
+			debug_hexdump(stdout, "gen digest:",
+					tcase->digest,
+					tdata->gmac_tag.len);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static inline int32_t
+run_test(struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+		struct cpu_crypto_test_obj *obj, uint32_t n)
+{
+	struct rte_crypto_sym_vec symvec;
+
+	symvec.sgl = obj->sec_buf;
+	symvec.iv = obj->iv;
+	symvec.aad = obj->aad;
+	symvec.digest = obj->digest;
+	symvec.status = obj->status;
+	symvec.num = n;
+
+	return rte_cryptodev_sym_cpu_crypto_process(valid_dev, sess, ofs,
+		&symvec);
+}
+
+static int
+cpu_crypto_test_aead(const struct aead_test_data *tdata,
+		enum rte_crypto_aead_operation dir,
+		enum buffer_assemble_option sgl_option)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	int ret;
+
+	ret = init_aead_session(ut_params->sess, ts_params->session_priv_mpool,
+		dir, tdata, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = allocate_buf(1);
+	if (ret)
+		return ret;
+
+	tcase = ut_params->test_datas[0];
+	ret = assemble_aead_buf(tcase, obj, 0, dir, tdata, sgl_option, 1);
+	if (ret < 0) {
+		printf("Test is not supported by the driver\n");
+		return ret;
+	}
+
+	/* prepare offset descriptor */
+	ofs.raw = 0;
+
+	run_test(ut_params->sess, ofs, obj, 1);
+
+	ret = check_status(obj, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = check_aead_result(tcase, dir, tdata);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+cpu_crypto_test_gmac(const struct gmac_test_data *tdata,
+		enum rte_crypto_auth_operation dir,
+		enum buffer_assemble_option sgl_option)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	int ret;
+
+	ret = init_gmac_session(ut_params->sess, ts_params->session_priv_mpool,
+		dir, tdata, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = allocate_buf(1);
+	if (ret)
+		return ret;
+
+	tcase = ut_params->test_datas[0];
+	ret = assemble_gmac_buf(tcase, obj, 0, dir, tdata, sgl_option, 1);
+	if (ret < 0) {
+		printf("Test is not supported by the driver\n");
+		return ret;
+	}
+
+	/* prepare offset descriptor */
+	ofs.raw = 0;
+
+	run_test(ut_params->sess, ofs, obj, 1);
+
+	ret = check_status(obj, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = check_gmac_result(tcase, dir, tdata);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+#define TEST_EXPAND(t, o)						\
+static int								\
+cpu_crypto_aead_enc_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_ENCRYPT, o);	\
+}									\
+static int								\
+cpu_crypto_aead_dec_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_DECRYPT, o);	\
+}									\
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)						\
+static int								\
+cpu_crypto_gmac_gen_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_GENERATE, o);\
+}									\
+static int								\
+cpu_crypto_gmac_ver_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_VERIFY, o);	\
+}
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+static struct unit_test_suite cpu_crypto_aesgcm_testsuite = {
+	.suite_name = "CPU Crypto AESNI-GCM Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_enc_test_##t##_##o),
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_dec_test_##t##_##o),
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_gen_test_##t##_##o),
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_ver_test_##t##_##o),
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+	TEST_CASES_END() /**< NULL terminate unit test array */
+	},
+};
+
+static int
+test_cpu_crypto_aesni_gcm(void)
+{
+	gbl_driver_id =	rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	return unit_test_suite_runner(&cpu_crypto_aesgcm_testsuite);
+}
+
+
+static inline void
+gen_rand(uint8_t *data, uint32_t len)
+{
+	uint32_t i;
+
+	for (i = 0; i < len; i++)
+		data[i] = (uint8_t)rte_rand();
+}
+
+static inline void
+switch_aead_enc_to_dec(struct aead_test_data *tdata,
+		struct cpu_crypto_test_case *tcase,
+		enum buffer_assemble_option sgl_option)
+{
+	uint32_t i;
+	uint8_t *dst = tdata->ciphertext.data;
+
+	switch (sgl_option) {
+	case SGL_ONE_SEG:
+		memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len);
+		tdata->ciphertext.len = tcase->seg_buf[0].seg_len;
+		break;
+	case SGL_MAX_SEG:
+		tdata->ciphertext.len = 0;
+		for (i = 0; i < MAX_NB_SEGMENTS; i++) {
+			memcpy(dst, tcase->seg_buf[i].seg,
+					tcase->seg_buf[i].seg_len);
+			/* advance dst so the next segment does not
+			 * overwrite this one
+			 */
+			dst += tcase->seg_buf[i].seg_len;
+			tdata->ciphertext.len += tcase->seg_buf[i].seg_len;
+		}
+		break;
+	}
+
+	memcpy(tdata->auth_tag.data, tcase->digest, tdata->auth_tag.len);
+}
+
+static int
+cpu_crypto_test_aead_perf(enum buffer_assemble_option sgl_option,
+		uint32_t key_sz)
+{
+	struct aead_test_data tdata = {0};
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	uint64_t hz = rte_get_tsc_hz(), time_start, time_now;
+	double rate, cycles_per_buf;
+	uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048};
+	uint32_t i, j;
+	uint8_t aad[16];
+	int ret;
+
+	tdata.key.len = key_sz;
+	gen_rand(tdata.key.data, tdata.key.len);
+	tdata.algo = RTE_CRYPTO_AEAD_AES_GCM;
+	tdata.aad.data = aad;
+	ofs.raw = 0;
+
+	if (!ut_params->sess)
+		return -1;
+
+	init_aead_session(ut_params->sess, ts_params->session_priv_mpool,
+		RTE_CRYPTO_AEAD_OP_DECRYPT, &tdata, 0);
+
+	ret = allocate_buf(MAX_NUM_OPS_INFLIGHT);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < RTE_DIM(test_data_szs); i++) {
+		for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) {
+			tdata.plaintext.len = test_data_szs[i];
+			gen_rand(tdata.plaintext.data,
+					tdata.plaintext.len);
+
+			tdata.aad.len = 12;
+			gen_rand(tdata.aad.data, tdata.aad.len);
+
+			tdata.auth_tag.len = 16;
+
+			tdata.iv.len = 16;
+			gen_rand(tdata.iv.data, tdata.iv.len);
+
+			tcase = ut_params->test_datas[j];
+			ret = assemble_aead_buf(tcase, obj, j,
+					RTE_CRYPTO_AEAD_OP_ENCRYPT,
+					&tdata, sgl_option, 0);
+			if (ret < 0) {
+				printf("Test is not supported by the driver\n");
+				return ret;
+			}
+		}
+
+		/* warm up cache */
+		for (j = 0; j < CACHE_WARM_ITER; j++)
+			run_test(ut_params->sess, ofs, obj,
+				MAX_NUM_OPS_INFLIGHT);
+
+		time_start = rte_rdtsc();
+
+		run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT);
+
+		time_now = rte_rdtsc();
+
+		rate = time_now - time_start;
+		cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT;
+
+		rate = ((hz / cycles_per_buf)) / 1000000;
+
+		printf("AES-GCM-%u(%4uB) Enc %03.3fMpps (%03.3fGbps) ",
+				key_sz * 8, test_data_szs[i], rate,
+				rate  * test_data_szs[i] * 8 / 1000);
+		printf("cycles per buf %03.3f per byte %03.3f\n",
+				cycles_per_buf,
+				cycles_per_buf / test_data_szs[i]);
+
+		for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) {
+			tcase = ut_params->test_datas[j];
+
+			switch_aead_enc_to_dec(&tdata, tcase, sgl_option);
+			ret = assemble_aead_buf(tcase, obj, j,
+					RTE_CRYPTO_AEAD_OP_DECRYPT,
+					&tdata, sgl_option, 0);
+			if (ret < 0) {
+				printf("Test is not supported by the driver\n");
+				return ret;
+			}
+		}
+
+		time_start = rte_rdtsc();
+
+		run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT);
+
+		time_now = rte_rdtsc();
+
+		rate = time_now - time_start;
+		cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT;
+
+		rate = ((hz / cycles_per_buf)) / 1000000;
+
+		printf("AES-GCM-%u(%4uB) Dec %03.3fMpps (%03.3fGbps) ",
+				key_sz * 8, test_data_szs[i], rate,
+				rate  * test_data_szs[i] * 8 / 1000);
+		printf("cycles per buf %03.3f per byte %03.3f\n",
+				cycles_per_buf,
+				cycles_per_buf / test_data_szs[i]);
+	}
+
+	return 0;
+}
+
+/* test-prefix/key-size/sgl-type */
+#define TEST_EXPAND(a, b, c)						\
+static int								\
+cpu_crypto_gcm_perf##a##_##c(void)					\
+{									\
+	return cpu_crypto_test_aead_perf(c, b);				\
+}									\
+
+#include "cpu_crypto_all_gcm_perf_test_cases.h"
+#undef TEST_EXPAND
+
+static struct unit_test_suite security_cpu_crypto_aesgcm_perf_testsuite  = {
+		.suite_name = "Security CPU Crypto AESNI-GCM Perf Test Suite",
+		.setup = testsuite_setup,
+		.teardown = testsuite_teardown,
+		.unit_test_cases = {
+#define TEST_EXPAND(a, b, c)						\
+		TEST_CASE_ST(ut_setup, ut_teardown,			\
+				cpu_crypto_gcm_perf##a##_##c),		\
+
+#include "cpu_crypto_all_gcm_perf_test_cases.h"
+#undef TEST_EXPAND
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+		},
+};
+
+static int
+test_cpu_crypto_aesni_gcm_perf(void)
+{
+	gbl_driver_id =	rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	return unit_test_suite_runner(
+			&security_cpu_crypto_aesgcm_perf_testsuite);
+}
+
+REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_autotest,
+		test_cpu_crypto_aesni_gcm);
+
+REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_perftest,
+		test_cpu_crypto_aesni_gcm_perf);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
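
The TEST_EXPAND blocks in the test patch above use the X-macro pattern: a
single list of test cases, kept in a header, is re-included several times
with a different TEST_EXPAND body, first to stamp out the test functions and
then to build the test-suite table. A minimal, self-contained sketch of the
technique (the CASE_LIST entries are illustrative, not the contents of
cpu_crypto_all_gcm_unit_test_cases.h):

#include <stdio.h>

/* the case list; the real code keeps this in a header and #includes it */
#define CASE_LIST        \
	TEST_EXPAND(foo) \
	TEST_EXPAND(bar)

/* first expansion: generate one function per case */
#define TEST_EXPAND(t) static int test_##t(void) { puts(#t); return 0; }
CASE_LIST
#undef TEST_EXPAND

/* second expansion: generate the table of test functions */
static int (*const tests[])(void) = {
#define TEST_EXPAND(t) test_##t,
	CASE_LIST
#undef TEST_EXPAND
};

int main(void)
{
	unsigned int i;

	for (i = 0; i != sizeof(tests) / sizeof(tests[0]); i++)
		tests[i]();
	return 0;
}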

* [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (2 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28 11:00     ` Ananyev, Konstantin
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
                     ` (4 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Introduce a CPU crypto action type which makes it possible to
differentiate between regular asynchronous 'none security' sessions
and synchronous, CPU crypto accelerated sessions.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_security/rte_security.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 546779df2..309f7311c 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -307,10 +307,14 @@ enum rte_security_session_action_type {
 	/**< All security protocol processing is performed inline during
 	 * transmission
 	 */
-	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
 	/**< All security protocol processing including crypto is performed
 	 * on a lookaside accelerator
 	 */
+	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+	/**< Crypto processing for security protocol is processed by CPU
+	 * synchronously
+	 */
 };
 
 /** Security session protocol definition */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
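
For context, a sketch of how an application could opt into the new action
type when filling an rte_ipsec_session (a hedged illustration against this
series, not code from it; the crypto.dev_id field is added by the next
patch, and sa/ses/dev_id are assumed to be prepared elsewhere):

#include <rte_cryptodev.h>
#include <rte_security.h>
#include <rte_ipsec.h>

static int
setup_cpu_crypto_session(struct rte_ipsec_session *ss,
	struct rte_ipsec_sa *sa, struct rte_cryptodev_sym_session *ses,
	uint8_t dev_id)
{
	ss->sa = sa;
	/* same session layout as RTE_SECURITY_ACTION_TYPE_NONE, but the
	 * prepare step will run crypto synchronously on the CPU */
	ss->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
	ss->crypto.ses = ses;
	ss->crypto.dev_id = dev_id;

	/* binds SA and session and selects the sync handlers */
	return rte_ipsec_session_prepare(ss);
}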

* [dpdk-dev] [PATCH v4 5/8] ipsec: introduce support for cpu crypto mode
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (3 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Update the library to handle the CPU crypto security mode, which
utilizes cryptodev's synchronous, CPU accelerated crypto operations.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_ipsec/esp_inb.c   | 154 ++++++++++++++++++++++++++++++-----
 lib/librte_ipsec/esp_outb.c  | 134 +++++++++++++++++++++++++++---
 lib/librte_ipsec/misc.h      | 118 +++++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec.h |  18 +++-
 lib/librte_ipsec/sa.c        | 112 +++++++++++++++++++++----
 lib/librte_ipsec/sa.h        |  17 ++++
 lib/librte_ipsec/ses.c       |   3 +-
 7 files changed, 506 insertions(+), 50 deletions(-)

diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 5c653dd39..58b3dec1b 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop,
 	}
 }
 
+static inline uint32_t
+inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *pofs, uint32_t plen, void *iv)
+{
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint64_t *ivp;
+	uint32_t clen;
+
+	ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+		*pofs + sizeof(struct rte_esp_hdr));
+	clen = 0;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = (struct aead_gcm_iv *)iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CBC:
+	case ALGO_TYPE_3DES_CBC:
+		copy_iv(iv, ivp, sa->iv_len);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = (struct aesctr_cnt_blk *)iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen - sa->ctp.auth.length;
+	return clen;
+}
+
 /*
  * Helper function for prepare() to deal with situation when
  * ICV is spread by two segments. Tries to move ICV completely into the
@@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	}
 }
 
-/*
- * setup/update packet data and metadata for ESP inbound tunnel case.
- */
-static inline int32_t
-inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
-	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+static inline int
+inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc)
 {
 	int32_t rc;
 	uint64_t sqn;
-	uint32_t clen, icv_len, icv_ofs, plen;
-	struct rte_mbuf *ml;
 	struct rte_esp_hdr *esph;
 
 	esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen);
@@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	sqn = rte_be_to_cpu_32(esph->seq);
 	if (IS_ESN(sa))
 		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+	*sqc = rte_cpu_to_be_64(sqn);
 
+	/* check IPsec window */
 	rc = esn_inb_check_sqn(rsn, sa, sqn);
-	if (rc != 0)
-		return rc;
 
-	sqn = rte_cpu_to_be_64(sqn);
+	return rc;
+}
+
+/* prepare packet for upcoming processing */
+static inline int32_t
+inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	uint32_t clen, icv_len, icv_ofs, plen;
+	struct rte_mbuf *ml;
 
 	/* start packet manipulation */
 	plen = mb->pkt_len;
@@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 
 	icv_ofs += sa->sqh_len;
 
-	/* we have to allocate space for AAD somewhere,
+	/*
+	 * we have to allocate space for AAD somewhere,
 	 * right now - just use free trailing space at the last segment.
 	 * Would probably be more convenient to reserve space for AAD
 	 * inside rte_crypto_op itself
@@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	mb->pkt_len += sa->sqh_len;
 	ml->data_len += sa->sqh_len;
 
-	inb_pkt_xprepare(sa, sqn, icv);
 	return plen;
 }
 
+static inline int32_t
+inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+{
+	int rc;
+	rte_be64_t sqn;
+
+	rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn);
+	if (rc != 0)
+		return rc;
+
+	rc = inb_prepare(sa, mb, hlen, icv);
+	if (rc < 0)
+		return rc;
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return rc;
+}
+
 /*
  * setup/update packets and crypto ops for ESP inbound case.
  */
@@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc);
 			k++;
-		} else
+		} else {
 			dr[i - k] = i;
+			rte_errno = -rc;
+		}
 	}
 
 	rsn_release(sa, rsn);
 
 	/* copy not prepared mbufs beyond good ones */
-	if (k != num && k != 0) {
+	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
-		rte_errno = EBADMSG;
-	}
 
 	return k;
 }
@@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return k;
 }
 
-
 /*
  * *process* function for tunnel packets
  */
@@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
 
-	/* update SQN and replay winow */
+	/* update SQN and replay window */
 	n = esp_inb_rsn_update(sa, sqn, dr, k);
 
 	/* handle mbufs with wrong SQN */
@@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return n;
 }
 
+/*
+ * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO
+ * (synchronous mode).
+ */
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	/* grab rsn lock */
+	rsn = rsn_acquire(sa);
+
+	/* do preparation for all packets */
+	for (i = 0, k = 0; i != num; i++) {
+
+		/* calculate ESP header offset */
+		l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len;
+
+		/* prepare ESP packet for processing */
+		rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv);
+		if (rc >= 0) {
+			/* get encrypted data offset and length */
+			clen[k] = inb_cpu_crypto_prepare(sa, mb[i],
+				l4ofs + k, rc, ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* release rsn lock */
+	rsn_release(sa, rsn);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
 /*
  * process group of ESP inbound tunnel packets.
  */
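
The dr[i - k] = i bookkeeping in cpu_inb_pkt_prepare() above records the
index of the (i - k)-th failed packet so that move_bad_mbufs() can later
park the bad mbufs behind the k good ones. A simplified int-array sketch of
that reshuffle (an illustration of the pattern, not the actual librte_ipsec
helper):

#include <stdint.h>

/* move entries whose (ascending) indexes are listed in bad[] past the
 * good ones, preserving the relative order of the good entries */
static void
move_bad(int arr[], const uint32_t bad[], uint32_t num, uint32_t nb_bad)
{
	uint32_t i, j, k;

	if (nb_bad == 0)
		return;

	int stash[nb_bad];

	/* stash the failed entries */
	for (i = 0; i != nb_bad; i++)
		stash[i] = arr[bad[i]];

	/* compact the good entries towards the front */
	for (i = 0, j = 0, k = 0; i != num; i++) {
		if (j != nb_bad && i == bad[j])
			j++;
		else
			arr[k++] = arr[i];
	}

	/* append the failed entries at the tail */
	for (i = 0; i != nb_bad; i++)
		arr[k + i] = stash[i];
}
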
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index e983b25a3..faac831d2 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -15,6 +15,9 @@
 #include "misc.h"
 #include "pad.h"
 
+typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv, uint8_t sqh_len);
 
 /*
  * helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = sa->proto;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 static inline int32_t
 outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	uint32_t l2len, uint32_t l3len, union sym_op_data *icv,
-	uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len)
 {
 	uint8_t np;
 	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	struct rte_esp_tail *espt;
 	char *ph, *pt;
 	uint64_t *iv;
+	uint32_t l2len, l3len;
+
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
 
 	uhlen = l2len + l3len;
 	plen = mb->pkt_len - uhlen;
@@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = np;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv,
-					  sa->sqh_len);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
+				  sa->sqh_len);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return k;
 }
 
+
+static inline uint32_t
+outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
+	uint32_t plen, void *iv)
+{
+	uint64_t *ivp = iv;
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint32_t clen;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen + sa->ctp.auth.length;
+	return clen;
+}
+
+static uint16_t
+cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num,
+		esp_outb_prepare_t prepare, uint32_t cofs_mask)
+{
+	int32_t rc;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	uint32_t i, k, n;
+	uint32_t l2, l3;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	for (i = 0, k = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		/* calculate ESP header offset */
+		l4ofs[k] = (l2 + l3) & cofs_mask;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(ivbuf[k], sqc);
+
+		/* try to update the packet itself */
+		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+
+		/* success, proceed with preparations */
+		if (rc >= 0) {
+
+			outb_pkt_xprepare(sa, sqc, &icv);
+
+			/* get encrypted data offset and length */
+			clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc,
+				ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		move_bad_mbufs(mb, dr, n, n - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0);
+}
+
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare,
+		UINT32_MAX);
+}
+
 /*
  * process outbound packets for SA with ESN support,
  * for algorithms that require SQN.hibits to be implictly included
@@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n, l2, l3;
+	uint32_t i, k, n;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
@@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	k = 0;
 	for (i = 0; i != n; i++) {
 
-		l2 = mb[i]->l2_len;
-		l3 = mb[i]->l3_len;
-
 		sqc = rte_cpu_to_be_64(sqn + i);
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
-				l2, l3, &icv, 0);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
 
 		k += (rc >= 0);
 
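
A quick note on the cofs_mask parameter above: cpu_outb_tun_pkt_prepare()
passes 0 and cpu_outb_trs_pkt_prepare() passes UINT32_MAX, so the single
expression (l2 + l3) & cofs_mask yields an L4 offset of 0 for tunnel mode
(processing starts at the newly prepended outer header) and l2 + l3 for
transport mode, with no per-packet branch. A tiny self-checking
illustration:

#include <stdint.h>
#include <assert.h>

int main(void)
{
	uint32_t l2 = 14, l3 = 20;	/* Ethernet + IPv4 header sizes */

	assert(((l2 + l3) & 0) == 0);		/* tunnel: offset 0 */
	assert(((l2 + l3) & UINT32_MAX) == 34);	/* transport: l2 + l3 */
	return 0;
}
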
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index fe4641bfc..6443e0d23 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -105,4 +105,122 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 	mb->pkt_len -= len;
 }
 
+static inline int
+mbuf_to_cryptovec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len,
+	struct rte_crypto_vec vec[], uint32_t num)
+{
+	uint32_t i;
+	struct rte_mbuf *nseg;
+	uint32_t left;
+	uint32_t seglen;
+
+	/* assuming that requested data starts in the first segment */
+	RTE_ASSERT(mb->data_len > ofs);
+
+	if (mb->nb_segs > num)
+		return -mb->nb_segs;
+
+	vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs);
+
+	/* whole data lies in the first segment */
+	seglen = mb->data_len - ofs;
+	if (data_len <= seglen) {
+		vec[0].len = data_len;
+		return 1;
+	}
+
+	/* data spread across segments */
+	vec[0].len = seglen;
+	left = data_len - seglen;
+	for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
+		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
+
+		seglen = nseg->data_len;
+		if (left <= seglen) {
+			/* whole requested data is completed */
+			vec[i].len = left;
+			left = 0;
+			break;
+		}
+
+		/* use whole segment */
+		vec[i].len = seglen;
+		left -= seglen;
+	}
+
+	RTE_ASSERT(left == 0);
+	return i + 1;
+}
+
+/*
+ * process packets using sync crypto engine
+ */
+static inline void
+cpu_crypto_bulk(const struct rte_ipsec_session *ss,
+	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
+	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	uint32_t clen[], uint32_t num)
+{
+	uint32_t i, j, n;
+	int32_t vcnt, vofs;
+	int32_t st[num];
+	struct rte_crypto_sgl vecpkt[num];
+	struct rte_crypto_vec vec[UINT8_MAX];
+	struct rte_crypto_sym_vec symvec;
+
+	const uint32_t vnum = RTE_DIM(vec);
+
+	j = 0, n = 0;
+	vofs = 0;
+	for (i = 0; i != num; i++) {
+
+		vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], &vec[vofs],
+			vnum - vofs);
+
+		/* not enough space in vec[] to hold all segments */
+		if (vcnt < 0) {
+			/* fill the request structure */
+			symvec.sgl = &vecpkt[j];
+			symvec.iv = &iv[j];
+			symvec.aad = &aad[j];
+			symvec.digest = &dgst[j];
+			symvec.status = &st[j];
+			symvec.num = i - j;
+
+			/* flush vec array and try again */
+			n += rte_cryptodev_sym_cpu_crypto_process(
+				ss->crypto.dev_id, ss->crypto.ses, ofs,
+				&symvec);
+			vofs = 0;
+			vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], vec,
+				vnum);
+			RTE_ASSERT(vcnt > 0);
+			j = i;
+		}
+
+		vecpkt[i].vec = &vec[vofs];
+		vecpkt[i].num = vcnt;
+		vofs += vcnt;
+	}
+
+	/* fill the request structure */
+	symvec.sgl = &vecpkt[j];
+	symvec.iv = &iv[j];
+	symvec.aad = &aad[j];
+	symvec.digest = &dgst[j];
+	symvec.status = &st[j];
+	symvec.num = i - j;
+
+	n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id,
+		ss->crypto.ses, ofs, &symvec);
+
+	j = num - n;
+	for (i = 0; j != 0 && i != num; i++) {
+		if (st[i] != 0) {
+			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			j--;
+		}
+	}
+}
+
 #endif /* _MISC_H_ */
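
A short usage sketch for the mbuf_to_cryptovec() helper above (hedged: the
helper is internal to librte_ipsec and assumed to be in scope, and
build_sgl() is a hypothetical wrapper). It fills a caller-provided
rte_crypto_vec array and wires it into an rte_crypto_sgl, propagating the
negated segment count when vec[] is too small, which is the condition
cpu_crypto_bulk() checks for:

#include <rte_crypto_sym.h>
#include <rte_mbuf.h>

static int
build_sgl(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len,
	struct rte_crypto_vec vec[], uint32_t num, struct rte_crypto_sgl *sgl)
{
	int vcnt;

	vcnt = mbuf_to_cryptovec(mb, ofs, data_len, vec, num);
	if (vcnt < 0)
		return vcnt;	/* -nb_segs: vec[] cannot hold the chain */

	sgl->vec = vec;
	sgl->num = vcnt;
	return vcnt;
}
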
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index f3b1f936b..fd685887c 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -33,10 +33,15 @@ struct rte_ipsec_session;
  *   (see rte_ipsec_pkt_process for more details).
  */
 struct rte_ipsec_sa_pkt_func {
-	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+	union {
+		uint16_t (*async)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				struct rte_crypto_op *cop[],
 				uint16_t num);
+		uint16_t (*sync)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+	} prepare;
 	uint16_t (*process)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				uint16_t num);
@@ -62,6 +67,7 @@ struct rte_ipsec_session {
 	union {
 		struct {
 			struct rte_cryptodev_sym_session *ses;
+			uint8_t dev_id;
 		} crypto;
 		struct {
 			struct rte_security_session *ses;
@@ -114,7 +120,15 @@ static inline uint16_t
 rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
 {
-	return ss->pkt_func.prepare(ss, mb, cop, num);
+	return ss->pkt_func.prepare.async(ss, mb, cop, num);
+}
+
+__rte_experimental
+static inline uint16_t
+rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	return ss->pkt_func.prepare.sync(ss, mb, num);
 }
 
 /**
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6f1d92c3c..0f89a362f 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -243,10 +243,26 @@ static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
 	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = 0;
-	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
 	sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len;
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = 0;
+		sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+		sa->cofs.ofs.cipher.tail = sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
 }
 
 /*
@@ -269,13 +285,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 
 	sa->sqn.outb.raw = 1;
 
-	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = hlen;
-	sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
-		sa->iv_len + sa->sqh_len;
-
 	algo_type = sa->algo_type;
 
+	/*
+	 * Setup auth and cipher length and offset.
+	 * these params may differ with new algorithms support
+	 */
+
 	switch (algo_type) {
 	case ALGO_TYPE_AES_GCM:
 	case ALGO_TYPE_AES_CTR:
@@ -286,11 +302,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 		break;
 	case ALGO_TYPE_AES_CBC:
 	case ALGO_TYPE_3DES_CBC:
-		sa->ctp.cipher.offset = sa->hdr_len +
-			sizeof(struct rte_esp_hdr);
+		sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
 		sa->ctp.cipher.length = sa->iv_len;
 		break;
 	}
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = hlen;
+		sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
+			sa->iv_len + sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
+	sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
+			(sa->ctp.cipher.offset + sa->ctp.cipher.length);
 }
 
 /*
@@ -544,9 +579,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss,
  * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
  * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
  */
-static uint16_t
-pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
-	uint16_t num)
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
 {
 	uint32_t i, k;
 	uint32_t dr[num];
@@ -588,21 +623,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
 	switch (sa->type & msk) {
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_tun_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_trs_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_outb_tun_prepare;
+		pf->prepare.async = esp_outb_tun_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_outb_trs_prepare;
+		pf->prepare.async = esp_outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_outb_tun_pkt_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_outb_trs_pkt_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
@@ -660,7 +733,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	int32_t rc;
 
 	rc = 0;
-	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { {NULL}, NULL };
 
 	switch (ss->type) {
 	case RTE_SECURITY_ACTION_TYPE_NONE:
@@ -677,9 +750,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 			pf->process = inline_proto_outb_pkt_process;
 		break;
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
-		pf->prepare = lksd_proto_prepare;
+		pf->prepare.async = lksd_proto_prepare;
 		pf->process = pkt_flag_process;
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		rc = cpu_crypto_pkt_func_select(sa, pf);
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
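
To make the head/tail computation above concrete, a worked example
(assumptions: inbound ESP, AES-CBC with a 16-byte IV, HMAC-SHA1, ESN
enabled so sqh_len is 4). The ESP header and IV are authenticated but not
decrypted, giving a 24-byte cipher "head"; the reconstructed ESN high bits
are authenticated after the ciphertext, giving a 4-byte "tail":

#include <stdint.h>
#include <assert.h>

int main(void)
{
	const uint32_t esp_hdr_len = 8, iv_len = 16, sqh_len = 4;
	const uint32_t auth_ofs = 0;	/* ICV covers from the ESP header */
	const uint32_t cipher_ofs = esp_hdr_len + iv_len;

	/* cofs.ofs.cipher.head: auth-only bytes before the ciphertext */
	assert(cipher_ofs - auth_ofs == 24);
	/* cofs.ofs.cipher.tail: auth-only bytes after the ciphertext */
	assert(sqh_len == 4);
	return 0;
}
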
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 51e69ad05..a16238301 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -88,6 +88,8 @@ struct rte_ipsec_sa {
 		union sym_op_ofslen cipher;
 		union sym_op_ofslen auth;
 	} ctp;
+	/* cpu-crypto offsets */
+	union rte_crypto_sym_ofs cofs;
 	/* tx_offload template for tunnel mbuf */
 	struct {
 		uint64_t msk;
@@ -156,6 +158,10 @@ uint16_t
 inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 /* outbound processing */
 
 uint16_t
@@ -170,6 +176,10 @@ uint16_t
 esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num);
 
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num);
+
 uint16_t
 inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
@@ -182,4 +192,11 @@ uint16_t
 inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
index 82c765a33..7a123e2d9 100644
--- a/lib/librte_ipsec/ses.c
+++ b/lib/librte_ipsec/ses.c
@@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss)
 	if (ss == NULL || ss->sa == NULL)
 		return -EINVAL;
 
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+		ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		if (ss->crypto.ses == NULL)
 			return -EINVAL;
 	} else {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
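
A sketch of how a caller can drive the new prepare union (illustrative,
not code from the series): the synchronous path completes crypto inside
the prepare call itself, while the asynchronous path still produces
crypto-ops for a later enqueue/dequeue.

#include <rte_security.h>
#include <rte_ipsec.h>

static uint16_t
prepare_burst(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
	struct rte_crypto_op *cop[], uint16_t num)
{
	if (ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
		/* crypto/auth is already done when this returns */
		return rte_ipsec_pkt_cpu_prepare(ss, mb, num);

	/* fills cop[]; the actual crypto happens in the device */
	return rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
}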

* [dpdk-dev] [PATCH v4 6/8] examples/ipsec-secgw: cpu crypto support
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (4 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has
been introduced in the configuration, allowing use of the
aforementioned acceleration.

Legacy mode is not currently supported.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 examples/ipsec-secgw/ipsec.c         |  23 ++++-
 examples/ipsec-secgw/ipsec_process.c | 134 +++++++++++++++++----------
 examples/ipsec-secgw/sa.c            |  28 ++++--
 3 files changed, 128 insertions(+), 57 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index d4b57121a..49a947990 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -10,6 +10,7 @@
 #include <rte_crypto.h>
 #include <rte_security.h>
 #include <rte_cryptodev.h>
+#include <rte_ipsec.h>
 #include <rte_ethdev.h>
 #include <rte_mbuf.h>
 #include <rte_hash.h>
@@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			ipsec_ctx->tbl[cdev_id_qp].id,
 			ipsec_ctx->tbl[cdev_id_qp].qp);
 
-	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE &&
+		ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		struct rte_security_session_conf sess_conf = {
 			.action_type = ips->type,
 			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
@@ -126,6 +128,18 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			return -1;
 		}
 	} else {
+		if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+			struct rte_cryptodev_info info;
+			uint16_t cdev_id;
+
+			cdev_id = ipsec_ctx->tbl[cdev_id_qp].id;
+			rte_cryptodev_info_get(cdev_id, &info);
+			if (!(info.feature_flags &
+				RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO))
+				return -ENOTSUP;
+
+			ips->crypto.dev_id = cdev_id;
+		}
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
@@ -476,6 +490,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			rte_security_attach_session(&priv->cop,
 				ips->security.ses);
 			break;
+
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by"
+					" the legacy mode\n");
+			rte_pktmbuf_free(pkts[i]);
+			continue;
+
 		case RTE_SECURITY_ACTION_TYPE_NONE:
 
 			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 2eb5c8b34..576a9fa8a 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx,
 	int32_t rc;
 
 	/* setup crypto section */
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+			ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		RTE_ASSERT(ss->crypto.ses == NULL);
 		rc = create_lookaside_session(ctx, sa, ss);
 		if (rc != 0)
@@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 	return k;
 }
 
+/*
+ * helper routine for inline and cpu (synchronous) processing.
+ * This is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt().
+ * Should be removed in the future.
+ */
+static inline void
+prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint32_t j;
+	struct ipsec_mbuf_metadata *priv;
+
+	for (j = 0; j != cnt; j++) {
+		priv = get_priv(mb[j]);
+		priv->sa = sa;
+	}
+}
+
+/*
+ * finish processing of packets successfully decrypted by an inline processor
+ */
+static uint32_t
+ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_process(ips, mb, cnt);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
+/*
+ * process packets synchronously
+ */
+static uint32_t
+ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt);
+	k = rte_ipsec_pkt_process(ips, mb, k);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
 /*
  * Process ipsec packets.
  * If packet belong to SA that is subject of inline-crypto,
@@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 void
 ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 {
-	uint64_t satp;
-	uint32_t i, j, k, n;
+	uint32_t i, k, n;
 	struct ipsec_sa *sa;
-	struct ipsec_mbuf_metadata *priv;
 	struct rte_ipsec_group *pg;
 	struct rte_ipsec_session *ips;
 	struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 	n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num);
 
 	for (i = 0; i != n; i++) {
+
 		pg = grp + i;
 		sa = ipsec_mask_saptr(pg->id.ptr);
 
-		ips = ipsec_get_primary_session(sa);
+		/* fall back to cryptodev for RX packets which the inline
+		 * processor was unable to process
+		 */
+		if (sa != NULL)
+			ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ?
+				ipsec_get_fallback_session(sa) :
+				ipsec_get_primary_session(sa);
 
 		/* no valid HW session for that SA, try to create one */
 		if (sa == NULL || (ips->crypto.ses == NULL &&
@@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 			k = 0;
 
 		/* process packets inline */
-		else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
-				ips->type ==
-				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) {
-
-			/* get SA type */
-			satp = rte_ipsec_sa_type(ips->sa);
-
-			/*
-			 * This is just to satisfy inbound_sa_check()
-			 * and get_hop_for_offload_pkt().
-			 * Should be removed in future.
-			 */
-			for (j = 0; j != pg->cnt; j++) {
-				priv = get_priv(pg->m[j]);
-				priv->sa = sa;
+		else {
+			switch (ips->type) {
+			/* enqueue packets to crypto dev */
+			case RTE_SECURITY_ACTION_TYPE_NONE:
+			case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+				k = ipsec_prepare_crypto_group(ctx, sa, ips,
+					pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+			case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+				k = ipsec_process_inline_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+				k = ipsec_process_cpu_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			default:
+				k = 0;
 			}
-
-			/* fallback to cryptodev with RX packets which inline
-			 * processor was unable to process
-			 */
-			if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) {
-				/* offload packets to cryptodev */
-				struct rte_ipsec_session *fallback;
-
-				fallback = ipsec_get_fallback_session(sa);
-				if (fallback->crypto.ses == NULL &&
-					fill_ipsec_session(fallback, ctx, sa)
-					!= 0)
-					k = 0;
-				else
-					k = ipsec_prepare_crypto_group(ctx, sa,
-						fallback, pg->m, pg->cnt);
-			} else {
-				/* finish processing of packets successfully
-				 * decrypted by an inline processor
-				 */
-				k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt);
-				copy_to_trf(trf, satp, pg->m, k);
-
-			}
-		/* enqueue packets to crypto dev */
-		} else {
-			k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m,
-				pg->cnt);
 		}
 
 		/* drop packets that cannot be enqueued/processed */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index c75a5a15f..f25a4082f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -586,6 +586,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 				RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
 			else if (strcmp(tokens[ti], "no-offload") == 0)
 				ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
+			else if (strcmp(tokens[ti], "cpu-crypto") == 0)
+				ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
 			else {
 				APP_CHECK(0, status, "Invalid input \"%s\"",
 						tokens[ti]);
@@ -679,10 +681,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 	if (status->status < 0)
 		return;
 
-	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0))
 		printf("Missing portid option, falling back to non-offload\n");
 
-	if (!type_p || !portid_p) {
+	if (!type_p || (!portid_p && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) {
 		ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		rule->portid = -1;
 	}
@@ -768,15 +772,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
 		printf("lookaside-protocol-offload ");
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		printf("cpu-crypto-accelerated");
+		break;
 	}
 
 	fallback_ips = &sa->sessions[IPSEC_SESSION_FALLBACK];
 	if (fallback_ips != NULL && sa->fallback_sessions > 0) {
 		printf("inline fallback: ");
-		if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		switch (fallback_ips->type) {
+		case RTE_SECURITY_ACTION_TYPE_NONE:
 			printf("lookaside-none");
-		else
+			break;
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			printf("cpu-crypto-accelerated");
+			break;
+		default:
 			printf("invalid");
+			break;
+		}
 	}
 	printf("\n");
 }
@@ -975,7 +989,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 		}
 
-
 		switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) {
 		case IP4_TUNNEL:
 			sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -1026,7 +1039,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					return -EINVAL;
 				}
 			}
-			print_one_sa_rule(sa, inbound);
 		} else {
 			switch (sa->cipher_algo) {
 			case RTE_CRYPTO_CIPHER_NULL:
@@ -1091,9 +1103,9 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
 			sa_ctx->xf[idx].b.next = NULL;
 			sa->xforms = &sa_ctx->xf[idx].a;
-
-			print_one_sa_rule(sa, inbound);
 		}
+
+		print_one_sa_rule(sa, inbound);
 	}
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
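
A hypothetical ipsec-secgw SA rule using the new type (keys and addresses
are placeholders in the style of the test scripts later in this series):

sa out 7 cipher_algo aes-128-cbc \
cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
auth_algo sha1-hmac \
auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} \
type cpu-crypto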

* [dpdk-dev] [PATCH v4 7/8] examples/ipsec-secgw: cpu crypto testing
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (5 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Enable cpu-crypto mode testing by adding a dedicated environment
variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows
running the test scenarios with cpu crypto acceleration.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 examples/ipsec-secgw/test/common_defs.sh      | 21 +++++++++++++++++++
 examples/ipsec-secgw/test/linux_test4.sh      | 11 +---------
 examples/ipsec-secgw/test/linux_test6.sh      | 11 +---------
 .../test/trs_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/trs_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/trs_aesctr_sha1_common_defs.sh       |  8 +++----
 .../test/tun_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/tun_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/tun_aesctr_sha1_common_defs.sh       |  8 +++----
 9 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh
index 4aac4981a..6b6ae06f3 100644
--- a/examples/ipsec-secgw/test/common_defs.sh
+++ b/examples/ipsec-secgw/test/common_defs.sh
@@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc}
 DEF_MTU_LEN=1400
 DEF_PING_LEN=1200
 
+#update operation mode based on env var values
+select_mode()
+{
+	# select sync/async mode
+	if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "${CRYPTO_PRIM_TYPE} is enabled"
+		SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}"
+	fi
+
+	#make linux generate fragmented packets
+	if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "multi-segment test is enabled"
+		SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
+		PING_LEN=5000
+		MTU_LEN=1500
+	else
+		PING_LEN=${DEF_PING_LEN}
+		MTU_LEN=${DEF_MTU_LEN}
+	fi
+}
+
 #setup mtu on local iface
 set_local_mtu()
 {
diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh
index 760451000..fb8ae1023 100644
--- a/examples/ipsec-secgw/test/linux_test4.sh
+++ b/examples/ipsec-secgw/test/linux_test4.sh
@@ -45,16 +45,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh
index 479f29be3..dbcca7936 100644
--- a/examples/ipsec-secgw/test/linux_test6.sh
+++ b/examples/ipsec-secgw/test/linux_test6.sh
@@ -46,16 +46,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
index 3c5c18afd..62118bb3f 100644
--- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,7 +48,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo 3des-cbc \
@@ -56,7 +56,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
index 9dbdd1765..7ddeb2b5a 100644
--- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
index 6aba680f9..f0178355a 100644
--- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
index 7c3226f84..d8869fad0 100644
--- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,14 +48,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
index bdf5938a0..2616926b2 100644
--- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
index 06f2ef0c6..06b561fd7 100644
--- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
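
A hedged example of driving the scripts above (note that select_mode()
only appends CRYPTO_PRIM_TYPE when SGW_CMD_XPRM is also set, and <MODE>
must match an existing <MODE>_defs.sh):

# run the IPv4 linux test with synchronous cpu-crypto acceleration
CRYPTO_PRIM_TYPE='type cpu-crypto' ./linux_test4.sh <MODE>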

* [dpdk-dev] [PATCH v4 8/8] doc: add cpu crypto related documentation
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (6 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
@ 2020-01-28  3:16   ` Marcin Smoczynski
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28  3:16 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Update the documentation with a description of cpu crypto in the
cryptodev, ipsec and security libraries.

Add release notes for 20.02.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 doc/guides/cryptodevs/aesni_gcm.rst     |  5 ++++
 doc/guides/prog_guide/cryptodev_lib.rst | 31 +++++++++++++++++++++++++
 doc/guides/prog_guide/ipsec_lib.rst     |  8 +++++++
 doc/guides/prog_guide/rte_security.rst  | 15 ++++++++----
 doc/guides/rel_notes/release_20_02.rst  |  8 +++++++
 5 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 151aa3060..6b1a3d2a0 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -9,6 +9,11 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
 support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD documentation
 to learn more about it, including installation).
 
+The AES-NI GCM PMD supports a synchronous mode of operation via the
+``rte_cryptodev_sym_cpu_crypto_process`` function call for both AES-GCM and
+GMAC; however, GMAC support is limited to one segment per operation. Please
+refer to the ``rte_crypto`` programmer's guide for more details.
+
 Features
 --------
 
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index ac1643774..1a01e1bda 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -600,6 +600,37 @@ chain.
         };
     };
 
+Synchronous mode
+----------------
+
+Some cryptodevs support a synchronous mode alongside the standard asynchronous
+mode. In that case operations are performed directly when calling the
+``rte_cryptodev_sym_cpu_crypto_process`` function, with no prior enqueue and
+dequeue of an operation. This mode of operation allows cryptodevs which
+utilize CPU cryptographic acceleration to achieve a significant performance
+boost compared to the standard asynchronous approach. Cryptodevs supporting
+synchronous mode have the ``RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO`` feature flag set.
+
+To perform a synchronous operation, a call to
+``rte_cryptodev_sym_cpu_crypto_process`` has to be made with a vectorized
+operation descriptor (``struct rte_crypto_sym_vec``) containing:
+
+- ``num`` - number of operations to perform,
+- a pointer to an array of size ``num`` containing scatter-gather list
+  descriptors of the performed operations (``struct rte_crypto_sgl``). Each
+  instance of ``struct rte_crypto_sgl`` consists of a number of segments and a
+  pointer to an array of segment descriptors ``struct rte_crypto_vec``;
+- pointers to arrays of size ``num`` containing IV, AAD and digest
+  information,
+- a pointer to an array of size ``num`` where status information will be
+  stored for each operation.
+
+The function returns the number of successfully completed operations and sets
+an appropriate status value for each operation in the status array provided
+as a call argument. A status other than zero must be treated as an error.
+
+For more details, e.g. how to convert an mbuf to an SGL, please refer to the
+example usage in the IPsec library implementation and to the sketch below.
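+
+As a minimal illustrative sketch, a single one-segment AEAD operation can be
+processed as follows (``dev_id``, ``sess``, ``buf``, ``len``, ``iv``, ``aad``
+and ``digest`` are assumed to be provided by the application; error handling
+is reduced to a printout):
+
+.. code-block:: c
+
+    struct rte_crypto_vec seg = { .base = buf, .len = len };
+    struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
+    void *iv_p = iv, *aad_p = aad, *dig_p = digest;
+    int32_t status = 0;
+    union rte_crypto_sym_ofs ofs = { .raw = 0 };
+    struct rte_crypto_sym_vec vec = {
+        .sgl = &sgl, .iv = &iv_p, .aad = &aad_p,
+        .digest = &dig_p, .status = &status, .num = 1,
+    };
+
+    if (rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &vec) != 1)
+        printf("cpu crypto op failed, status %d\n", status);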
+
 Sample code
 -----------
 
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 1ce0db453..e6a21fae6 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -81,6 +81,14 @@ In that mode the library functions perform
   - verify that crypto device operations (encryption, ICV generation)
     were completed successfully
 
+RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In that mode the library functions perform the same operations as in
+``RTE_SECURITY_ACTION_TYPE_NONE``. The only difference is that crypto
+operations are performed with the CPU crypto synchronous API, as sketched
+below.
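+
+A minimal sketch of selecting this mode for an SA (``sa`` points to a
+prepared ``rte_ipsec_sa`` and ``cs`` is an already initialized cryptodev
+symmetric session, both assumed to be provided by the application):
+
+.. code-block:: c
+
+    struct rte_ipsec_session ss = {
+        .sa = sa,
+        .type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO,
+        .crypto = { .ses = cs },
+    };
+
+    rte_ipsec_session_prepare(&ss);
+    /* then use rte_ipsec_pkt_process() as with any other session type */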
+
+
 RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f77fb89dc..a911c676b 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -511,13 +511,20 @@ Offload.
         /**< No security actions */
         RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
         /**< Crypto processing for security protocol is processed inline
-         * during transmission */
+         * during transmission
+         */
         RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
         /**< All security protocol processing is performed inline during
-         * transmission */
-        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+         * transmission
+         */
+        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
         /**< All security protocol processing including crypto is performed
-         * on a lookaside accelerator */
+         * on a lookaside accelerator
+         */
+        RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+        /**< Crypto processing for security protocol is processed by CPU
+         * synchronously
+         */
     };
 
 The ``rte_security_session_protocol`` is defined as
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 50e2c1484..b6cf0c4d1 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -143,6 +143,14 @@ New Features
   Added a new OCTEON TX2 rawdev PMD for End Point mode of operation.
   See the :doc:`../rawdevs/octeontx2_ep` for more details on this new PMD.
 
+* **Added synchronous Crypto burst API.**
+
+  A new API is introduced in the crypto library to handle synchronous
+  cryptographic operations, allowing performance gains for cryptodevs which
+  use CPU-based acceleration, such as Intel AES-NI. An example implementation
+  for the aesni_gcm cryptodev is provided, including unit tests. The IPsec
+  example application and the ipsec library itself were changed to allow
+  utilization of this new feature.
 
 Removed Items
 -------------
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
@ 2020-01-28  9:31     ` De Lara Guarch, Pablo
  2020-01-28 10:51       ` De Lara Guarch, Pablo
  0 siblings, 1 reply; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-28  9:31 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev

> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Tuesday, January 28, 2020 3:17 AM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v4 3/8] test/crypto: add CPU crypto tests
> 
> Add unit and performance tests for CPU crypto mode currently implemented by
> AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC test vectors.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>

Two things about this patch:
1 - Copyright dates are wrong, it should be 2020.
2 - I'd say for the moment it's OK, but for the future, we should try to integrate
the performance tests into the crypto perf application.

Thanks,
Pablo


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-28 10:49     ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-28 10:49 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Ananyev, Konstantin, Zhang,
	Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev



> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Tuesday, January 28, 2020 3:17 AM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support
> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an appropriate
> flag is set in an xform type number.
> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>

...

> @@ -331,9 +331,12 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
>  		.queue_pair_release	= aesni_gcm_pmd_qp_release,
>  		.queue_pair_count	= aesni_gcm_pmd_qp_count,
> 
> +		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
> +
>  		.sym_session_get_size	=
> aesni_gcm_pmd_sym_session_get_size,
>  		.sym_session_configure	=
> aesni_gcm_pmd_sym_session_configure,
>  		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
>  };
> 
>  struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops;
> +

Minor thing, but you should remove this blank line.

Apart from that:

Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests
  2020-01-28  9:31     ` De Lara Guarch, Pablo
@ 2020-01-28 10:51       ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 77+ messages in thread
From: De Lara Guarch, Pablo @ 2020-01-28 10:51 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Smoczynski, MarcinX, akhil.goyal, Ananyev,
	Konstantin, Zhang, Roy Fan, Doherty, Declan, Nicolau, Radu
  Cc: dev



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of De Lara Guarch, Pablo
> Sent: Tuesday, January 28, 2020 9:31 AM
> To: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>;
> akhil.goyal@nxp.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests
> 
> > -----Original Message-----
> > From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> > Sent: Tuesday, January 28, 2020 3:17 AM
> > To: akhil.goyal@nxp.com; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> > <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> > Nicolau, Radu <radu.nicolau@intel.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>
> > Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> > Subject: [PATCH v4 3/8] test/crypto: add CPU crypto tests
> >
> > Add unit and performance tests for CPU crypto mode currently
> > implemented by AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC
> test vectors.
> >
> > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> 
> Two things about this patch:
> 1 - Copyright dates are wrong, it should be 2020.
> 2 - I'd say for the moment it's OK, but for the future, we should try to integrate
> the performance tests into the crypto perf application.
> 
> Thanks,
> Pablo

Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-28 11:00     ` Ananyev, Konstantin
  0 siblings, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-28 11:00 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu, De Lara Guarch, Pablo
  Cc: dev

> Introduce a CPU crypto action type allowing differentiation between
> regular async 'none security' and synchronous, CPU crypto accelerated
> sessions.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_security/rte_security.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 546779df2..309f7311c 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -307,10 +307,14 @@ enum rte_security_session_action_type {
>  	/**< All security protocol processing is performed inline during
>  	 * transmission
>  	 */
> -	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> +	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
>  	/**< All security protocol processing including crypto is performed
>  	 * on a lookaside accelerator
>  	 */
> +	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> +	/**< Crypto processing for security protocol is processed by CPU
> +	 * synchronously
> +	 */
>  };
> 
>  /** Security session protocol definition */
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode
  2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
                     ` (7 preceding siblings ...)
  2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
@ 2020-01-28 14:22   ` Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
                       ` (8 more replies)
  8 siblings, 9 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Originally both SW and HW crypto PMDs use an rte_crypto_op based API to
process the crypto workload asynchronously. This approach provides uniformity
to both PMD types, but also introduces an unnecessary performance penalty to
SW PMDs that have to "simulate" HW async behavior (crypto-ops
enqueue/dequeue, HW address computations, storing/dereferencing user
provided data (mbuf) for each crypto-op, etc).

The aim is to introduce a new optional API for SW crypto-devices
to perform crypto processing in a synchronous manner.

v3 to v4 changes:
 - add feature discovery in the ipsec example application when
   using cpu-crypto
 - add gmac in aesni-gcm
 - add tests for aesni-gcm/cpu crypto mode
 - add documentation: pg and rel notes
 - remove xform flags as no longer needed
 - add some extra API comments
 - remove compilation error from v3

v4 to v5 changes:
 - fixed build error for arm64 (missing header include)
 - update licensing information

Marcin Smoczynski (8):
  cryptodev: introduce cpu crypto support API
  crypto/aesni_gcm: cpu crypto support
  test/crypto: add CPU crypto tests
  security: add cpu crypto action type
  ipsec: introduce support for cpu crypto mode
  examples/ipsec-secgw: cpu crypto support
  examples/ipsec-secgw: cpu crypto testing
  doc: add cpu crypto related documentation

 app/test/Makefile                             |   3 +-
 app/test/cpu_crypto_all_gcm_perf_test_cases.h |  11 +
 app/test/cpu_crypto_all_gcm_unit_test_cases.h |  49 +
 .../cpu_crypto_all_gmac_unit_test_cases.h     |   7 +
 app/test/meson.build                          |   3 +-
 app/test/test_cryptodev_cpu_crypto.c          | 931 ++++++++++++++++++
 doc/guides/cryptodevs/aesni_gcm.rst           |   7 +-
 doc/guides/prog_guide/cryptodev_lib.rst       |  33 +-
 doc/guides/prog_guide/ipsec_lib.rst           |  10 +-
 doc/guides/prog_guide/rte_security.rst        |  15 +-
 doc/guides/rel_notes/release_20_02.rst        |   8 +
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |  11 +-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 222 ++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   4 +-
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  13 +-
 examples/ipsec-secgw/ipsec.c                  |  25 +-
 examples/ipsec-secgw/ipsec_process.c          | 136 ++-
 examples/ipsec-secgw/sa.c                     |  30 +-
 examples/ipsec-secgw/test/common_defs.sh      |  21 +
 examples/ipsec-secgw/test/linux_test4.sh      |  11 +-
 examples/ipsec-secgw/test/linux_test6.sh      |  11 +-
 .../test/trs_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/trs_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/trs_aesctr_sha1_common_defs.sh       |   8 +-
 .../test/tun_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/tun_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/tun_aesctr_sha1_common_defs.sh       |   8 +-
 lib/librte_cryptodev/rte_crypto_sym.h         |  63 +-
 lib/librte_cryptodev/rte_cryptodev.c          |  35 +-
 lib/librte_cryptodev/rte_cryptodev.h          |  22 +-
 lib/librte_cryptodev/rte_cryptodev_pmd.h      |  21 +-
 .../rte_cryptodev_version.map                 |   1 +
 lib/librte_ipsec/esp_inb.c                    | 156 ++-
 lib/librte_ipsec/esp_outb.c                   | 136 ++-
 lib/librte_ipsec/misc.h                       | 120 ++-
 lib/librte_ipsec/rte_ipsec.h                  |  20 +-
 lib/librte_ipsec/sa.c                         | 114 ++-
 lib/librte_ipsec/sa.h                         |  19 +-
 lib/librte_ipsec/ses.c                        |   5 +-
 lib/librte_security/rte_security.h            |   8 +-
 40 files changed, 2143 insertions(+), 186 deletions(-)
 create mode 100644 app/test/cpu_crypto_all_gcm_perf_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gcm_unit_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gmac_unit_test_cases.h
 create mode 100644 app/test/test_cryptodev_cpu_crypto.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-31 14:30       ` Akhil Goyal
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
                       ` (7 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add a new API allowing crypto operations to be processed in a synchronous
manner. Operations are performed on a set of SG arrays.

Sync mode is selected by setting an appropriate flag in an xform
type number. Cryptodevs which allow the CPU crypto operation mode have to
set the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
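
A caller can discover the capability at runtime before choosing the
synchronous path, e.g. (a minimal sketch):

	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(dev_id, &info);
	if ((info.feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) == 0)
		return -ENOTSUP; /* fall back to async enqueue/dequeue */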

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_cryptodev/rte_crypto_sym.h         | 63 ++++++++++++++++++-
 lib/librte_cryptodev/rte_cryptodev.c          | 35 ++++++++++-
 lib/librte_cryptodev/rte_cryptodev.h          | 22 ++++++-
 lib/librte_cryptodev/rte_cryptodev_pmd.h      | 21 ++++++-
 .../rte_cryptodev_version.map                 |  1 +
 5 files changed, 138 insertions(+), 4 deletions(-)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index bc356f6ff..d6f3105fe 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2019 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _RTE_CRYPTO_SYM_H_
@@ -25,6 +25,67 @@ extern "C" {
 #include <rte_mempool.h>
 #include <rte_common.h>
 
+/**
+ * Crypto IO Vector (in analogy with struct iovec)
+ * Intended to be used to pass input/output data buffers to crypto data-path
+ * functions.
+ */
+struct rte_crypto_vec {
+	/** virtual address of the data buffer */
+	void *base;
+	/** IOVA of the data buffer */
+	rte_iova_t *iova;
+	/** length of the data buffer */
+	uint32_t len;
+};
+
+/**
+ * Crypto scatter-gather list descriptor. Consists of a pointer to an array
+ * of Crypto IO vectors with its size.
+ */
+struct rte_crypto_sgl {
+	/** start of an array of vectors */
+	struct rte_crypto_vec *vec;
+	/** size of an array of vectors */
+	uint32_t num;
+};
+
+/**
+ * Synchronous operation descriptor.
+ * Supposed to be used with CPU crypto API call.
+ */
+struct rte_crypto_sym_vec {
+	/** array of SGL vectors */
+	struct rte_crypto_sgl *sgl;
+	/** array of pointers to IV */
+	void **iv;
+	/** array of pointers to AAD */
+	void **aad;
+	/** array of pointers to digest */
+	void **digest;
+	/**
+	 * array of statuses for each operation:
+	 *  - 0 on success
+	 *  - errno on error
+	 */
+	int32_t *status;
+	/** number of operations to perform */
+	uint32_t num;
+};
+
+/**
+ * Used with rte_cryptodev_sym_cpu_crypto_process() to specify head/tail
+ * offsets for auth/cipher processing.
+ */
+union rte_crypto_sym_ofs {
+	uint64_t raw;
+	struct {
+		struct {
+			uint16_t head;
+			uint16_t tail;
+		} auth, cipher;
+	} ofs;
+};
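+
+/*
+ * Illustrative example (a hypothetical layout, not taken from any protocol
+ * specification): for a buffer whose first 16 bytes are authenticated but
+ * not encrypted, with no excluded tail bytes:
+ *
+ *	union rte_crypto_sym_ofs ofs;
+ *
+ *	ofs.raw = 0;
+ *	ofs.ofs.cipher.head = 16;
+ */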
 
 /** Symmetric Cipher Algorithms */
 enum rte_crypto_cipher_algorithm {
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 5c6359b5c..889d61319 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2017 Intel Corporation
+ * Copyright(c) 2015-2020 Intel Corporation
  */
 
 #include <sys/types.h>
@@ -494,6 +494,8 @@ rte_cryptodev_get_feature_name(uint64_t flag)
 		return "RSA_PRIV_OP_KEY_QT";
 	case RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED:
 		return "DIGEST_ENCRYPTED";
+	case RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO:
+		return "SYM_CPU_CRYPTO";
 	default:
 		return NULL;
 	}
@@ -1619,6 +1621,37 @@ rte_cryptodev_sym_session_get_user_data(
 	return (void *)(sess->sess_data + sess->nb_drivers);
 }
 
+static inline void
+sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		sym_crypto_fill_status(vec, EINVAL);
+		return 0;
+	}
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (*dev->dev_ops->sym_cpu_process == NULL ||
+		!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO)) {
+		sym_crypto_fill_status(vec, ENOTSUP);
+		return 0;
+	}
+
+	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
+}
+
 /** Initialise rte_crypto_op mempool element */
 static void
 rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index c6ffa3b35..7603af9f6 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2017 Intel Corporation.
+ * Copyright(c) 2015-2020 Intel Corporation.
  */
 
 #ifndef _RTE_CRYPTODEV_H_
@@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support encrypted-digest operations where digest is appended to data */
 #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS		(1ULL << 20)
 /**< Support asymmetric session-less operations */
+#define	RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO			(1ULL << 21)
+/**< Support symmetric cpu-crypto processing */
 
 
 /**
@@ -1274,6 +1276,24 @@ void *
 rte_cryptodev_sym_session_get_user_data(
 					struct rte_cryptodev_sym_session *sess);
 
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev_id	The device identifier.
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index fba14f2fa..0e6b5f443 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2016 Intel Corporation.
+ * Copyright(c) 2015-2020 Intel Corporation.
  */
 
 #ifndef _RTE_CRYPTODEV_PMD_H_
@@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
  */
 typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
 		struct rte_cryptodev_asym_session *sess);
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ *
+ */
+typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
+	(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
+	union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+
 
 /** Crypto device operations function pointer table */
 struct rte_cryptodev_ops {
@@ -342,6 +359,8 @@ struct rte_cryptodev_ops {
 	/**< Clear a Crypto sessions private data. */
 	cryptodev_asym_free_session_t asym_session_clear;
 	/**< Clear a Crypto sessions private data. */
+	cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+	/**< process input data synchronously (cpu-crypto). */
 };
 
 
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 1dd1e259a..6e41b4be5 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -71,6 +71,7 @@ EXPERIMENTAL {
 	rte_cryptodev_asym_session_init;
 	rte_cryptodev_asym_xform_capability_check_modlen;
 	rte_cryptodev_asym_xform_capability_check_optype;
+	rte_cryptodev_sym_cpu_crypto_process;
 	rte_cryptodev_sym_get_existing_header_session_size;
 	rte_cryptodev_sym_session_get_user_data;
 	rte_cryptodev_sym_session_pool_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-28 16:39       ` Ananyev, Konstantin
  2020-01-31 14:33       ` Akhil Goyal
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
                       ` (6 subsequent siblings)
  8 siblings, 2 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU crypto mode by introducing the required handler.
Crypto mode (sync/async) is chosen during sym session creation if an
appropriate flag is set in an xform type number.

Authenticated encryption and decryption are supported with tag
generation/verification.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |  11 +-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 222 +++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   4 +-
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  13 +-
 4 files changed, 240 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index e272f1067..74acac09c 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _AESNI_GCM_OPS_H_
@@ -65,4 +65,13 @@ struct aesni_gcm_ops {
 	aesni_gcm_finalize_t finalize_dec;
 };
 
+/** GCM per-session operation handlers */
+struct aesni_gcm_session_ops {
+	aesni_gcm_t cipher;
+	aesni_gcm_pre_t pre;
+	aesni_gcm_init_t init;
+	aesni_gcm_update_t update;
+	aesni_gcm_finalize_t finalize;
+};
+
 #endif /* _AESNI_GCM_OPS_H_ */
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1a03be31d..a1caab993 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #include <rte_common.h>
@@ -15,6 +15,31 @@
 
 static uint8_t cryptodev_driver_id;
 
+/* setup session handlers */
+static void
+set_func_ops(struct aesni_gcm_session *s, const struct aesni_gcm_ops *gcm_ops)
+{
+	s->ops.pre = gcm_ops->pre;
+	s->ops.init = gcm_ops->init;
+
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		s->ops.cipher = gcm_ops->enc;
+		s->ops.update = gcm_ops->update_enc;
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		s->ops.cipher = gcm_ops->dec;
+		s->ops.update = gcm_ops->update_dec;
+		s->ops.finalize = gcm_ops->finalize_dec;
+		break;
+	case AESNI_GMAC_OP_GENERATE:
+	case AESNI_GMAC_OP_VERIFY:
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	}
+}
+
 /** Parse crypto xform chain and set private session parameters */
 int
 aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
@@ -65,6 +90,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		/* Select Crypto operation */
 		if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT)
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION;
+		/* op == RTE_CRYPTO_AEAD_OP_DECRYPT */
 		else
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION;
 
@@ -78,7 +104,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -ENOTSUP;
 	}
 
-
 	/* IV check */
 	if (sess->iv.length != 16 && sess->iv.length != 12 &&
 			sess->iv.length != 0) {
@@ -102,6 +127,10 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -EINVAL;
 	}
 
+	/* setup session handlers */
+	set_func_ops(sess, &gcm_ops[sess->key]);
+
+	/* pre-generate key */
 	gcm_ops[sess->key].pre(key, &sess->gdata_key);
 
 	/* Digest check */
@@ -356,6 +385,191 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
 	return 0;
 }
 
+static inline void
+aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_encryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	if (s->req_digest_length != s->gen_digest_length) {
+		uint8_t tmpdigest[s->gen_digest_length];
+
+		s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+			s->gen_digest_length);
+		memcpy(digest, tmpdigest, s->req_digest_length);
+	} else {
+		s->ops.finalize(&s->gdata_key, gdata_ctx, digest,
+			s->gen_digest_length);
+	}
+
+	return 0;
+}
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_decryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	uint8_t tmpdigest[s->gen_digest_length];
+
+	s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+		s->gen_digest_length);
+
+	return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 :
+		EBADMSG;
+}
+
+static inline void
+aesni_gcm_process_gcm_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv, void *aad)
+{
+	uint32_t i;
+
+	/* init crypto operation */
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, aad,
+		(uint64_t)s->aad_length);
+
+	/* update with sgl data */
+	for (i = 0; i < sgl->num; i++) {
+		struct rte_crypto_vec *vec = &sgl->vec[i];
+
+		s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base,
+			vec->len);
+	}
+}
+
+static inline void
+aesni_gcm_process_gmac_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv)
+{
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, sgl->vec[0].base,
+		sgl->vec[0].len);
+}
+
+static inline uint32_t
+aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+/** Process CPU crypto bulk operations */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess,
+	__rte_unused union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	void *sess_priv;
+	struct aesni_gcm_session *s;
+	struct gcm_context_data gdata_ctx;
+
+	sess_priv = get_sym_session_private_data(sess, dev->driver_id);
+	if (unlikely(sess_priv == NULL)) {
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+
+	s = sess_priv;
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		return aesni_gcm_sgl_encrypt(s, &gdata_ctx, vec);
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		return aesni_gcm_sgl_decrypt(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_GENERATE:
+		return aesni_gmac_sgl_generate(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_VERIFY:
+		return aesni_gmac_sgl_verify(s, &gdata_ctx, vec);
+	default:
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -527,7 +741,8 @@ aesni_gcm_create(const char *name,
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO;
 
 	/* Check CPU for support for AES instruction set */
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
@@ -672,7 +887,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
 RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver,
 		cryptodev_driver_id);
 
-
 RTE_INIT(aesni_gcm_init_log)
 {
 	aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm");
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 2f66c7c58..c5e0878f5 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #include <string.h>
@@ -331,6 +331,8 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
 		.queue_pair_release	= aesni_gcm_pmd_qp_release,
 		.queue_pair_count	= aesni_gcm_pmd_qp_count,
 
+		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
+
 		.sym_session_get_size	= aesni_gcm_pmd_sym_session_get_size,
 		.sym_session_configure	= aesni_gcm_pmd_sym_session_configure,
 		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2039adb53..080d4f7e4 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _AESNI_GCM_PMD_PRIVATE_H_
@@ -92,6 +92,8 @@ struct aesni_gcm_session {
 	/**< GCM key type */
 	struct gcm_key_data gdata_key;
 	/**< GCM parameters */
+	struct aesni_gcm_session_ops ops;
+	/**< Session handlers */
 };
 
 
@@ -109,10 +111,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops,
 		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform);
 
-
-/**
- * Device specific operations function pointer structure */
+/* Device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops;
 
+/** CPU crypto bulk process handler */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
 
 #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-31 14:37       ` Akhil Goyal
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type Marcin Smoczynski
                       ` (5 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add unit and performance tests for CPU crypto mode currently implemented
by AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC test vectors.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 app/test/Makefile                             |   3 +-
 app/test/cpu_crypto_all_gcm_perf_test_cases.h |  11 +
 app/test/cpu_crypto_all_gcm_unit_test_cases.h |  49 +
 .../cpu_crypto_all_gmac_unit_test_cases.h     |   7 +
 app/test/meson.build                          |   3 +-
 app/test/test_cryptodev_cpu_crypto.c          | 931 ++++++++++++++++++
 6 files changed, 1002 insertions(+), 2 deletions(-)
 create mode 100644 app/test/cpu_crypto_all_gcm_perf_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gcm_unit_test_cases.h
 create mode 100644 app/test/cpu_crypto_all_gmac_unit_test_cases.h
 create mode 100644 app/test/test_cryptodev_cpu_crypto.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 57930c00b..bbe26bd0c 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2010-2017 Intel Corporation
+# Copyright(c) 2010-2020 Intel Corporation
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
@@ -203,6 +203,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_asym.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_cpu_crypto.c
 SRCS-$(CONFIG_RTE_LIBRTE_SECURITY) += test_cryptodev_security_pdcp.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_METRICS) += test_metrics.c
diff --git a/app/test/cpu_crypto_all_gcm_perf_test_cases.h b/app/test/cpu_crypto_all_gcm_perf_test_cases.h
new file mode 100644
index 000000000..425fcb510
--- /dev/null
+++ b/app/test/cpu_crypto_all_gcm_perf_test_cases.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
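+/*
+ * X-macro expansion list: each file that includes this header defines
+ * TEST_EXPAND(key_suffix, key_length_bytes, sgl_option) to generate one
+ * performance test case per entry below.
+ */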
+TEST_EXPAND(_128, 16, SGL_ONE_SEG)
+TEST_EXPAND(_192, 24, SGL_ONE_SEG)
+TEST_EXPAND(_256, 32, SGL_ONE_SEG)
+
+TEST_EXPAND(_128, 16, SGL_MAX_SEG)
+TEST_EXPAND(_192, 24, SGL_MAX_SEG)
+TEST_EXPAND(_256, 32, SGL_MAX_SEG)
diff --git a/app/test/cpu_crypto_all_gcm_unit_test_cases.h b/app/test/cpu_crypto_all_gcm_unit_test_cases.h
new file mode 100644
index 000000000..a2bc11b39
--- /dev/null
+++ b/app/test/cpu_crypto_all_gcm_unit_test_cases.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+TEST_EXPAND(gcm_test_case_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_7, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_8, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_192_7, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_1, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_2, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_3, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_4, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_5, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_6, SGL_ONE_SEG)
+TEST_EXPAND(gcm_test_case_256_7, SGL_ONE_SEG)
+
+TEST_EXPAND(gcm_test_case_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_7, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_8, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_192_7, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_1, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_2, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_3, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_4, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_5, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_6, SGL_MAX_SEG)
+TEST_EXPAND(gcm_test_case_256_7, SGL_MAX_SEG)
diff --git a/app/test/cpu_crypto_all_gmac_unit_test_cases.h b/app/test/cpu_crypto_all_gmac_unit_test_cases.h
new file mode 100644
index 000000000..97f9c2bec
--- /dev/null
+++ b/app/test/cpu_crypto_all_gmac_unit_test_cases.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+TEST_EXPAND(gmac_test_case_1, SGL_ONE_SEG)
+TEST_EXPAND(gmac_test_case_2, SGL_ONE_SEG)
+TEST_EXPAND(gmac_test_case_3, SGL_ONE_SEG)
diff --git a/app/test/meson.build b/app/test/meson.build
index 22b0cefaa..175e3c0fd 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2020 Intel Corporation
 
 if not get_option('tests')
 	subdir_done()
@@ -30,6 +30,7 @@ test_sources = files('commands.c',
 	'test_cryptodev.c',
 	'test_cryptodev_asym.c',
 	'test_cryptodev_blockcipher.c',
+	'test_cryptodev_cpu_crypto.c',
 	'test_cryptodev_security_pdcp.c',
 	'test_cycles.c',
 	'test_debug.c',
diff --git a/app/test/test_cryptodev_cpu_crypto.c b/app/test/test_cryptodev_cpu_crypto.c
new file mode 100644
index 000000000..4393bcdcc
--- /dev/null
+++ b/app/test/test_cryptodev_cpu_crypto.c
@@ -0,0 +1,931 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+#include "test_cryptodev_blockcipher.h"
+#include "test_cryptodev_aes_test_vectors.h"
+#include "test_cryptodev_aead_test_vectors.h"
+#include "test_cryptodev_des_test_vectors.h"
+#include "test_cryptodev_hash_test_vectors.h"
+
+#define CPU_CRYPTO_TEST_MAX_AAD_LENGTH	16
+#define MAX_NB_SEGMENTS			4
+#define CACHE_WARM_ITER			2048
+#define MAX_SEG_SIZE			2048
+
+#define TOP_ENC		BLOCKCIPHER_TEST_OP_ENCRYPT
+#define TOP_DEC		BLOCKCIPHER_TEST_OP_DECRYPT
+#define TOP_AUTH_GEN	BLOCKCIPHER_TEST_OP_AUTH_GEN
+#define TOP_AUTH_VER	BLOCKCIPHER_TEST_OP_AUTH_VERIFY
+#define TOP_ENC_AUTH	BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN
+#define TOP_AUTH_DEC	BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC
+
+enum buffer_assemble_option {
+	SGL_MAX_SEG,
+	SGL_ONE_SEG,
+};
+
+struct cpu_crypto_test_case {
+	struct {
+		uint8_t seg[MAX_SEG_SIZE];
+		uint32_t seg_len;
+	} seg_buf[MAX_NB_SEGMENTS];
+	uint8_t iv[MAXIMUM_IV_LENGTH * 2];
+	uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH * 4];
+	uint8_t digest[DIGEST_BYTE_LENGTH_SHA512];
+} __rte_cache_aligned;
+
+struct cpu_crypto_test_obj {
+	struct rte_crypto_vec vec[MAX_NUM_OPS_INFLIGHT][MAX_NB_SEGMENTS];
+	struct rte_crypto_sgl sec_buf[MAX_NUM_OPS_INFLIGHT];
+	void *iv[MAX_NUM_OPS_INFLIGHT];
+	void *digest[MAX_NUM_OPS_INFLIGHT];
+	void *aad[MAX_NUM_OPS_INFLIGHT];
+	int status[MAX_NUM_OPS_INFLIGHT];
+};
+
+struct cpu_crypto_testsuite_params {
+	struct rte_mempool *buf_pool;
+	struct rte_mempool *session_priv_mpool;
+};
+
+struct cpu_crypto_unittest_params {
+	struct rte_cryptodev_sym_session *sess;
+	void *test_datas[MAX_NUM_OPS_INFLIGHT];
+	struct cpu_crypto_test_obj test_obj;
+	uint32_t nb_bufs;
+};
+
+static struct cpu_crypto_testsuite_params testsuite_params;
+static struct cpu_crypto_unittest_params unittest_params;
+
+static int gbl_driver_id;
+
+static uint32_t valid_dev;
+
+static int
+testsuite_setup(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	uint32_t i, nb_devs;
+	size_t sess_sz;
+	struct rte_cryptodev_info info;
+
+	const char * const pool_name = "CPU_CRYPTO_MBUFPOOL";
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->buf_pool = rte_mempool_lookup(pool_name);
+	if (ts_params->buf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->buf_pool = rte_pktmbuf_pool_create(pool_name,
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0,
+				sizeof(struct cpu_crypto_test_case),
+				rte_socket_id());
+		if (ts_params->buf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create %s\n", pool_name);
+			return TEST_FAILED;
+		}
+	}
+
+	/* Create an AESNI GCM device if required */
+	if (gbl_driver_id == rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+		nb_devs = rte_cryptodev_device_count_by_driver(
+				rte_cryptodev_driver_id_get(
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
+		if (nb_devs < 1) {
+			TEST_ASSERT_SUCCESS(rte_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL),
+				"Failed to create instance of"
+				" pmd : %s",
+				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	/* get first valid crypto dev */
+	valid_dev = UINT32_MAX;
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.driver_id == gbl_driver_id &&
+				(info.feature_flags &
+				RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) != 0) {
+			valid_dev = i;
+			break;
+		}
+	}
+
+	if (valid_dev == UINT32_MAX) {
+		RTE_LOG(ERR, USER1, "No crypto devices that support CPU mode\n");
+		return TEST_FAILED;
+	}
+
+	RTE_LOG(INFO, USER1, "Crypto device %u selected for CPU mode test\n",
+		valid_dev);
+
+	/* get session size */
+	sess_sz = rte_cryptodev_sym_get_private_session_size(valid_dev);
+
+	ts_params->session_priv_mpool = rte_cryptodev_sym_session_pool_create(
+		"CRYPTO_SESPOOL", 2, sess_sz, 0, 0, SOCKET_ID_ANY);
+	if (!ts_params->session_priv_mpool) {
+		RTE_LOG(ERR, USER1, "Not enough memory\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->buf_pool)
+		rte_mempool_free(ts_params->buf_pool);
+
+	if (ts_params->session_priv_mpool)
+		rte_mempool_free(ts_params->session_priv_mpool);
+}
+
+static int
+ut_setup(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	ut_params->sess = rte_cryptodev_sym_session_create(
+		ts_params->session_priv_mpool);
+
+	return ut_params->sess ? TEST_SUCCESS : TEST_FAILED;
+}
+
+static void
+ut_teardown(void)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+
+	if (ut_params->sess) {
+		rte_cryptodev_sym_session_clear(valid_dev, ut_params->sess);
+		rte_cryptodev_sym_session_free(ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	if (ut_params->nb_bufs) {
+		uint32_t i;
+
+		for (i = 0; i < ut_params->nb_bufs; i++)
+			memset(ut_params->test_datas[i], 0,
+				sizeof(struct cpu_crypto_test_case));
+
+		rte_mempool_put_bulk(ts_params->buf_pool, ut_params->test_datas,
+				ut_params->nb_bufs);
+	}
+}
+
+static int
+allocate_buf(uint32_t n)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	int ret;
+
+	ret = rte_mempool_get_bulk(ts_params->buf_pool, ut_params->test_datas,
+			n);
+
+	if (ret == 0)
+		ut_params->nb_bufs = n;
+
+	return ret;
+}
+
+static int
+check_status(struct cpu_crypto_test_obj *obj, uint32_t n)
+{
+	uint32_t i;
+
+	for (i = 0; i < n; i++)
+		if (obj->status[i] != 0)
+			return -1;
+
+	return 0;
+}
+
+static inline int
+init_aead_session(struct rte_cryptodev_sym_session *ses,
+		struct rte_mempool *sess_mp,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *test_data,
+		uint32_t is_unit_test)
+{
+	struct rte_crypto_sym_xform xform = {0};
+
+	if (is_unit_test)
+		debug_hexdump(stdout, "key:", test_data->key.data,
+				test_data->key.len);
+
+	/* Setup AEAD Parameters */
+	xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
+	xform.next = NULL;
+	xform.aead.algo = test_data->algo;
+	xform.aead.op = op;
+	xform.aead.key.data = test_data->key.data;
+	xform.aead.key.length = test_data->key.len;
+	xform.aead.iv.offset = 0;
+	xform.aead.iv.length = test_data->iv.len;
+	xform.aead.digest_length = test_data->auth_tag.len;
+	xform.aead.aad_length = test_data->aad.len;
+
+	return rte_cryptodev_sym_session_init(valid_dev, ses, &xform,
+		sess_mp);
+}
+
+static inline int
+init_gmac_session(struct rte_cryptodev_sym_session *ses,
+		struct rte_mempool *sess_mp,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *test_data,
+		uint32_t is_unit_test)
+{
+	struct rte_crypto_sym_xform xform = {0};
+
+	if (is_unit_test)
+		debug_hexdump(stdout, "key:", test_data->key.data,
+				test_data->key.len);
+
+	/* Setup GMAC authentication parameters */
+	xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	xform.next = NULL;
+	xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
+	xform.auth.op = op;
+	xform.auth.digest_length = test_data->gmac_tag.len;
+	xform.auth.key.length = test_data->key.len;
+	xform.auth.key.data = test_data->key.data;
+	xform.auth.iv.length = test_data->iv.len;
+	xform.auth.iv.offset = 0;
+
+	return rte_cryptodev_sym_session_init(valid_dev, ses, &xform, sess_mp);
+}
+
+
+static inline int
+prepare_sgl(struct cpu_crypto_test_case *data,
+	struct cpu_crypto_test_obj *obj,
+	uint32_t obj_idx,
+	enum buffer_assemble_option sgl_option,
+	const uint8_t *src,
+	uint32_t src_len)
+{
+	uint32_t seg_idx;
+	uint32_t bytes_per_seg;
+	uint32_t left;
+
+	switch (sgl_option) {
+	case SGL_MAX_SEG:
+		seg_idx = 0;
+		bytes_per_seg = src_len / MAX_NB_SEGMENTS + 1;
+		left = src_len;
+
+		if (bytes_per_seg > MAX_SEG_SIZE)
+			return -ENOMEM;
+
+		while (left) {
+			uint32_t cp_len = RTE_MIN(left, bytes_per_seg);
+			memcpy(data->seg_buf[seg_idx].seg, src, cp_len);
+			data->seg_buf[seg_idx].seg_len = cp_len;
+			obj->vec[obj_idx][seg_idx].base =
+					(void *)data->seg_buf[seg_idx].seg;
+			obj->vec[obj_idx][seg_idx].len = cp_len;
+			src += cp_len;
+			left -= cp_len;
+			seg_idx++;
+		}
+
+		if (left)
+			return -ENOMEM;
+
+		obj->sec_buf[obj_idx].vec = obj->vec[obj_idx];
+		obj->sec_buf[obj_idx].num = seg_idx;
+
+		break;
+	case SGL_ONE_SEG:
+		memcpy(data->seg_buf[0].seg, src, src_len);
+		data->seg_buf[0].seg_len = src_len;
+		obj->vec[obj_idx][0].base =
+				(void *)data->seg_buf[0].seg;
+		obj->vec[obj_idx][0].len = src_len;
+
+		obj->sec_buf[obj_idx].vec = obj->vec[obj_idx];
+		obj->sec_buf[obj_idx].num = 1;
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
+
+static inline int
+assemble_aead_buf(struct cpu_crypto_test_case *data,
+		struct cpu_crypto_test_obj *obj,
+		uint32_t obj_idx,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *test_data,
+		enum buffer_assemble_option sgl_option,
+		uint32_t is_unit_test)
+{
+	const uint8_t *src;
+	uint32_t src_len;
+	int ret;
+
+	if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		if (is_unit_test)
+			debug_hexdump(stdout, "plaintext:", src, src_len);
+	} else {
+		src = test_data->ciphertext.data;
+		src_len = test_data->ciphertext.len;
+		memcpy(data->digest, test_data->auth_tag.data,
+				test_data->auth_tag.len);
+		if (is_unit_test) {
+			debug_hexdump(stdout, "ciphertext:", src, src_len);
+			debug_hexdump(stdout, "digest:",
+					test_data->auth_tag.data,
+					test_data->auth_tag.len);
+		}
+	}
+
+	if (src_len > MAX_SEG_SIZE)
+		return -ENOMEM;
+
+	ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len);
+	if (ret < 0)
+		return ret;
+
+	memcpy(data->iv, test_data->iv.data, test_data->iv.len);
+	memcpy(data->aad, test_data->aad.data, test_data->aad.len);
+
+	if (is_unit_test) {
+		debug_hexdump(stdout, "iv:", test_data->iv.data,
+				test_data->iv.len);
+		debug_hexdump(stdout, "aad:", test_data->aad.data,
+				test_data->aad.len);
+	}
+
+	obj->iv[obj_idx] = (void *)data->iv;
+	obj->digest[obj_idx] = (void *)data->digest;
+	obj->aad[obj_idx] = (void *)data->aad;
+
+	return 0;
+}
+
+static inline int
+assemble_gmac_buf(struct cpu_crypto_test_case *data,
+		struct cpu_crypto_test_obj *obj,
+		uint32_t obj_idx,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *test_data,
+		enum buffer_assemble_option sgl_option,
+		uint32_t is_unit_test)
+{
+	const uint8_t *src;
+	uint32_t src_len;
+	int ret;
+
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		if (is_unit_test)
+			debug_hexdump(stdout, "plaintext:", src, src_len);
+	} else {
+		src = test_data->plaintext.data;
+		src_len = test_data->plaintext.len;
+		memcpy(data->digest, test_data->gmac_tag.data,
+			test_data->gmac_tag.len);
+		if (is_unit_test)
+			debug_hexdump(stdout, "gmac_tag:",
+				test_data->gmac_tag.data,
+				test_data->gmac_tag.len);
+	}
+
+	if (src_len > MAX_SEG_SIZE)
+		return -ENOMEM;
+
+	ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len);
+	if (ret < 0)
+		return ret;
+
+	memcpy(data->iv, test_data->iv.data, test_data->iv.len);
+
+	if (is_unit_test) {
+		debug_hexdump(stdout, "iv:", test_data->iv.data,
+				test_data->iv.len);
+	}
+
+	obj->iv[obj_idx] = (void *)data->iv;
+	obj->digest[obj_idx] = (void *)data->digest;
+
+	return 0;
+}
+
+#define CPU_CRYPTO_ERR_EXP_CT	"expect ciphertext:"
+#define CPU_CRYPTO_ERR_GEN_CT	"gen ciphertext:"
+#define CPU_CRYPTO_ERR_EXP_PT	"expect plaintext:"
+#define CPU_CRYPTO_ERR_GEN_PT	"gen plaintext:"
+
+static int
+check_aead_result(struct cpu_crypto_test_case *tcase,
+		enum rte_crypto_aead_operation op,
+		const struct aead_test_data *tdata)
+{
+	const char *err_msg1, *err_msg2;
+	const uint8_t *src_pt_ct;
+	const uint8_t *tmp_src;
+	uint32_t src_len;
+	uint32_t left;
+	uint32_t i = 0;
+	int ret;
+
+	if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
+		err_msg1 = CPU_CRYPTO_ERR_EXP_CT;
+		err_msg2 = CPU_CRYPTO_ERR_GEN_CT;
+
+		src_pt_ct = tdata->ciphertext.data;
+		src_len = tdata->ciphertext.len;
+
+		ret = memcmp(tcase->digest, tdata->auth_tag.data,
+				tdata->auth_tag.len);
+		if (ret != 0) {
+			debug_hexdump(stdout, "expect digest:",
+					tdata->auth_tag.data,
+					tdata->auth_tag.len);
+			debug_hexdump(stdout, "gen digest:",
+					tcase->digest,
+					tdata->auth_tag.len);
+			return -1;
+		}
+	} else {
+		src_pt_ct = tdata->plaintext.data;
+		src_len = tdata->plaintext.len;
+		err_msg1 = CPU_CRYPTO_ERR_EXP_PT;
+		err_msg2 = CPU_CRYPTO_ERR_GEN_PT;
+	}
+
+	tmp_src = src_pt_ct;
+	left = src_len;
+
+	while (left && i < MAX_NB_SEGMENTS) {
+		ret = memcmp(tcase->seg_buf[i].seg, tmp_src,
+				tcase->seg_buf[i].seg_len);
+		if (ret != 0)
+			goto sgl_err_dump;
+		tmp_src += tcase->seg_buf[i].seg_len;
+		left -= tcase->seg_buf[i].seg_len;
+		i++;
+	}
+
+	if (left) {
+		ret = -ENOMEM;
+		goto sgl_err_dump;
+	}
+
+	return 0;
+
+sgl_err_dump:
+	left = src_len;
+	i = 0;
+
+	debug_hexdump(stdout, err_msg1,
+			src_pt_ct,
+			src_len);
+
+	while (left && i < MAX_NB_SEGMENTS) {
+		debug_hexdump(stdout, err_msg2,
+				tcase->seg_buf[i].seg,
+				tcase->seg_buf[i].seg_len);
+		left -= tcase->seg_buf[i].seg_len;
+		i++;
+	}
+	return ret;
+}
+
+static int
+check_gmac_result(struct cpu_crypto_test_case *tcase,
+		enum rte_crypto_auth_operation op,
+		const struct gmac_test_data *tdata)
+{
+	int ret;
+
+	if (op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+		ret = memcmp(tcase->digest, tdata->gmac_tag.data,
+				tdata->gmac_tag.len);
+		if (ret != 0) {
+			debug_hexdump(stdout, "expect digest:",
+					tdata->gmac_tag.data,
+					tdata->gmac_tag.len);
+			debug_hexdump(stdout, "gen digest:",
+					tcase->digest,
+					tdata->gmac_tag.len);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static inline int32_t
+run_test(struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+		struct cpu_crypto_test_obj *obj, uint32_t n)
+{
+	struct rte_crypto_sym_vec symvec;
+
+	symvec.sgl = obj->sec_buf;
+	symvec.iv = obj->iv;
+	symvec.aad = obj->aad;
+	symvec.digest = obj->digest;
+	symvec.status = obj->status;
+	symvec.num = n;
+
+	return rte_cryptodev_sym_cpu_crypto_process(valid_dev, sess, ofs,
+		&symvec);
+}
+
+static int
+cpu_crypto_test_aead(const struct aead_test_data *tdata,
+		enum rte_crypto_aead_operation dir,
+		enum buffer_assemble_option sgl_option)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	int ret;
+
+	ret = init_aead_session(ut_params->sess, ts_params->session_priv_mpool,
+		dir, tdata, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = allocate_buf(1);
+	if (ret)
+		return ret;
+
+	tcase = ut_params->test_datas[0];
+	ret = assemble_aead_buf(tcase, obj, 0, dir, tdata, sgl_option, 1);
+	if (ret < 0) {
+		printf("Test is not supported by the driver\n");
+		return ret;
+	}
+
+	/* prepare offset descriptor */
+	ofs.raw = 0;
+
+	run_test(ut_params->sess, ofs, obj, 1);
+
+	ret = check_status(obj, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = check_aead_result(tcase, dir, tdata);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int
+cpu_crypto_test_gmac(const struct gmac_test_data *tdata,
+		enum rte_crypto_auth_operation dir,
+		enum buffer_assemble_option sgl_option)
+{
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	int ret;
+
+	ret = init_gmac_session(ut_params->sess, ts_params->session_priv_mpool,
+		dir, tdata, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = allocate_buf(1);
+	if (ret)
+		return ret;
+
+	tcase = ut_params->test_datas[0];
+	ret = assemble_gmac_buf(tcase, obj, 0, dir, tdata, sgl_option, 1);
+	if (ret < 0) {
+		printf("Test is not supported by the driver\n");
+		return ret;
+	}
+
+	/* prepare offset descriptor */
+	ofs.raw = 0;
+
+	run_test(ut_params->sess, ofs, obj, 1);
+
+	ret = check_status(obj, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = check_gmac_result(tcase, dir, tdata);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+#define TEST_EXPAND(t, o)						\
+static int								\
+cpu_crypto_aead_enc_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_ENCRYPT, o);	\
+}									\
+static int								\
+cpu_crypto_aead_dec_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_DECRYPT, o);	\
+}									\
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)						\
+static int								\
+cpu_crypto_gmac_gen_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_GENERATE, o);\
+}									\
+static int								\
+cpu_crypto_gmac_ver_test_##t##_##o(void)				\
+{									\
+	return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_VERIFY, o);	\
+}
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+static struct unit_test_suite cpu_crypto_aesgcm_testsuite  = {
+	.suite_name = "CPU Crypto AESNI-GCM Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_enc_test_##t##_##o),
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_dec_test_##t##_##o),
+
+#include "cpu_crypto_all_gcm_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_gen_test_##t##_##o),
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+#define TEST_EXPAND(t, o)	\
+	TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_ver_test_##t##_##o),
+
+#include "cpu_crypto_all_gmac_unit_test_cases.h"
+#undef TEST_EXPAND
+
+	TEST_CASES_END() /**< NULL terminate unit test array */
+	},
+};
+
+static int
+test_cpu_crypto_aesni_gcm(void)
+{
+	gbl_driver_id =	rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	return unit_test_suite_runner(&cpu_crypto_aesgcm_testsuite);
+}
+
+
+static inline void
+gen_rand(uint8_t *data, uint32_t len)
+{
+	uint32_t i;
+
+	for (i = 0; i < len; i++)
+		data[i] = (uint8_t)rte_rand();
+}
+
+static inline void
+switch_aead_enc_to_dec(struct aead_test_data *tdata,
+		struct cpu_crypto_test_case *tcase,
+		enum buffer_assemble_option sgl_option)
+{
+	uint32_t i;
+	uint8_t *dst = tdata->ciphertext.data;
+
+	switch (sgl_option) {
+	case SGL_ONE_SEG:
+		memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len);
+		tdata->ciphertext.len = tcase->seg_buf[0].seg_len;
+		break;
+	case SGL_MAX_SEG:
+		tdata->ciphertext.len = 0;
+		for (i = 0; i < MAX_NB_SEGMENTS; i++) {
+			memcpy(dst, tcase->seg_buf[i].seg,
+					tcase->seg_buf[i].seg_len);
+			dst += tcase->seg_buf[i].seg_len;
+			tdata->ciphertext.len += tcase->seg_buf[i].seg_len;
+		}
+		break;
+	}
+
+	memcpy(tdata->auth_tag.data, tcase->digest, tdata->auth_tag.len);
+}
+
+static int
+cpu_crypto_test_aead_perf(enum buffer_assemble_option sgl_option,
+		uint32_t key_sz)
+{
+	struct aead_test_data tdata = {0};
+	struct cpu_crypto_testsuite_params *ts_params = &testsuite_params;
+	struct cpu_crypto_unittest_params *ut_params = &unittest_params;
+	struct cpu_crypto_test_obj *obj = &ut_params->test_obj;
+	struct cpu_crypto_test_case *tcase;
+	union rte_crypto_sym_ofs ofs;
+	uint64_t hz = rte_get_tsc_hz(), time_start, time_now;
+	double rate, cycles_per_buf;
+	uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048};
+	uint32_t i, j;
+	uint8_t aad[16];
+	int ret;
+
+	tdata.key.len = key_sz;
+	gen_rand(tdata.key.data, tdata.key.len);
+	tdata.algo = RTE_CRYPTO_AEAD_AES_GCM;
+	tdata.aad.data = aad;
+	ofs.raw = 0;
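+	/* zero offsets: cipher and auth cover the whole buffer */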
+
+	if (!ut_params->sess)
+		return -1;
+
+	ret = init_aead_session(ut_params->sess, ts_params->session_priv_mpool,
+		RTE_CRYPTO_AEAD_OP_DECRYPT, &tdata, 0);
+	if (ret < 0)
+		return ret;
+
+	ret = allocate_buf(MAX_NUM_OPS_INFLIGHT);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < RTE_DIM(test_data_szs); i++) {
+		for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) {
+			tdata.plaintext.len = test_data_szs[i];
+			gen_rand(tdata.plaintext.data,
+					tdata.plaintext.len);
+
+			tdata.aad.len = 12;
+			gen_rand(tdata.aad.data, tdata.aad.len);
+
+			tdata.auth_tag.len = 16;
+
+			tdata.iv.len = 16;
+			gen_rand(tdata.iv.data, tdata.iv.len);
+
+			tcase = ut_params->test_datas[j];
+			ret = assemble_aead_buf(tcase, obj, j,
+					RTE_CRYPTO_AEAD_OP_ENCRYPT,
+					&tdata, sgl_option, 0);
+			if (ret < 0) {
+				printf("Test is not supported by the driver\n");
+				return ret;
+			}
+		}
+
+		/* warm up cache */
+		for (j = 0; j < CACHE_WARM_ITER; j++)
+			run_test(ut_params->sess, ofs, obj,
+				MAX_NUM_OPS_INFLIGHT);
+
+		time_start = rte_rdtsc();
+
+		run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT);
+
+		time_now = rte_rdtsc();
+
+		rate = time_now - time_start;
+		cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT;
+
+		rate = ((hz / cycles_per_buf)) / 1000000;
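+		/* hz / cycles_per_buf = buffers per second; scaled to Mpps */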
+
+		printf("AES-GCM-%u(%4uB) Enc %03.3fMpps (%03.3fGbps) ",
+				key_sz * 8, test_data_szs[i], rate,
+				rate  * test_data_szs[i] * 8 / 1000);
+		printf("cycles per buf %03.3f per byte %03.3f\n",
+				cycles_per_buf,
+				cycles_per_buf / test_data_szs[i]);
+
+		for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) {
+			tcase = ut_params->test_datas[j];
+
+			switch_aead_enc_to_dec(&tdata, tcase, sgl_option);
+			ret = assemble_aead_buf(tcase, obj, j,
+					RTE_CRYPTO_AEAD_OP_DECRYPT,
+					&tdata, sgl_option, 0);
+			if (ret < 0) {
+				printf("Test is not supported by the driver\n");
+				return ret;
+			}
+		}
+
+		time_start = rte_get_timer_cycles();
+
+		run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT);
+
+		time_now = rte_get_timer_cycles();
+
+		rate = time_now - time_start;
+		cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT;
+
+		rate = ((hz / cycles_per_buf)) / 1000000;
+
+		printf("AES-GCM-%u(%4uB) Dec %03.3fMpps (%03.3fGbps) ",
+				key_sz * 8, test_data_szs[i], rate,
+				rate  * test_data_szs[i] * 8 / 1000);
+		printf("cycles per buf %03.3f per byte %03.3f\n",
+				cycles_per_buf,
+				cycles_per_buf / test_data_szs[i]);
+	}
+
+	return 0;
+}
+
+/* test-prefix/key-size/sgl-type */
+#define TEST_EXPAND(a, b, c)						\
+static int								\
+cpu_crypto_gcm_perf##a##_##c(void)					\
+{									\
+	return cpu_crypto_test_aead_perf(c, b);				\
+}									\
+
+#include "cpu_crypto_all_gcm_perf_test_cases.h"
+#undef TEST_EXPAND
+
+static struct unit_test_suite security_cpu_crypto_aesgcm_perf_testsuite  = {
+		.suite_name = "Security CPU Crypto AESNI-GCM Perf Test Suite",
+		.setup = testsuite_setup,
+		.teardown = testsuite_teardown,
+		.unit_test_cases = {
+#define TEST_EXPAND(a, b, c)						\
+		TEST_CASE_ST(ut_setup, ut_teardown,			\
+				cpu_crypto_gcm_perf##a##_##c),		\
+
+#include "cpu_crypto_all_gcm_perf_test_cases.h"
+#undef TEST_EXPAND
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+		},
+};
+
+static int
+test_cpu_crypto_aesni_gcm_perf(void)
+{
+	gbl_driver_id =	rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	return unit_test_suite_runner(
+			&security_cpu_crypto_aesgcm_perf_testsuite);
+}
+
+REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_autotest,
+		test_cpu_crypto_aesni_gcm);
+
+REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_perftest,
+		test_cpu_crypto_aesni_gcm_perf);
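
Once built into the test application, these suites can be run from its
command line as 'cpu_crypto_aesni_gcm_autotest' and
'cpu_crypto_aesni_gcm_perftest'.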
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (2 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-31 14:26       ` Akhil Goyal
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
                       ` (4 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Introduce a CPU crypto action type, allowing differentiation between
regular asynchronous 'none security' sessions and synchronous,
CPU crypto accelerated sessions.
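
A minimal sketch of how an application selects the new action type
(based on how later patches in this series consume it; field names from
librte_ipsec, error handling omitted):

	struct rte_ipsec_session ss = { 0 };

	ss.type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
	ss.crypto.ses = crypto_ses;	/* regular cryptodev symmetric session */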

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 lib/librte_security/rte_security.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 546779df2..c8b2dd5ed 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2017,2019 NXP
- * Copyright(c) 2017 Intel Corporation.
+ * Copyright(c) 2017-2020 Intel Corporation.
  */
 
 #ifndef _RTE_SECURITY_H_
@@ -307,10 +307,14 @@ enum rte_security_session_action_type {
 	/**< All security protocol processing is performed inline during
 	 * transmission
 	 */
-	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
 	/**< All security protocol processing including crypto is performed
 	 * on a lookaside accelerator
 	 */
+	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+	/**< Crypto processing for the security protocol is performed
+	 * synchronously by the CPU
+	 */
 };
 
 /** Security session protocol definition */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (3 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-28 16:37       ` Ananyev, Konstantin
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
                       ` (3 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Update the library to handle the CPU crypto security mode, which
utilizes cryptodev's synchronous, CPU-accelerated crypto operations.
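
With this mode the datapath becomes a synchronous two-step sequence on
the same lcore (a sketch mirroring the ipsec-secgw usage later in this
series; error handling omitted):

	/* ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO */
	k = rte_ipsec_pkt_cpu_prepare(ss, mb, num);	/* crypto happens here */
	k = rte_ipsec_pkt_process(ss, mb, k);	/* post-crypto processing */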

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 lib/librte_ipsec/esp_inb.c   | 156 ++++++++++++++++++++++++++++++-----
 lib/librte_ipsec/esp_outb.c  | 136 +++++++++++++++++++++++++++---
 lib/librte_ipsec/misc.h      | 120 ++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec.h |  20 ++++-
 lib/librte_ipsec/sa.c        | 114 ++++++++++++++++++++-----
 lib/librte_ipsec/sa.h        |  19 ++++-
 lib/librte_ipsec/ses.c       |   5 +-
 7 files changed, 513 insertions(+), 57 deletions(-)

diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 5c653dd39..7b8ab81f6 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop,
 	}
 }
 
+static inline uint32_t
+inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *pofs, uint32_t plen, void *iv)
+{
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint64_t *ivp;
+	uint32_t clen;
+
+	ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+		*pofs + sizeof(struct rte_esp_hdr));
+	clen = 0;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = (struct aead_gcm_iv *)iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CBC:
+	case ALGO_TYPE_3DES_CBC:
+		copy_iv(iv, ivp, sa->iv_len);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = (struct aesctr_cnt_blk *)iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen - sa->ctp.auth.length;
+	return clen;
+}
+
 /*
  * Helper function for prepare() to deal with situation when
  * ICV is spread by two segments. Tries to move ICV completely into the
@@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	}
 }
 
-/*
- * setup/update packet data and metadata for ESP inbound tunnel case.
- */
-static inline int32_t
-inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
-	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+static inline int
+inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc)
 {
 	int32_t rc;
 	uint64_t sqn;
-	uint32_t clen, icv_len, icv_ofs, plen;
-	struct rte_mbuf *ml;
 	struct rte_esp_hdr *esph;
 
 	esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen);
@@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	sqn = rte_be_to_cpu_32(esph->seq);
 	if (IS_ESN(sa))
 		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+	*sqc = rte_cpu_to_be_64(sqn);
 
+	/* check IPsec window */
 	rc = esn_inb_check_sqn(rsn, sa, sqn);
-	if (rc != 0)
-		return rc;
 
-	sqn = rte_cpu_to_be_64(sqn);
+	return rc;
+}
+
+/* prepare packet for upcoming processing */
+static inline int32_t
+inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	uint32_t clen, icv_len, icv_ofs, plen;
+	struct rte_mbuf *ml;
 
 	/* start packet manipulation */
 	plen = mb->pkt_len;
@@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 
 	icv_ofs += sa->sqh_len;
 
-	/* we have to allocate space for AAD somewhere,
+	/*
+	 * we have to allocate space for AAD somewhere,
 	 * right now - just use free trailing space at the last segment.
 	 * Would probably be more convenient to reserve space for AAD
 	 * inside rte_crypto_op itself
@@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	mb->pkt_len += sa->sqh_len;
 	ml->data_len += sa->sqh_len;
 
-	inb_pkt_xprepare(sa, sqn, icv);
 	return plen;
 }
 
+static inline int32_t
+inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+{
+	int rc;
+	rte_be64_t sqn;
+
+	rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn);
+	if (rc != 0)
+		return rc;
+
+	rc = inb_prepare(sa, mb, hlen, icv);
+	if (rc < 0)
+		return rc;
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return rc;
+}
+
 /*
  * setup/update packets and crypto ops for ESP inbound case.
  */
@@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc);
 			k++;
-		} else
+		} else {
 			dr[i - k] = i;
+			rte_errno = -rc;
+		}
 	}
 
 	rsn_release(sa, rsn);
 
 	/* copy not prepared mbufs beyond good ones */
-	if (k != num && k != 0) {
+	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
-		rte_errno = EBADMSG;
-	}
 
 	return k;
 }
@@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return k;
 }
 
-
 /*
  * *process* function for tunnel packets
  */
@@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
 
-	/* update SQN and replay winow */
+	/* update SQN and replay window */
 	n = esp_inb_rsn_update(sa, sqn, dr, k);
 
 	/* handle mbufs with wrong SQN */
@@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return n;
 }
 
+/*
+ * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO
+ * (synchronous mode).
+ */
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	/* grab rsn lock */
+	rsn = rsn_acquire(sa);
+
+	/* do preparation for all packets */
+	for (i = 0, k = 0; i != num; i++) {
+
+		/* calculate ESP header offset */
+		l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len;
+
+		/* prepare ESP packet for processing */
+		rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv);
+		if (rc >= 0) {
+			/* get encrypted data offset and length */
+			clen[k] = inb_cpu_crypto_prepare(sa, mb[i],
+				l4ofs + k, rc, ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* release rsn lock */
+	rsn_release(sa, rsn);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
 /*
  * process group of ESP inbound tunnel packets.
  */
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index e983b25a3..b6d9cbe98 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -15,6 +15,9 @@
 #include "misc.h"
 #include "pad.h"
 
+typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv, uint8_t sqh_len);
 
 /*
  * helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = sa->proto;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 static inline int32_t
 outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	uint32_t l2len, uint32_t l3len, union sym_op_data *icv,
-	uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len)
 {
 	uint8_t np;
 	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	struct rte_esp_tail *espt;
 	char *ph, *pt;
 	uint64_t *iv;
+	uint32_t l2len, l3len;
+
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
 
 	uhlen = l2len + l3len;
 	plen = mb->pkt_len - uhlen;
@@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = np;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv,
-					  sa->sqh_len);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
+				  sa->sqh_len);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return k;
 }
 
+
+static inline uint32_t
+outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
+	uint32_t plen, void *iv)
+{
+	uint64_t *ivp = iv;
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint32_t clen;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen + sa->ctp.auth.length;
+	return clen;
+}
+
+static uint16_t
+cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num,
+		esp_outb_prepare_t prepare, uint32_t cofs_mask)
+{
+	int32_t rc;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	uint32_t i, k, n;
+	uint32_t l2, l3;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	for (i = 0, k = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		/* calculate ESP header offset */
+		l4ofs[k] = (l2 + l3) & cofs_mask;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(ivbuf[k], sqc);
+
+		/* try to update the packet itself */
+		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+
+		/* success, proceed with preparations */
+		if (rc >= 0) {
+
+			outb_pkt_xprepare(sa, sqc, &icv);
+
+			/* get encrypted data offset and length */
+			clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc,
+				ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		move_bad_mbufs(mb, dr, n, n - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0);
+}
+
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare,
+		UINT32_MAX);
+}
+
 /*
  * process outbound packets for SA with ESN support,
  * for algorithms that require SQN.hibits to be implictly included
@@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n, l2, l3;
+	uint32_t i, k, n;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
@@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	k = 0;
 	for (i = 0; i != n; i++) {
 
-		l2 = mb[i]->l2_len;
-		l3 = mb[i]->l3_len;
-
 		sqc = rte_cpu_to_be_64(sqn + i);
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
-				l2, l3, &icv, 0);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
 
 		k += (rc >= 0);
 
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index fe4641bfc..fc4b3dc69 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _MISC_H_
@@ -105,4 +105,122 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 	mb->pkt_len -= len;
 }
 
+static inline int
+mbuf_to_cryptovec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len,
+	struct rte_crypto_vec vec[], uint32_t num)
+{
+	uint32_t i;
+	struct rte_mbuf *nseg;
+	uint32_t left;
+	uint32_t seglen;
+
+	/* assuming that requested data starts in the first segment */
+	RTE_ASSERT(mb->data_len > ofs);
+
+	if (mb->nb_segs > num)
+		return -mb->nb_segs;
+
+	vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs);
+
+	/* whole data lies in the first segment */
+	seglen = mb->data_len - ofs;
+	if (data_len <= seglen) {
+		vec[0].len = data_len;
+		return 1;
+	}
+
+	/* data spread across segments */
+	vec[0].len = seglen;
+	left = data_len - seglen;
+	for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
+		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
+
+		seglen = nseg->data_len;
+		if (left <= seglen) {
+			/* whole requested data is completed */
+			vec[i].len = left;
+			left = 0;
+			break;
+		}
+
+		/* use whole segment */
+		vec[i].len = seglen;
+		left -= seglen;
+	}
+
+	RTE_ASSERT(left == 0);
+	return i + 1;
+}
+
+/*
+ * process packets using sync crypto engine
+ */
+static inline void
+cpu_crypto_bulk(const struct rte_ipsec_session *ss,
+	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
+	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	uint32_t clen[], uint32_t num)
+{
+	uint32_t i, j, n;
+	int32_t vcnt, vofs;
+	int32_t st[num];
+	struct rte_crypto_sgl vecpkt[num];
+	struct rte_crypto_vec vec[UINT8_MAX];
+	struct rte_crypto_sym_vec symvec;
+
+	const uint32_t vnum = RTE_DIM(vec);
+
+	j = 0, n = 0;
+	vofs = 0;
+	for (i = 0; i != num; i++) {
+
+		vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], &vec[vofs],
+			vnum - vofs);
+
+		/* not enough space in vec[] to hold all segments */
+		if (vcnt < 0) {
+			/* fill the request structure */
+			symvec.sgl = &vecpkt[j];
+			symvec.iv = &iv[j];
+			symvec.aad = &aad[j];
+			symvec.digest = &dgst[j];
+			symvec.status = &st[j];
+			symvec.num = i - j;
+
+			/* flush vec array and try again */
+			n += rte_cryptodev_sym_cpu_crypto_process(
+				ss->crypto.dev_id, ss->crypto.ses, ofs,
+				&symvec);
+			vofs = 0;
+			vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], vec,
+				vnum);
+			RTE_ASSERT(vcnt > 0);
+			j = i;
+		}
+
+		vecpkt[i].vec = &vec[vofs];
+		vecpkt[i].num = vcnt;
+		vofs += vcnt;
+	}
+
+	/* fill the request structure */
+	symvec.sgl = &vecpkt[j];
+	symvec.iv = &iv[j];
+	symvec.aad = &aad[j];
+	symvec.digest = &dgst[j];
+	symvec.status = &st[j];
+	symvec.num = i - j;
+
+	n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id,
+		ss->crypto.ses, ofs, &symvec);
+
+	j = num - n;
+	for (i = 0; j != 0 && i != num; i++) {
+		if (st[i] != 0) {
+			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			j--;
+		}
+	}
+}
+
 #endif /* _MISC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index f3b1f936b..6666cf761 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _RTE_IPSEC_H_
@@ -33,10 +33,15 @@ struct rte_ipsec_session;
  *   (see rte_ipsec_pkt_process for more details).
  */
 struct rte_ipsec_sa_pkt_func {
-	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+	union {
+		uint16_t (*async)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				struct rte_crypto_op *cop[],
 				uint16_t num);
+		uint16_t (*sync)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+	} prepare;
 	uint16_t (*process)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				uint16_t num);
@@ -62,6 +67,7 @@ struct rte_ipsec_session {
 	union {
 		struct {
 			struct rte_cryptodev_sym_session *ses;
+			uint8_t dev_id;
 		} crypto;
 		struct {
 			struct rte_security_session *ses;
@@ -114,7 +120,15 @@ static inline uint16_t
 rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
 {
-	return ss->pkt_func.prepare(ss, mb, cop, num);
+	return ss->pkt_func.prepare.async(ss, mb, cop, num);
+}
+
+__rte_experimental
+static inline uint16_t
+rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	return ss->pkt_func.prepare.sync(ss, mb, num);
 }
 
 /**
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6f1d92c3c..ada195cf8 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -243,10 +243,26 @@ static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
 	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = 0;
-	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
 	sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len;
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = 0;
+		sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+		sa->cofs.ofs.cipher.tail = sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
 }
 
 /*
@@ -269,13 +285,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 
 	sa->sqn.outb.raw = 1;
 
-	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = hlen;
-	sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
-		sa->iv_len + sa->sqh_len;
-
 	algo_type = sa->algo_type;
 
+	/*
+	 * Set up auth and cipher length and offset.
+	 * These params may differ as new algorithms are supported.
+	 */
+
 	switch (algo_type) {
 	case ALGO_TYPE_AES_GCM:
 	case ALGO_TYPE_AES_CTR:
@@ -286,11 +302,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 		break;
 	case ALGO_TYPE_AES_CBC:
 	case ALGO_TYPE_3DES_CBC:
-		sa->ctp.cipher.offset = sa->hdr_len +
-			sizeof(struct rte_esp_hdr);
+		sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
 		sa->ctp.cipher.length = sa->iv_len;
 		break;
 	}
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = hlen;
+		sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
+			sa->iv_len + sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
+	sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
+			(sa->ctp.cipher.offset + sa->ctp.cipher.length);
 }
 
 /*
@@ -544,9 +579,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss,
  * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
  * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
  */
-static uint16_t
-pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
-	uint16_t num)
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
 {
 	uint32_t i, k;
 	uint32_t dr[num];
@@ -588,21 +623,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
 	switch (sa->type & msk) {
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_tun_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_trs_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_outb_tun_prepare;
+		pf->prepare.async = esp_outb_tun_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_outb_trs_prepare;
+		pf->prepare.async = esp_outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_outb_tun_pkt_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_outb_trs_pkt_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
@@ -660,7 +733,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	int32_t rc;
 
 	rc = 0;
-	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { {NULL}, NULL };
 
 	switch (ss->type) {
 	case RTE_SECURITY_ACTION_TYPE_NONE:
@@ -677,9 +750,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 			pf->process = inline_proto_outb_pkt_process;
 		break;
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
-		pf->prepare = lksd_proto_prepare;
+		pf->prepare.async = lksd_proto_prepare;
 		pf->process = pkt_flag_process;
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		rc = cpu_crypto_pkt_func_select(sa, pf);
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 51e69ad05..d22451b38 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _SA_H_
@@ -88,6 +88,8 @@ struct rte_ipsec_sa {
 		union sym_op_ofslen cipher;
 		union sym_op_ofslen auth;
 	} ctp;
+	/* cpu-crypto offsets */
+	union rte_crypto_sym_ofs cofs;
 	/* tx_offload template for tunnel mbuf */
 	struct {
 		uint64_t msk;
@@ -156,6 +158,10 @@ uint16_t
 inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 /* outbound processing */
 
 uint16_t
@@ -170,6 +176,10 @@ uint16_t
 esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num);
 
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num);
+
 uint16_t
 inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
@@ -182,4 +192,11 @@ uint16_t
 inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
index 82c765a33..3d51ac498 100644
--- a/lib/librte_ipsec/ses.c
+++ b/lib/librte_ipsec/ses.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss)
 	if (ss == NULL || ss->sa == NULL)
 		return -EINVAL;
 
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+		ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		if (ss->crypto.ses == NULL)
 			return -EINVAL;
 	} else {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 6/8] examples/ipsec-secgw: cpu crypto support
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (4 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
                       ` (2 subsequent siblings)
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU-accelerated crypto. A new 'cpu-crypto' SA type has
been introduced in the configuration, allowing use of this
acceleration.

Legacy mode is not currently supported.
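
For illustration, an SA rule selecting the new type could look like the
following (a hypothetical rule; keys and algorithms are placeholders,
the 'cpu-crypto' token is what this patch adds to the parser):

	sa out 7 cipher_algo aes-128-cbc cipher_key <key> \
	auth_algo sha1-hmac auth_key <key> \
	mode transport type cpu-crypto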

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 examples/ipsec-secgw/ipsec.c         |  25 ++++-
 examples/ipsec-secgw/ipsec_process.c | 136 +++++++++++++++++----------
 examples/ipsec-secgw/sa.c            |  30 ++++--
 3 files changed, 131 insertions(+), 60 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index d4b57121a..6e8120702 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 #include <sys/types.h>
 #include <netinet/in.h>
@@ -10,6 +10,7 @@
 #include <rte_crypto.h>
 #include <rte_security.h>
 #include <rte_cryptodev.h>
+#include <rte_ipsec.h>
 #include <rte_ethdev.h>
 #include <rte_mbuf.h>
 #include <rte_hash.h>
@@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			ipsec_ctx->tbl[cdev_id_qp].id,
 			ipsec_ctx->tbl[cdev_id_qp].qp);
 
-	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE &&
+		ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		struct rte_security_session_conf sess_conf = {
 			.action_type = ips->type,
 			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
@@ -126,6 +128,18 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			return -1;
 		}
 	} else {
+		if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+			struct rte_cryptodev_info info;
+			uint16_t cdev_id;
+
+			cdev_id = ipsec_ctx->tbl[cdev_id_qp].id;
+			rte_cryptodev_info_get(cdev_id, &info);
+			if (!(info.feature_flags &
+				RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO))
+				return -ENOTSUP;
+
+			ips->crypto.dev_id = cdev_id;
+		}
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
@@ -476,6 +490,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			rte_security_attach_session(&priv->cop,
 				ips->security.ses);
 			break;
+
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			RTE_LOG(ERR, IPSEC, "CPU crypto is not supported"
+					" in legacy mode.\n");
+			rte_pktmbuf_free(pkts[i]);
+			continue;
+
 		case RTE_SECURITY_ACTION_TYPE_NONE:
 
 			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 2eb5c8b34..bb2f2b82d 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 #include <sys/types.h>
 #include <netinet/in.h>
@@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx,
 	int32_t rc;
 
 	/* setup crypto section */
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+			ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		RTE_ASSERT(ss->crypto.ses == NULL);
 		rc = create_lookaside_session(ctx, sa, ss);
 		if (rc != 0)
@@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 	return k;
 }
 
+/*
+ * helper routine for inline and cpu (synchronous) processing.
+ * This is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt()
+ * and should be removed in the future.
+ */
+static inline void
+prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint32_t j;
+	struct ipsec_mbuf_metadata *priv;
+
+	for (j = 0; j != cnt; j++) {
+		priv = get_priv(mb[j]);
+		priv->sa = sa;
+	}
+}
+
+/*
+ * finish processing of packets successfully decrypted by an inline processor
+ */
+static uint32_t
+ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_process(ips, mb, cnt);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
+/*
+ * process packets synchronously
+ */
+static uint32_t
+ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt);
+	k = rte_ipsec_pkt_process(ips, mb, k);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
 /*
  * Process ipsec packets.
  * If packet belong to SA that is subject of inline-crypto,
@@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 void
 ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 {
-	uint64_t satp;
-	uint32_t i, j, k, n;
+	uint32_t i, k, n;
 	struct ipsec_sa *sa;
-	struct ipsec_mbuf_metadata *priv;
 	struct rte_ipsec_group *pg;
 	struct rte_ipsec_session *ips;
 	struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 	n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num);
 
 	for (i = 0; i != n; i++) {
+
 		pg = grp + i;
 		sa = ipsec_mask_saptr(pg->id.ptr);
 
-		ips = ipsec_get_primary_session(sa);
+		/* fall back to cryptodev with RX packets which the inline
+		 * processor was unable to process
+		 */
+		if (sa != NULL)
+			ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ?
+				ipsec_get_fallback_session(sa) :
+				ipsec_get_primary_session(sa);
 
 		/* no valid HW session for that SA, try to create one */
 		if (sa == NULL || (ips->crypto.ses == NULL &&
@@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 			k = 0;
 
 		/* process packets inline */
-		else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
-				ips->type ==
-				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) {
-
-			/* get SA type */
-			satp = rte_ipsec_sa_type(ips->sa);
-
-			/*
-			 * This is just to satisfy inbound_sa_check()
-			 * and get_hop_for_offload_pkt().
-			 * Should be removed in future.
-			 */
-			for (j = 0; j != pg->cnt; j++) {
-				priv = get_priv(pg->m[j]);
-				priv->sa = sa;
+		else {
+			switch (ips->type) {
+			/* enqueue packets to crypto dev */
+			case RTE_SECURITY_ACTION_TYPE_NONE:
+			case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+				k = ipsec_prepare_crypto_group(ctx, sa, ips,
+					pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+			case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+				k = ipsec_process_inline_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+				k = ipsec_process_cpu_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			default:
+				k = 0;
 			}
-
-			/* fallback to cryptodev with RX packets which inline
-			 * processor was unable to process
-			 */
-			if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) {
-				/* offload packets to cryptodev */
-				struct rte_ipsec_session *fallback;
-
-				fallback = ipsec_get_fallback_session(sa);
-				if (fallback->crypto.ses == NULL &&
-					fill_ipsec_session(fallback, ctx, sa)
-					!= 0)
-					k = 0;
-				else
-					k = ipsec_prepare_crypto_group(ctx, sa,
-						fallback, pg->m, pg->cnt);
-			} else {
-				/* finish processing of packets successfully
-				 * decrypted by an inline processor
-				 */
-				k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt);
-				copy_to_trf(trf, satp, pg->m, k);
-
-			}
-		/* enqueue packets to crypto dev */
-		} else {
-			k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m,
-				pg->cnt);
 		}
 
 		/* drop packets that cannot be enqueued/processed */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index c75a5a15f..e9e8d624c 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 /*
@@ -586,6 +586,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 				RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
 			else if (strcmp(tokens[ti], "no-offload") == 0)
 				ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
+			else if (strcmp(tokens[ti], "cpu-crypto") == 0)
+				ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
 			else {
 				APP_CHECK(0, status, "Invalid input \"%s\"",
 						tokens[ti]);
@@ -679,10 +681,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 	if (status->status < 0)
 		return;
 
-	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0))
 		printf("Missing portid option, falling back to non-offload\n");
 
-	if (!type_p || !portid_p) {
+	if (!type_p || (!portid_p && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) {
 		ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		rule->portid = -1;
 	}
@@ -768,15 +772,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
 		printf("lookaside-protocol-offload ");
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		printf("cpu-crypto-accelerated");
+		break;
 	}
 
 	fallback_ips = &sa->sessions[IPSEC_SESSION_FALLBACK];
 	if (fallback_ips != NULL && sa->fallback_sessions > 0) {
 		printf("inline fallback: ");
-		if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		switch (fallback_ips->type) {
+		case RTE_SECURITY_ACTION_TYPE_NONE:
 			printf("lookaside-none");
-		else
+			break;
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			printf("cpu-crypto-accelerated");
+			break;
+		default:
 			printf("invalid");
+			break;
+		}
 	}
 	printf("\n");
 }
@@ -975,7 +989,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 		}
 
-
 		switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) {
 		case IP4_TUNNEL:
 			sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -1026,7 +1039,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					return -EINVAL;
 				}
 			}
-			print_one_sa_rule(sa, inbound);
 		} else {
 			switch (sa->cipher_algo) {
 			case RTE_CRYPTO_CIPHER_NULL:
@@ -1091,9 +1103,9 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
 			sa_ctx->xf[idx].b.next = NULL;
 			sa->xforms = &sa_ctx->xf[idx].a;
-
-			print_one_sa_rule(sa, inbound);
 		}
+
+		print_one_sa_rule(sa, inbound);
 	}
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v5 7/8] examples/ipsec-secgw: cpu crypto testing
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (5 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Enable cpu-crypto mode testing by adding a dedicated environment
variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows
running the test scenarios with CPU crypto acceleration.
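
For example, one possible invocation (illustrative; note that
select_mode() below applies CRYPTO_PRIM_TYPE only when SGW_CMD_XPRM is
also set):

	CRYPTO_PRIM_TYPE='type cpu-crypto' ./linux_test4.sh trs_aescbc_sha1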

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 examples/ipsec-secgw/test/common_defs.sh      | 21 +++++++++++++++++++
 examples/ipsec-secgw/test/linux_test4.sh      | 11 +---------
 examples/ipsec-secgw/test/linux_test6.sh      | 11 +---------
 .../test/trs_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/trs_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/trs_aesctr_sha1_common_defs.sh       |  8 +++----
 .../test/tun_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/tun_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/tun_aesctr_sha1_common_defs.sh       |  8 +++----
 9 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh
index 4aac4981a..6b6ae06f3 100644
--- a/examples/ipsec-secgw/test/common_defs.sh
+++ b/examples/ipsec-secgw/test/common_defs.sh
@@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc}
 DEF_MTU_LEN=1400
 DEF_PING_LEN=1200
 
+#update operation mode based on env var values
+select_mode()
+{
+	# select sync/async mode
+	if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "${CRYPTO_PRIM_TYPE} is enabled"
+		SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}"
+	fi
+
+	#make linux generate fragmented packets
+	if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "multi-segment test is enabled"
+		SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
+		PING_LEN=5000
+		MTU_LEN=1500
+	else
+		PING_LEN=${DEF_PING_LEN}
+		MTU_LEN=${DEF_MTU_LEN}
+	fi
+}
+
 #setup mtu on local iface
 set_local_mtu()
 {
diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh
index 760451000..fb8ae1023 100644
--- a/examples/ipsec-secgw/test/linux_test4.sh
+++ b/examples/ipsec-secgw/test/linux_test4.sh
@@ -45,16 +45,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh
index 479f29be3..dbcca7936 100644
--- a/examples/ipsec-secgw/test/linux_test6.sh
+++ b/examples/ipsec-secgw/test/linux_test6.sh
@@ -46,16 +46,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
index 3c5c18afd..62118bb3f 100644
--- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,7 +48,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo 3des-cbc \
@@ -56,7 +56,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
index 9dbdd1765..7ddeb2b5a 100644
--- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
index 6aba680f9..f0178355a 100644
--- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
index 7c3226f84..d8869fad0 100644
--- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,14 +48,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
index bdf5938a0..2616926b2 100644
--- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
index 06f2ef0c6..06b561fd7 100644
--- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
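
For reference, once SGW_CFG_XPRM is populated from CRYPTO_PRIM_TYPE as above,
a transport-mode SA line from these configs expands roughly as follows (key
material elided):

    sa in 7 cipher_algo aes-128-cbc cipher_key ... \
    auth_algo sha1-hmac auth_key ... \
    mode transport type cpu-crypto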

* [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (6 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
@ 2020-01-28 14:22     ` Marcin Smoczynski
  2020-01-31 14:43       ` Akhil Goyal
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-01-28 14:22 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Update documentation with a description of cpu crypto in cryptodev,
ipsec and security libraries.

Add release notes for 20.02.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 doc/guides/cryptodevs/aesni_gcm.rst     |  7 +++++-
 doc/guides/prog_guide/cryptodev_lib.rst | 33 ++++++++++++++++++++++++-
 doc/guides/prog_guide/ipsec_lib.rst     | 10 +++++++-
 doc/guides/prog_guide/rte_security.rst  | 15 ++++++++---
 doc/guides/rel_notes/release_20_02.rst  |  8 ++++++
 5 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 151aa3060..a25b63109 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2016-2019 Intel Corporation.
+    Copyright(c) 2016-2020 Intel Corporation.
 
 AES-NI GCM Crypto Poll Mode Driver
 ==================================
@@ -9,6 +9,11 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
 support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD documentation
 to learn more about it, including installation).
 
+The AES-NI GCM PMD supports a synchronous mode of operation with the
+``rte_cryptodev_sym_cpu_crypto_process`` function call for both AES-GCM and
+GMAC; however, GMAC support is limited to one segment per operation. Please
+refer to the ``rte_crypto`` programmer's guide for more details.
+
 Features
 --------
 
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index ac1643774..b91f7c8b7 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2016-2017 Intel Corporation.
+    Copyright(c) 2016-2020 Intel Corporation.
 
 Cryptography Device Library
 ===========================
@@ -600,6 +600,37 @@ chain.
         };
     };
 
+Synchronous mode
+----------------
+
+Some cryptodevs support a synchronous mode alongside the standard asynchronous
+mode. In that case operations are performed directly when calling the
+``rte_cryptodev_sym_cpu_crypto_process`` function, instead of enqueuing and
+dequeuing operations. This mode of operation allows cryptodevs which
+utilize CPU cryptographic acceleration to achieve a significant performance
+boost compared to the standard asynchronous approach. Cryptodevs supporting
+synchronous mode have the ``RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO`` feature flag set.
+
+To perform a synchronous operation, a call to
+``rte_cryptodev_sym_cpu_crypto_process`` has to be made with a vectorized
+operation descriptor (``struct rte_crypto_sym_vec``) containing:
+
+- ``num`` - number of operations to perform,
+- pointer to an array of size ``num`` containing a scatter-gather list
+  descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
+  of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
+  an array of segment descriptors ``struct rte_crypto_vec``;
+- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointer to an array of size ``num`` where status information will be stored
+  for each operation.
+
+The function returns the number of successfully completed operations and sets
+the appropriate status for each operation in the status array provided as
+a call argument. A status different from zero must be treated as an error.
+
+For more details, e.g. how to convert an mbuf to an SGL, please refer to the
+example usage in the IPsec library implementation.
+
 Sample code
 -----------
 
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 1ce0db453..0a860eb47 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2018 Intel Corporation.
+    Copyright(c) 2018-2020 Intel Corporation.
 
 IPsec Packet Processing Library
 ===============================
@@ -81,6 +81,14 @@ In that mode the library functions perform
   - verify that crypto device operations (encryption, ICV generation)
     were completed successfully
 
+RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In that mode the library functions perform the same operations as in
+``RTE_SECURITY_ACTION_TYPE_NONE``. The only difference is that crypto operations
+are performed with the CPU crypto synchronous API.
+
+
 RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f77fb89dc..a911c676b 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -511,13 +511,20 @@ Offload.
         /**< No security actions */
         RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
         /**< Crypto processing for security protocol is processed inline
-         * during transmission */
+         * during transmission
+         */
         RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
         /**< All security protocol processing is performed inline during
-         * transmission */
-        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+         * transmission
+         */
+        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
         /**< All security protocol processing including crypto is performed
-         * on a lookaside accelerator */
+         * on a lookaside accelerator
+         */
+        RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+        /**< Crypto processing for security protocol is processed by CPU
+         * synchronously
+         */
     };
 
 The ``rte_security_session_protocol`` is defined as
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 50e2c1484..b6cf0c4d1 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -143,6 +143,14 @@ New Features
   Added a new OCTEON TX2 rawdev PMD for End Point mode of operation.
   See the :doc:`../rawdevs/octeontx2_ep` for more details on this new PMD.
 
+* **Added synchronous Crypto burst API.**
+
+  A new API is introduced in the crypto library to handle synchronous
+  cryptographic operations, allowing a performance gain for cryptodevs which
+  use CPU-based acceleration, such as Intel AES-NI. An example implementation
+  for the aesni_gcm cryptodev is provided, including unit tests. The IPsec
+  example application and the ipsec library itself were changed to allow
+  utilization of this new feature.
 
 Removed Items
 -------------
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
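
To complement the guide text above, a minimal sketch of how an application can
probe a device for synchronous-mode support before selecting a code path (the
wrapper is illustrative; error handling omitted):

    #include <rte_cryptodev.h>

    /* Return non-zero when dev_id supports the synchronous CPU crypto path. */
    static int
    supports_cpu_crypto(uint8_t dev_id)
    {
            struct rte_cryptodev_info info;

            rte_cryptodev_info_get(dev_id, &info);
            return (info.feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) != 0;
    }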

* Re: [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-01-28 16:37       ` Ananyev, Konstantin
  0 siblings, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-28 16:37 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu, De Lara Guarch, Pablo
  Cc: dev



> 
> Update library to handle CPU crypto security mode which utilizes
> cryptodev's synchronous, CPU accelerated crypto operations.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  lib/librte_ipsec/esp_inb.c   | 156 ++++++++++++++++++++++++++++++-----

Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-01-28 16:39       ` Ananyev, Konstantin
  2020-01-31 14:33       ` Akhil Goyal
  1 sibling, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-01-28 16:39 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu, De Lara Guarch, Pablo
  Cc: dev



> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Tuesday, January 28, 2020 2:22 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support
> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an
> appropriate flag is set in an xform type number.
> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---

Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> --
> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-01-31 14:26       ` Akhil Goyal
  2020-02-04 10:36         ` Akhil Goyal
  0 siblings, 1 reply; 77+ messages in thread
From: Akhil Goyal @ 2020-01-31 14:26 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

Hi Marcin/Konstantin,

> Introduce a CPU crypto action type, allowing differentiation between
> regular async 'none security' and synchronous, CPU crypto accelerated
> sessions.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  lib/librte_security/rte_security.h | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 546779df2..c8b2dd5ed 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -1,6 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright 2017,2019 NXP
> - * Copyright(c) 2017 Intel Corporation.
> + * Copyright(c) 2017-2020 Intel Corporation.
>   */
> 
>  #ifndef _RTE_SECURITY_H_
> @@ -307,10 +307,14 @@ enum rte_security_session_action_type {
>  	/**< All security protocol processing is performed inline during
>  	 * transmission
>  	 */
> -	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> +	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
>  	/**< All security protocol processing including crypto is performed
>  	 * on a lookaside accelerator
>  	 */
> +	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> +	/**< Crypto processing for security protocol is processed by CPU
> +	 * synchronously
> +	 */
I am not able to see the need for this enum.

It is used by the app and ipsec library to identify the cpu-crypto codepath.

I don't see any security action being performed for this action_type.

This enum is just like NONE which is not used beyond the application/lib.
I think this needs to be documented properly in the description of the enum.

It should be something like

Similar to ACTION_TYPE_NONE, but the crypto processing is done on the CPU
synchronously.

Also add documentation of this in the rte_security.rst in this patch only.
There should not be any separate patch for documentation.

Regards,
Akhil




^ permalink raw reply	[flat|nested] 77+ messages in thread
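
For context, an application-side dispatch on this action type could look as
follows; a hedged sketch (the surrounding SA structure is application-defined,
not part of rte_security):

    #include <rte_security.h>

    /* Application-defined SA context; 'action' mirrors the configured type. */
    struct app_sa {
            enum rte_security_session_action_type action;
            /* ... crypto/security session handles ... */
    };

    static int
    use_sync_crypto_path(const struct app_sa *sa)
    {
            /* CPU_CRYPTO behaves like NONE, except that crypto is done via
             * the synchronous cryptodev API instead of enqueue/dequeue. */
            return sa->action == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
    }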

* Re: [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-01-31 14:30       ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-01-31 14:30 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

Hi Marcin/Konstantin,
> 
> Add new API allowing to process crypto operations in a synchronous
> manner. Operations are performed on a set of SG arrays.
> 
> Sync mode is selected by setting appropriate flag in an xform
> type number. Cryptodevs which allows CPU crypto operation mode have to
> use RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.

There is no change in xform. This description needs to be updated. I think
it was not edited when you removed the xform changes.

Documentation missing in this patch.

> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_cryptodev/rte_crypto_sym.h         | 63 ++++++++++++++++++-
>  lib/librte_cryptodev/rte_cryptodev.c          | 35 ++++++++++-
>  lib/librte_cryptodev/rte_cryptodev.h          | 22 ++++++-
>  lib/librte_cryptodev/rte_cryptodev_pmd.h      | 21 ++++++-
>  .../rte_cryptodev_version.map                 |  1 +
>  5 files changed, 138 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> b/lib/librte_cryptodev/rte_crypto_sym.h
> index bc356f6ff..d6f3105fe 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2016-2019 Intel Corporation
> + * Copyright(c) 2016-2020 Intel Corporation
>   */
> 
>  #ifndef _RTE_CRYPTO_SYM_H_
> @@ -25,6 +25,67 @@ extern "C" {
>  #include <rte_mempool.h>
>  #include <rte_common.h>
> 
> +/**
> + * Crypto IO Vector (in analogy with struct iovec)
> + * Supposed be used to pass input/output data buffers for crypto data-path
> + * functions.
> + */
> +struct rte_crypto_vec {
> +	/** virtual address of the data buffer */
> +	void *base;
> +	/** IOVA of the data buffer */
> +	rte_iova_t *iova;
> +	/** length of the data buffer */
> +	uint32_t len;
> +};
> +
> +/**
> + * Crypto scatter-gather list descriptor. Consists of a pointer to an array
> + * of Crypto IO vectors with its size.
> + */
> +struct rte_crypto_sgl {
> +	/** start of an array of vectors */
> +	struct rte_crypto_vec *vec;
> +	/** size of an array of vectors */
> +	uint32_t num;
> +};
> +
> +/**
> + * Synchronous operation descriptor.
> + * Supposed to be used with CPU crypto API call.
> + */
> +struct rte_crypto_sym_vec {
> +	/** array of SGL vectors */
> +	struct rte_crypto_sgl *sgl;
> +	/** array of pointers to IV */
> +	void **iv;
> +	/** array of pointers to AAD */
> +	void **aad;
> +	/** array of pointers to digest */
> +	void **digest;
> +	/**
> +	 * array of statuses for each operation:
> +	 *  - 0 on success
> +	 *  - errno on error
> +	 */
> +	int32_t *status;
> +	/** number of operations to perform */
> +	uint32_t num;
> +};
> +
> +/**
> + * used for cpu_crypto_process_bulk() to specify head/tail offsets
> + * for auth/cipher processing.
> + */
> +union rte_crypto_sym_ofs {
> +	uint64_t raw;
> +	struct {
> +		struct {
> +			uint16_t head;
> +			uint16_t tail;
> +		} auth, cipher;
> +	} ofs;
> +};
> 
>  /** Symmetric Cipher Algorithms */
>  enum rte_crypto_cipher_algorithm {
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> b/lib/librte_cryptodev/rte_cryptodev.c
> index 5c6359b5c..889d61319 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2015-2017 Intel Corporation
> + * Copyright(c) 2015-2020 Intel Corporation
>   */
> 
>  #include <sys/types.h>
> @@ -494,6 +494,8 @@ rte_cryptodev_get_feature_name(uint64_t flag)
>  		return "RSA_PRIV_OP_KEY_QT";
>  	case RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED:
>  		return "DIGEST_ENCRYPTED";
> +	case RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO:
> +		return "SYM_CPU_CRYPTO";

Update needed in the doc/guides/cryptodevs/features/default.ini



^ permalink raw reply	[flat|nested] 77+ messages in thread
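
To make the head/tail offsets in the union above concrete, a hedged sketch for
an ESP-like layout (the 16-byte cipher offset stands in for an 8-byte ESP
header plus an 8-byte IV; real sizes depend on the SA):

    #include <rte_crypto_sym.h>

    static union rte_crypto_sym_ofs
    make_esp_ofs(void)
    {
            union rte_crypto_sym_ofs ofs;

            ofs.raw = 0;
            ofs.ofs.auth.head = 0;    /* auth starts at the ESP header */
            ofs.ofs.cipher.head = 16; /* cipher skips ESP header + IV */
            ofs.ofs.auth.tail = 0;
            ofs.ofs.cipher.tail = 0;  /* nothing excluded at the tail */
            return ofs;
    }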

* Re: [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
  2020-01-28 16:39       ` Ananyev, Konstantin
@ 2020-01-31 14:33       ` Akhil Goyal
  1 sibling, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-01-31 14:33 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev


> 
> Add support for CPU crypto mode by introducing required handler.
> Crypto mode (sync/async) is chosen during sym session create if an
> appropriate flag is set in an xform type number.

Update description of the patch here also for xform.

> 
> Authenticated encryption and decryption are supported with tag
> generation/verification.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---

...

> +
>  /**
>   * Process a completed job and return rte_mbuf which job processed
>   *
> @@ -527,7 +741,8 @@ aesni_gcm_create(const char *name,
>  			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
>  			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
>  			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
> -			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
> +			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
> +			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO;

Also add the corresponding changes in the documentation:
doc/guides/cryptodevs/features/aesni_mb.ini



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
@ 2020-01-31 14:37       ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-01-31 14:37 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev


> 
> Add unit and performance tests for CPU crypto mode currently implemented
> by AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC test vectors.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> ---

Is it not possible to add it to test_cryptodev.c?

Why do we need to register a new test suite for aesni-mb when we already have
one in test_cryptodev.c? All that code will get duplicated here.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
@ 2020-01-31 14:43       ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-01-31 14:43 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev


> 
> Update documentation with a description of cpu crypto in cryptodev,
> ipsec and security libraries.
> 
> Add release notes for 20.02.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---

This patch needs to be split and squashed into the relevant patches of this series.




^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type
  2020-01-31 14:26       ` Akhil Goyal
@ 2020-02-04 10:36         ` Akhil Goyal
  2020-02-04 10:43           ` Ananyev, Konstantin
  0 siblings, 1 reply; 77+ messages in thread
From: Akhil Goyal @ 2020-02-04 10:36 UTC (permalink / raw)
  To: Akhil Goyal, Marcin Smoczynski, konstantin.ananyev,
	roy.fan.zhang, declan.doherty, radu.nicolau,
	pablo.de.lara.guarch
  Cc: dev


> Hi Marcin/Konstantin,
> 
> > Introduce a CPU crypto action type, allowing differentiation between
> > regular async 'none security' and synchronous, CPU crypto accelerated
> > sessions.
> >
> > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> > ---
> >  lib/librte_security/rte_security.h | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> > index 546779df2..c8b2dd5ed 100644
> > --- a/lib/librte_security/rte_security.h
> > +++ b/lib/librte_security/rte_security.h
> > @@ -1,6 +1,6 @@
> >  /* SPDX-License-Identifier: BSD-3-Clause
> >   * Copyright 2017,2019 NXP
> > - * Copyright(c) 2017 Intel Corporation.
> > + * Copyright(c) 2017-2020 Intel Corporation.
> >   */
> >
> >  #ifndef _RTE_SECURITY_H_
> > @@ -307,10 +307,14 @@ enum rte_security_session_action_type {
> >  	/**< All security protocol processing is performed inline during
> >  	 * transmission
> >  	 */
> > -	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > +	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
> >  	/**< All security protocol processing including crypto is performed
> >  	 * on a lookaside accelerator
> >  	 */
> > +	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> > +	/**< Crypto processing for security protocol is processed by CPU
> > +	 * synchronously
> > +	 */
> I am not able to see the need for this enum.
> 
> It is used by the app and ipsec library to identify the cpu-crypto codepath.
> 
> I don't see any security action being performed for this action_type.
> 
> This enum is just like NONE which is not used beyond the application/lib.
> I think this needs to be documented properly in the description of the enum.
> 
> It should be something like
> 
> Similar to ACTION_TYPE_NONE, but the crypto processing is done on the CPU
> synchronously.
> 
> Also add documentation of this in the rte_security.rst in this patch only.
> There should not be any separate patch for documentation.
> 

Could you please send the updates to the patches that I requested?
I wanted to apply these today.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type
  2020-02-04 10:36         ` Akhil Goyal
@ 2020-02-04 10:43           ` Ananyev, Konstantin
  0 siblings, 0 replies; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-02-04 10:43 UTC (permalink / raw)
  To: Akhil Goyal, Smoczynski, MarcinX, Zhang, Roy Fan, Doherty,
	Declan, Nicolau, Radu, De Lara Guarch, Pablo
  Cc: dev


Hi Akhil,

> > > Introduce a CPU crypto action type, allowing differentiation between
> > > regular async 'none security' and synchronous, CPU crypto accelerated
> > > sessions.
> > >
> > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> > > ---
> > >  lib/librte_security/rte_security.h | 8 ++++++--
> > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> > > index 546779df2..c8b2dd5ed 100644
> > > --- a/lib/librte_security/rte_security.h
> > > +++ b/lib/librte_security/rte_security.h
> > > @@ -1,6 +1,6 @@
> > >  /* SPDX-License-Identifier: BSD-3-Clause
> > >   * Copyright 2017,2019 NXP
> > > - * Copyright(c) 2017 Intel Corporation.
> > > + * Copyright(c) 2017-2020 Intel Corporation.
> > >   */
> > >
> > >  #ifndef _RTE_SECURITY_H_
> > > @@ -307,10 +307,14 @@ enum rte_security_session_action_type {
> > >  	/**< All security protocol processing is performed inline during
> > >  	 * transmission
> > >  	 */
> > > -	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > > +	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
> > >  	/**< All security protocol processing including crypto is performed
> > >  	 * on a lookaside accelerator
> > >  	 */
> > > +	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> > > +	/**< Crypto processing for security protocol is processed by CPU
> > > +	 * synchronously
> > > +	 */
> > I am not able to see the need for this enum.
> >
> > It is used by the app and ipsec library to identify the cpu-crypto codepath.
> >
> > I don't see any security action being performed for this action_type.
> >
> > This enum is just like NONE which is not used beyond the application/lib.
> > I think this needs to be documented properly in the description of the enum.
> >
> > It should be something like
> >
> > Similar to ACTION_TYPE_NONE, but the crypto processing is done on the CPU
> > synchronously.
> >
> > Also add documentation of this in the rte_security.rst in this patch only.
> > There should not be any separate patch for documentation.
> >
> 
> Could you please send the update to the patches that I requested.
> I wanted to apply these today.

Marcin is working on v6 to address your comments.
We plan to send it by COB today.
Konstantin 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode
  2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
                       ` (7 preceding siblings ...)
  2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
@ 2020-02-04 13:12     ` Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
                         ` (8 more replies)
  8 siblings, 9 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Originally both SW and HW crypto PMDs use rte_crypot_op based API to
process the crypto workload asynchronously. This way provides uniformity
to both PMD types, but also introduce unnecessary performance penalty to
SW PMDs that have to "simulate" HW async behavior (crypto-ops
enqueue/dequeue, HW addresses computations, storing/dereferencing user
provided data (mbuf) for each crypto-op, etc).

The aim is to introduce a new optional API for SW crypto-devices
to perform crypto processing in a synchronous manner.

v3 to v4 changes:
 - add feature discovery in the ipsec example application when
   using cpu-crypto
 - add gmac in aesni-gcm
 - add tests for aesni-gcm/cpu crypto mode
 - add documentation: pg and rel notes
 - remove xform flags as no longer needed
 - add some extra API comments
 - remove compilation error from v3

v4 to v5 changes:
 - fixed build error for arm64 (missing header include)
 - update licensing information

v5 to v6 changes:
 - unit tests integrated in the current test application for cryptodev
 - iova fix
 - moved the mbuf-to-SGL helper function to the crypto sym header

Marcin Smoczynski (8):
  cryptodev: introduce cpu crypto support API
  crypto/aesni_gcm: cpu crypto support
  security: add cpu crypto action type
  test/crypto: add cpu crypto mode to tests
  ipsec: introduce support for cpu crypto mode
  examples/ipsec-secgw: cpu crypto support
  examples/ipsec-secgw: cpu crypto testing
  doc: add release notes for cpu crypto

 app/test/test_cryptodev.c                     | 161 ++++++++++++-
 doc/guides/cryptodevs/aesni_gcm.rst           |   7 +-
 doc/guides/cryptodevs/features/aesni_gcm.ini  |   1 +
 doc/guides/cryptodevs/features/default.ini    |   1 +
 doc/guides/prog_guide/cryptodev_lib.rst       |  33 ++-
 doc/guides/prog_guide/ipsec_lib.rst           |  10 +-
 doc/guides/prog_guide/rte_security.rst        |  15 +-
 doc/guides/rel_notes/release_20_02.rst        |   7 +
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |  11 +-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 222 +++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   4 +-
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  13 +-
 examples/ipsec-secgw/ipsec.c                  |  25 +-
 examples/ipsec-secgw/ipsec_process.c          | 136 +++++++----
 examples/ipsec-secgw/sa.c                     |  30 ++-
 examples/ipsec-secgw/test/common_defs.sh      |  21 ++
 examples/ipsec-secgw/test/linux_test4.sh      |  11 +-
 examples/ipsec-secgw/test/linux_test6.sh      |  11 +-
 .../test/trs_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/trs_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/trs_aesctr_sha1_common_defs.sh       |   8 +-
 .../test/tun_3descbc_sha1_common_defs.sh      |   8 +-
 .../test/tun_aescbc_sha1_common_defs.sh       |   8 +-
 .../test/tun_aesctr_sha1_common_defs.sh       |   8 +-
 lib/librte_cryptodev/rte_crypto_sym.h         | 128 +++++++++-
 lib/librte_cryptodev/rte_cryptodev.c          |  35 ++-
 lib/librte_cryptodev/rte_cryptodev.h          |  22 +-
 lib/librte_cryptodev/rte_cryptodev_pmd.h      |  21 +-
 .../rte_cryptodev_version.map                 |   1 +
 lib/librte_ipsec/esp_inb.c                    | 156 ++++++++++--
 lib/librte_ipsec/esp_outb.c                   | 136 ++++++++++-
 lib/librte_ipsec/misc.h                       |  73 +++++-
 lib/librte_ipsec/rte_ipsec.h                  |  20 +-
 lib/librte_ipsec/sa.c                         | 114 +++++++--
 lib/librte_ipsec/sa.h                         |  19 +-
 lib/librte_ipsec/ses.c                        |   5 +-
 lib/librte_security/rte_security.h            |   8 +-
 37 files changed, 1311 insertions(+), 194 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 14:57         ` Akhil Goyal
                           ` (2 more replies)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
                         ` (7 subsequent siblings)
  8 siblings, 3 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add a new API allowing crypto operations to be processed in a
synchronous manner. Operations are performed on a set of SG arrays.

Cryptodevs which allow the CPU crypto operation mode have to
set the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability flag.

Add a helper method to easily convert mbufs to an SGL form.
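
A hedged sketch of that helper in use (the wrapper and the 32-segment bound
are illustrative, not part of the API):

    #include <errno.h>
    #include <rte_crypto_sym.h>
    #include <rte_mbuf.h>

    #define MAX_SEG 32  /* illustrative upper bound on mbuf segments */

    /* Build an SGL covering 'len' bytes of 'mb' starting at offset 'ofs'. */
    static int
    mbuf_to_sgl(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
            struct rte_crypto_vec vec[MAX_SEG], struct rte_crypto_sgl *sgl)
    {
            int rc;

            rc = rte_crypto_mbuf_to_vec(mb, ofs, len, vec, MAX_SEG);
            if (rc < 0)
                    return -ENOSPC; /* -rc entries would be needed */

            sgl->vec = vec;
            sgl->num = rc;
            return 0;
    }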

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 doc/guides/cryptodevs/features/default.ini    |   1 +
 doc/guides/prog_guide/cryptodev_lib.rst       |  33 ++++-
 lib/librte_cryptodev/rte_crypto_sym.h         | 128 +++++++++++++++++-
 lib/librte_cryptodev/rte_cryptodev.c          |  35 ++++-
 lib/librte_cryptodev/rte_cryptodev.h          |  22 ++-
 lib/librte_cryptodev/rte_cryptodev_pmd.h      |  21 ++-
 .../rte_cryptodev_version.map                 |   1 +
 7 files changed, 236 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 304a6a94f..a14ee87d9 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -27,6 +27,7 @@ RSA PRIV OP KEY EXP    =
 RSA PRIV OP KEY QT     =
 Digest encrypted       =
 Asymmetric sessionless =
+CPU crypto             =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index ac1643774..b91f7c8b7 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2016-2017 Intel Corporation.
+    Copyright(c) 2016-2020 Intel Corporation.
 
 Cryptography Device Library
 ===========================
@@ -600,6 +600,37 @@ chain.
         };
     };
 
+Synchronous mode
+----------------
+
+Some cryptodevs support a synchronous mode alongside the standard asynchronous
+mode. In that case operations are performed directly when calling the
+``rte_cryptodev_sym_cpu_crypto_process`` function, instead of enqueuing and
+dequeuing operations. This mode of operation allows cryptodevs which
+utilize CPU cryptographic acceleration to achieve a significant performance
+boost compared to the standard asynchronous approach. Cryptodevs supporting
+synchronous mode have the ``RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO`` feature flag set.
+
+To perform a synchronous operation, a call to
+``rte_cryptodev_sym_cpu_crypto_process`` has to be made with a vectorized
+operation descriptor (``struct rte_crypto_sym_vec``) containing:
+
+- ``num`` - number of operations to perform,
+- pointer to an array of size ``num`` containing a scatter-gather list
+  descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
+  of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
+  an array of segment descriptors ``struct rte_crypto_vec``;
+- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointer to an array of size ``num`` where status information will be stored
+  for each operation.
+
+The function returns the number of successfully completed operations and sets
+the appropriate status for each operation in the status array provided as
+a call argument. A status different from zero must be treated as an error.
+
+For more details, e.g. how to convert an mbuf to an SGL, please refer to the
+example usage in the IPsec library implementation.
+
 Sample code
 -----------
 
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index bc356f6ff..5ca55a5e0 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2019 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _RTE_CRYPTO_SYM_H_
@@ -25,6 +25,67 @@ extern "C" {
 #include <rte_mempool.h>
 #include <rte_common.h>
 
+/**
+ * Crypto IO Vector (in analogy with struct iovec)
+ * Supposed be used to pass input/output data buffers for crypto data-path
+ * functions.
+ */
+struct rte_crypto_vec {
+	/** virtual address of the data buffer */
+	void *base;
+	/** IOVA of the data buffer */
+	rte_iova_t iova;
+	/** length of the data buffer */
+	uint32_t len;
+};
+
+/**
+ * Crypto scatter-gather list descriptor. Consists of a pointer to an array
+ * of Crypto IO vectors with its size.
+ */
+struct rte_crypto_sgl {
+	/** start of an array of vectors */
+	struct rte_crypto_vec *vec;
+	/** size of an array of vectors */
+	uint32_t num;
+};
+
+/**
+ * Synchronous operation descriptor.
+ * Supposed to be used with CPU crypto API call.
+ */
+struct rte_crypto_sym_vec {
+	/** array of SGL vectors */
+	struct rte_crypto_sgl *sgl;
+	/** array of pointers to IV */
+	void **iv;
+	/** array of pointers to AAD */
+	void **aad;
+	/** array of pointers to digest */
+	void **digest;
+	/**
+	 * array of statuses for each operation:
+	 *  - 0 on success
+	 *  - errno on error
+	 */
+	int32_t *status;
+	/** number of operations to perform */
+	uint32_t num;
+};
+
+/**
+ * used by rte_cryptodev_sym_cpu_crypto_process() to specify head/tail offsets
+ * for auth/cipher processing.
+ */
+union rte_crypto_sym_ofs {
+	uint64_t raw;
+	struct {
+		struct {
+			uint16_t head;
+			uint16_t tail;
+		} auth, cipher;
+	} ofs;
+};
 
 /** Symmetric Cipher Algorithms */
 enum rte_crypto_cipher_algorithm {
@@ -798,6 +859,71 @@ __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
 	return 0;
 }
 
+/**
+ * Converts portion of mbuf data into a vector representation.
+ * Each segment will be represented as a separate entry in *vec* array.
+ * Expects the provided *ofs* + *len* not to exceed the mbuf's *pkt_len*.
+ * @param mb
+ *   Pointer to the *rte_mbuf* object.
+ * @param ofs
+ *   Offset within mbuf data to start with.
+ * @param len
+ *   Length of data to represent.
+ * @return
+ *   - number of successfully filled entries in *vec* array.
+ *   - negative number of elements in *vec* array required.
+ */
+__rte_experimental
+static inline int
+rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
+	struct rte_crypto_vec vec[], uint32_t num)
+{
+	uint32_t i;
+	struct rte_mbuf *nseg;
+	uint32_t left;
+	uint32_t seglen;
+
+	/* assuming that requested data starts in the first segment */
+	RTE_ASSERT(mb->data_len > ofs);
+
+	if (mb->nb_segs > num)
+		return -mb->nb_segs;
+
+	vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs);
+	vec[0].iova = rte_pktmbuf_iova_offset(mb, ofs);
+
+	/* whole data lies in the first segment */
+	seglen = mb->data_len - ofs;
+	if (len <= seglen) {
+		vec[0].len = len;
+		return 1;
+	}
+
+	/* data spread across segments */
+	vec[0].len = seglen;
+	left = len - seglen;
+	for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
+
+		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
+		vec[i].iova = rte_pktmbuf_iova(nseg);
+
+		seglen = nseg->data_len;
+		if (left <= seglen) {
+			/* whole requested data is completed */
+			vec[i].len = left;
+			left = 0;
+			break;
+		}
+
+		/* use whole segment */
+		vec[i].len = seglen;
+		left -= seglen;
+	}
+
+	RTE_ASSERT(left == 0);
+	return i + 1;
+}
+
 
 #ifdef __cplusplus
 }
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 5c6359b5c..889d61319 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2017 Intel Corporation
+ * Copyright(c) 2015-2020 Intel Corporation
  */
 
 #include <sys/types.h>
@@ -494,6 +494,8 @@ rte_cryptodev_get_feature_name(uint64_t flag)
 		return "RSA_PRIV_OP_KEY_QT";
 	case RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED:
 		return "DIGEST_ENCRYPTED";
+	case RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO:
+		return "SYM_CPU_CRYPTO";
 	default:
 		return NULL;
 	}
@@ -1619,6 +1621,37 @@ rte_cryptodev_sym_session_get_user_data(
 	return (void *)(sess->sess_data + sess->nb_drivers);
 }
 
+static inline void
+sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		sym_crypto_fill_status(vec, EINVAL);
+		return 0;
+	}
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (*dev->dev_ops->sym_cpu_process == NULL ||
+		!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO)) {
+		sym_crypto_fill_status(vec, ENOTSUP);
+		return 0;
+	}
+
+	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
+}
+
 /** Initialise rte_crypto_op mempool element */
 static void
 rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index c6ffa3b35..437b8a9b3 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2017 Intel Corporation.
+ * Copyright(c) 2015-2020 Intel Corporation.
  */
 
 #ifndef _RTE_CRYPTODEV_H_
@@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support encrypted-digest operations where digest is appended to data */
 #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS		(1ULL << 20)
 /**< Support asymmetric session-less operations */
+#define	RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO			(1ULL << 21)
+/**< Support symmetric cpu-crypto processing */
 
 
 /**
@@ -1274,6 +1276,24 @@ void *
 rte_cryptodev_sym_session_get_user_data(
 					struct rte_cryptodev_sym_session *sess);
 
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev_id	The device identifier.
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index fba14f2fa..0e6b5f443 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2016 Intel Corporation.
+ * Copyright(c) 2015-2020 Intel Corporation.
  */
 
 #ifndef _RTE_CRYPTODEV_PMD_H_
@@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
  */
 typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
 		struct rte_cryptodev_asym_session *sess);
+/**
+ * Perform actual crypto processing (encrypt/digest or auth/decrypt)
+ * on user provided data.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	sess	Cryptodev session structure
+ * @param	ofs	Start and stop offsets for auth and cipher operations
+ * @param	vec	Vectorized operation descriptor
+ *
+ * @return
+ *  - Returns number of successfully processed packets.
+ *
+ */
+typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
+	(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
+	union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+
 
 /** Crypto device operations function pointer table */
 struct rte_cryptodev_ops {
@@ -342,6 +359,8 @@ struct rte_cryptodev_ops {
 	/**< Clear a Crypto sessions private data. */
 	cryptodev_asym_free_session_t asym_session_clear;
 	/**< Clear a Crypto sessions private data. */
+	cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+	/**< process input data synchronously (cpu-crypto). */
 };
 
 
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 1dd1e259a..6e41b4be5 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -71,6 +71,7 @@ EXPERIMENTAL {
 	rte_cryptodev_asym_session_init;
 	rte_cryptodev_asym_xform_capability_check_modlen;
 	rte_cryptodev_asym_xform_capability_check_optype;
+	rte_cryptodev_sym_cpu_crypto_process;
 	rte_cryptodev_sym_get_existing_header_session_size;
 	rte_cryptodev_sym_session_get_user_data;
 	rte_cryptodev_sym_session_pool_create;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread
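
Putting the pieces of this patch together, a hedged end-to-end sketch for one
synchronous operation (session setup, offsets and buffer pointers are assumed
to be prepared elsewhere):

    #include <errno.h>
    #include <rte_crypto_sym.h>
    #include <rte_cryptodev.h>

    /* Process one symmetric operation synchronously; returns 0 on success. */
    static int
    cpu_crypto_one(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
            union rte_crypto_sym_ofs ofs, struct rte_crypto_sgl *sgl,
            void *iv, void *aad, void *digest)
    {
            int32_t status = 0;
            struct rte_crypto_sym_vec vec = {
                    .sgl = sgl,
                    .iv = &iv,
                    .aad = &aad,
                    .digest = &digest,
                    .status = &status,
                    .num = 1,
            };

            if (rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs,
                            &vec) != 1)
                    return status != 0 ? -status : -EINVAL;
            return 0;
    }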

* [dpdk-dev] [PATCH v6 2/8] crypto/aesni_gcm: cpu crypto support
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type Marcin Smoczynski
                         ` (6 subsequent siblings)
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU crypto mode by introducing the required handler.
Authenticated encryption and decryption are supported with tag
generation/verification.

CPU crypto support includes both AES-GCM and GMAC algorithms.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/cryptodevs/aesni_gcm.rst           |   7 +-
 doc/guides/cryptodevs/features/aesni_gcm.ini  |   1 +
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |  11 +-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 222 +++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   4 +-
 .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  13 +-
 6 files changed, 247 insertions(+), 11 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 151aa3060..a25b63109 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2016-2019 Intel Corporation.
+    Copyright(c) 2016-2020 Intel Corporation.
 
 AES-NI GCM Crypto Poll Mode Driver
 ==================================
@@ -9,6 +9,11 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
 support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD documentation
 to learn more about it, including installation).
 
+The AES-NI GCM PMD supports a synchronous mode of operation with the
+``rte_cryptodev_sym_cpu_crypto_process`` function call for both AES-GCM and
+GMAC; however, GMAC support is limited to one segment per operation. Please
+refer to the ``rte_crypto`` programmer's guide for more details.
+
 Features
 --------
 
diff --git a/doc/guides/cryptodevs/features/aesni_gcm.ini b/doc/guides/cryptodevs/features/aesni_gcm.ini
index 87eac0fbf..949d6a088 100644
--- a/doc/guides/cryptodevs/features/aesni_gcm.ini
+++ b/doc/guides/cryptodevs/features/aesni_gcm.ini
@@ -14,6 +14,7 @@ CPU AVX512             = Y
 In Place SGL           = Y
 OOP SGL In LB  Out     = Y
 OOP LB  In LB  Out     = Y
+CPU crypto             = Y
 ;
 ; Supported crypto algorithms of the 'aesni_gcm' crypto driver.
 ;
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index e272f1067..74acac09c 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _AESNI_GCM_OPS_H_
@@ -65,4 +65,13 @@ struct aesni_gcm_ops {
 	aesni_gcm_finalize_t finalize_dec;
 };
 
+/** GCM per-session operation handlers */
+struct aesni_gcm_session_ops {
+	aesni_gcm_t cipher;
+	aesni_gcm_pre_t pre;
+	aesni_gcm_init_t init;
+	aesni_gcm_update_t update;
+	aesni_gcm_finalize_t finalize;
+};
+
 #endif /* _AESNI_GCM_OPS_H_ */
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1a03be31d..a1caab993 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #include <rte_common.h>
@@ -15,6 +15,31 @@
 
 static uint8_t cryptodev_driver_id;
 
+/* setup session handlers */
+static void
+set_func_ops(struct aesni_gcm_session *s, const struct aesni_gcm_ops *gcm_ops)
+{
+	s->ops.pre = gcm_ops->pre;
+	s->ops.init = gcm_ops->init;
+
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		s->ops.cipher = gcm_ops->enc;
+		s->ops.update = gcm_ops->update_enc;
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		s->ops.cipher = gcm_ops->dec;
+		s->ops.update = gcm_ops->update_dec;
+		s->ops.finalize = gcm_ops->finalize_dec;
+		break;
+	case AESNI_GMAC_OP_GENERATE:
+	case AESNI_GMAC_OP_VERIFY:
+		s->ops.finalize = gcm_ops->finalize_enc;
+		break;
+	}
+}
+
 /** Parse crypto xform chain and set private session parameters */
 int
 aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
@@ -65,6 +90,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		/* Select Crypto operation */
 		if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT)
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION;
+		/* op == RTE_CRYPTO_AEAD_OP_DECRYPT */
 		else
 			sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION;
 
@@ -78,7 +104,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -ENOTSUP;
 	}
 
-
 	/* IV check */
 	if (sess->iv.length != 16 && sess->iv.length != 12 &&
 			sess->iv.length != 0) {
@@ -102,6 +127,10 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
 		return -EINVAL;
 	}
 
+	/* setup session handlers */
+	set_func_ops(sess, &gcm_ops[sess->key]);
+
+	/* pre-generate key */
 	gcm_ops[sess->key].pre(key, &sess->gdata_key);
 
 	/* Digest check */
@@ -356,6 +385,191 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
 	return 0;
 }
 
+static inline void
+aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum)
+{
+	uint32_t i;
+
+	for (i = 0; i < vec->num; i++)
+		vec->status[i] = errnum;
+}
+
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_encryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	if (s->req_digest_length != s->gen_digest_length) {
+		uint8_t tmpdigest[s->gen_digest_length];
+
+		s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+			s->gen_digest_length);
+		memcpy(digest, tmpdigest, s->req_digest_length);
+	} else {
+		s->ops.finalize(&s->gdata_key, gdata_ctx, digest,
+			s->gen_digest_length);
+	}
+
+	return 0;
+}
+
+static inline int32_t
+aesni_gcm_sgl_op_finalize_decryption(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, uint8_t *digest)
+{
+	uint8_t tmpdigest[s->gen_digest_length];
+
+	s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest,
+		s->gen_digest_length);
+
+	return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 :
+		EBADMSG;
+}
+
+static inline void
+aesni_gcm_process_gcm_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv, void *aad)
+{
+	uint32_t i;
+
+	/* init crypto operation */
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, aad,
+		(uint64_t)s->aad_length);
+
+	/* update with sgl data */
+	for (i = 0; i < sgl->num; i++) {
+		struct rte_crypto_vec *vec = &sgl->vec[i];
+
+		s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base,
+			vec->len);
+	}
+}
+
+static inline void
+aesni_gcm_process_gmac_sgl_op(const struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl,
+	void *iv)
+{
+	s->ops.init(&s->gdata_key, gdata_ctx, iv, sgl->vec[0].base,
+		sgl->vec[0].len);
+}
+
+static inline uint32_t
+aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i], vec->aad[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+static inline uint32_t
+aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
+	struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec)
+{
+	uint32_t i, processed;
+
+	processed = 0;
+	for (i = 0; i < vec->num; ++i) {
+		if (vec->sgl[i].num != 1) {
+			vec->status[i] = ENOTSUP;
+			continue;
+		}
+
+		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
+			&vec->sgl[i], vec->iv[i]);
+		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
+			gdata_ctx, vec->digest[i]);
+		processed += (vec->status[i] == 0);
+	}
+
+	return processed;
+}
+
+/** Process CPU crypto bulk operations */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess,
+	__rte_unused union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec)
+{
+	void *sess_priv;
+	struct aesni_gcm_session *s;
+	struct gcm_context_data gdata_ctx;
+
+	sess_priv = get_sym_session_private_data(sess, dev->driver_id);
+	if (unlikely(sess_priv == NULL)) {
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+
+	s = sess_priv;
+	switch (s->op) {
+	case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION:
+		return aesni_gcm_sgl_encrypt(s, &gdata_ctx, vec);
+	case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION:
+		return aesni_gcm_sgl_decrypt(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_GENERATE:
+		return aesni_gmac_sgl_generate(s, &gdata_ctx, vec);
+	case AESNI_GMAC_OP_VERIFY:
+		return aesni_gmac_sgl_verify(s, &gdata_ctx, vec);
+	default:
+		aesni_gcm_fill_error_code(vec, EINVAL);
+		return 0;
+	}
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -527,7 +741,8 @@ aesni_gcm_create(const char *name,
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO;
 
 	/* Check CPU for support for AES instruction set */
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
@@ -672,7 +887,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
 RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver,
 		cryptodev_driver_id);
 
-
 RTE_INIT(aesni_gcm_init_log)
 {
 	aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm");
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 2f66c7c58..c5e0878f5 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #include <string.h>
@@ -331,6 +331,8 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
 		.queue_pair_release	= aesni_gcm_pmd_qp_release,
 		.queue_pair_count	= aesni_gcm_pmd_qp_count,
 
+		.sym_cpu_process        = aesni_gcm_pmd_cpu_crypto_process,
+
 		.sym_session_get_size	= aesni_gcm_pmd_sym_session_get_size,
 		.sym_session_configure	= aesni_gcm_pmd_sym_session_configure,
 		.sym_session_clear	= aesni_gcm_pmd_sym_session_clear
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2039adb53..080d4f7e4 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 #ifndef _AESNI_GCM_PMD_PRIVATE_H_
@@ -92,6 +92,8 @@ struct aesni_gcm_session {
 	/**< GCM key type */
 	struct gcm_key_data gdata_key;
 	/**< GCM parameters */
+	struct aesni_gcm_session_ops ops;
+	/**< Session handlers */
 };
 
 
@@ -109,10 +111,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops,
 		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform);
 
-
-/**
- * Device specific operations function pointer structure */
+/* Device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops;
 
+/** CPU crypto bulk process handler */
+uint32_t
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev,
+	struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_sym_vec *vec);
 
 #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 14:58         ` Akhil Goyal
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests Marcin Smoczynski
                         ` (5 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Introduce a CPU crypto action type that allows differentiating between
regular asynchronous 'none security' sessions and synchronous,
CPU crypto accelerated ones.

This mode is similar to ACTION_TYPE_NONE but crypto processing is
performed synchronously on a CPU.
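
For illustration only (a minimal sketch, not part of this patch; names
are illustrative), an application can branch on the new action type the
same way it already does for the existing ones:

  switch (ips->type) {
  case RTE_SECURITY_ACTION_TYPE_NONE:
  	/* asynchronous lookaside path via crypto-ops */
  	break;
  case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
  	/* synchronous crypto processing on the current lcore */
  	break;
  default:
  	/* inline / lookaside-protocol handling */
  	break;
  }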

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 doc/guides/prog_guide/rte_security.rst | 15 +++++++++++----
 lib/librte_security/rte_security.h     |  8 ++++++--
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f77fb89dc..9b5d249de 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -511,13 +511,20 @@ Offload.
         /**< No security actions */
         RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
         /**< Crypto processing for security protocol is processed inline
-         * during transmission */
+         * during transmission
+         */
         RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
         /**< All security protocol processing is performed inline during
-         * transmission */
-        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+         * transmission
+         */
+        RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
         /**< All security protocol processing including crypto is performed
-         * on a lookaside accelerator */
+         * on a lookaside accelerator
+         */
+        RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+        /**< Similar to ACTION_TYPE_NONE but crypto processing for security
+         * protocol is processed synchronously by a CPU.
+         */
     };
 
 The ``rte_security_session_protocol`` is defined as
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 546779df2..ef47118fa 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2017,2019 NXP
- * Copyright(c) 2017 Intel Corporation.
+ * Copyright(c) 2017-2020 Intel Corporation.
  */
 
 #ifndef _RTE_SECURITY_H_
@@ -307,10 +307,14 @@ enum rte_security_session_action_type {
 	/**< All security protocol processing is performed inline during
 	 * transmission
 	 */
-	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+	RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
 	/**< All security protocol processing including crypto is performed
 	 * on a lookaside accelerator
 	 */
+	RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+	/**< Similar to ACTION_TYPE_NONE but crypto processing for security
+	 * protocol is processed synchronously by a CPU.
+	 */
 };
 
 /** Security session protocol definition */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (2 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 14:59         ` Akhil Goyal
  2020-02-07 14:28         ` [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
                         ` (4 subsequent siblings)
  8 siblings, 2 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

This patch adds the ability to run unit tests in cpu crypto mode and
provides tests for the aesni_gcm cpu crypto implementation.
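
For illustration (not part of this patch): the suite registered below
as 'cryptodev_cpu_aesni_gcm_autotest' reruns the regular AESNI GCM test
suite with the global action type switched to CPU crypto, e.g. from the
test application prompt:

  RTE>>cryptodev_cpu_aesni_gcm_autotest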

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 app/test/test_cryptodev.c | 161 +++++++++++++++++++++++++++++++++++---
 1 file changed, 151 insertions(+), 10 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index b5aaca131..8748a6796 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2019 Intel Corporation
+ * Copyright(c) 2015-2020 Intel Corporation
  */
 
 #include <time.h>
@@ -52,6 +52,9 @@
 
 static int gbl_driver_id;
 
+static enum rte_security_session_action_type gbl_action_type =
+	RTE_SECURITY_ACTION_TYPE_NONE;
+
 struct crypto_testsuite_params {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *large_mbuf_pool;
@@ -139,9 +142,95 @@ ceil_byte_length(uint32_t num_bits)
 		return (num_bits >> 3);
 }
 
+static void
+process_cpu_gmac_op(uint8_t dev_id, struct rte_crypto_op *op)
+{
+	int32_t n, st;
+	void *iv;
+	struct rte_crypto_sym_op *sop;
+	union rte_crypto_sym_ofs ofs;
+	struct rte_crypto_sgl sgl;
+	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_vec vec[UINT8_MAX];
+
+	sop = op->sym;
+
+	n = rte_crypto_mbuf_to_vec(sop->m_src, sop->auth.data.offset,
+		sop->auth.data.length, vec, RTE_DIM(vec));
+
+	if (n < 0 || n != sop->m_src->nb_segs) {
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return;
+	}
+
+	sgl.vec = vec;
+	sgl.num = n;
+	symvec.sgl = &sgl;
+	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	symvec.iv = &iv;
+	symvec.aad = NULL;
+	symvec.digest = (void **)&sop->auth.digest.data;
+	symvec.status = &st;
+	symvec.num = 1;
+
+	ofs.raw = 0;
+
+	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
+		&symvec);
+
+	if (n != 1)
+		op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static void
+process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
+{
+	int32_t n, st;
+	void *iv;
+	struct rte_crypto_sym_op *sop;
+	union rte_crypto_sym_ofs ofs;
+	struct rte_crypto_sgl sgl;
+	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_vec vec[UINT8_MAX];
+
+	sop = op->sym;
+
+	n = rte_crypto_mbuf_to_vec(sop->m_src, sop->aead.data.offset,
+		sop->aead.data.length, vec, RTE_DIM(vec));
+
+	if (n < 0 || n != sop->m_src->nb_segs) {
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return;
+	}
+
+	sgl.vec = vec;
+	sgl.num = n;
+	symvec.sgl = &sgl;
+	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	symvec.iv = &iv;
+	symvec.aad = (void **)&sop->aead.aad.data;
+	symvec.digest = (void **)&sop->aead.digest.data;
+	symvec.status = &st;
+	symvec.num = 1;
+
+	ofs.raw = 0;
+
+	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
+		&symvec);
+
+	if (n != 1)
+		op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
 static struct rte_crypto_op *
 process_crypto_request(uint8_t dev_id, struct rte_crypto_op *op)
 {
+
 	if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
 		RTE_LOG(ERR, USER1, "Error sending packet for encryption\n");
 		return NULL;
@@ -7862,7 +7951,11 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -8760,7 +8853,11 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -10467,7 +10564,11 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
 
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -10571,14 +10672,17 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
 
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
 			"crypto op processing failed");
 
 	return 0;
-
 }
 
 static int
@@ -11176,10 +11280,16 @@ test_authentication_verify_GMAC_fail_when_corruption(
 	else
 		tag_corruption(plaintext, reference->aad.len);
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+		TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
+			RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"authentication not failed");
+	} else {
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op);
-
-	TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
+		TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
+	}
 
 	return 0;
 }
@@ -11708,7 +11818,12 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
 		ut_params->op->sym->m_dst = ut_params->obuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (oop == IN_PLACE &&
+			gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -14620,6 +14735,30 @@ test_cryptodev_aesni_gcm(void)
 	return unit_test_suite_runner(&cryptodev_aesni_gcm_testsuite);
 }
 
+static int
+test_cryptodev_cpu_aesni_gcm(void)
+{
+	int32_t rc;
+	enum rte_security_session_action_type at;
+
+	gbl_driver_id = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	if (gbl_driver_id == -1) {
+		RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+				"CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+				"in config file to run this testsuite.\n");
+		return TEST_SKIPPED;
+	}
+
+	at = gbl_action_type;
+	gbl_action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
+	rc = unit_test_suite_runner(&cryptodev_aesni_gcm_testsuite);
+	gbl_action_type = at;
+	return rc;
+}
+
+
 static int
 test_cryptodev_null(void)
 {
@@ -14858,6 +14997,8 @@ REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
 REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_autotest, test_cryptodev_aesni_gcm);
+REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_gcm_autotest,
+	test_cryptodev_cpu_aesni_gcm);
 REGISTER_TEST_COMMAND(cryptodev_null_autotest, test_cryptodev_null);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (3 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 14:59         ` Akhil Goyal
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
                         ` (3 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Update the library to handle the CPU crypto security mode, which
utilizes cryptodev's synchronous, CPU accelerated crypto operations.
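
For illustration (a minimal sketch, not part of this patch), the
synchronous datapath becomes a two step sequence on the application
side:

  /* ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO */
  uint16_t k;

  /* prepare packets and run crypto/auth on the current lcore */
  k = rte_ipsec_pkt_cpu_prepare(ss, mb, num);

  /* finalize: strip ESP headers/trailers, update SQN, etc. */
  k = rte_ipsec_pkt_process(ss, mb, k);

  /* mb[0] .. mb[k - 1] now hold the successfully processed packets */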

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/prog_guide/ipsec_lib.rst |  10 +-
 lib/librte_ipsec/esp_inb.c          | 156 ++++++++++++++++++++++++----
 lib/librte_ipsec/esp_outb.c         | 136 ++++++++++++++++++++++--
 lib/librte_ipsec/misc.h             |  73 ++++++++++++-
 lib/librte_ipsec/rte_ipsec.h        |  20 +++-
 lib/librte_ipsec/sa.c               | 114 ++++++++++++++++----
 lib/librte_ipsec/sa.h               |  19 +++-
 lib/librte_ipsec/ses.c              |   5 +-
 8 files changed, 475 insertions(+), 58 deletions(-)

diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 1ce0db453..0a860eb47 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -1,5 +1,5 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2018 Intel Corporation.
+    Copyright(c) 2018-2020 Intel Corporation.
 
 IPsec Packet Processing Library
 ===============================
@@ -81,6 +81,14 @@ In that mode the library functions perform
   - verify that crypto device operations (encryption, ICV generation)
     were completed successfully
 
+RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In that mode the library functions perform the same operations as in
+``RTE_SECURITY_ACTION_TYPE_NONE``. The only difference is that crypto
+operations are performed with the CPU crypto synchronous API.
+
+
 RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 5c653dd39..7b8ab81f6 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop,
 	}
 }
 
+static inline uint32_t
+inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *pofs, uint32_t plen, void *iv)
+{
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint64_t *ivp;
+	uint32_t clen;
+
+	ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+		*pofs + sizeof(struct rte_esp_hdr));
+	clen = 0;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = (struct aead_gcm_iv *)iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CBC:
+	case ALGO_TYPE_3DES_CBC:
+		copy_iv(iv, ivp, sa->iv_len);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = (struct aesctr_cnt_blk *)iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen - sa->ctp.auth.length;
+	return clen;
+}
+
 /*
  * Helper function for prepare() to deal with situation when
  * ICV is spread by two segments. Tries to move ICV completely into the
@@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	}
 }
 
-/*
- * setup/update packet data and metadata for ESP inbound tunnel case.
- */
-static inline int32_t
-inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
-	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+static inline int
+inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc)
 {
 	int32_t rc;
 	uint64_t sqn;
-	uint32_t clen, icv_len, icv_ofs, plen;
-	struct rte_mbuf *ml;
 	struct rte_esp_hdr *esph;
 
 	esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen);
@@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	sqn = rte_be_to_cpu_32(esph->seq);
 	if (IS_ESN(sa))
 		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+	*sqc = rte_cpu_to_be_64(sqn);
 
+	/* check IPsec window */
 	rc = esn_inb_check_sqn(rsn, sa, sqn);
-	if (rc != 0)
-		return rc;
 
-	sqn = rte_cpu_to_be_64(sqn);
+	return rc;
+}
+
+/* prepare packet for upcoming processing */
+static inline int32_t
+inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	uint32_t clen, icv_len, icv_ofs, plen;
+	struct rte_mbuf *ml;
 
 	/* start packet manipulation */
 	plen = mb->pkt_len;
@@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 
 	icv_ofs += sa->sqh_len;
 
-	/* we have to allocate space for AAD somewhere,
+	/*
+	 * we have to allocate space for AAD somewhere,
 	 * right now - just use free trailing space at the last segment.
 	 * Would probably be more convenient to reserve space for AAD
 	 * inside rte_crypto_op itself
@@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
 	mb->pkt_len += sa->sqh_len;
 	ml->data_len += sa->sqh_len;
 
-	inb_pkt_xprepare(sa, sqn, icv);
 	return plen;
 }
 
+static inline int32_t
+inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn,
+	struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv)
+{
+	int rc;
+	rte_be64_t sqn;
+
+	rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn);
+	if (rc != 0)
+		return rc;
+
+	rc = inb_prepare(sa, mb, hlen, icv);
+	if (rc < 0)
+		return rc;
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return rc;
+}
+
 /*
  * setup/update packets and crypto ops for ESP inbound case.
  */
@@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc);
 			k++;
-		} else
+		} else {
 			dr[i - k] = i;
+			rte_errno = -rc;
+		}
 	}
 
 	rsn_release(sa, rsn);
 
 	/* copy not prepared mbufs beyond good ones */
-	if (k != num && k != 0) {
+	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
-		rte_errno = EBADMSG;
-	}
 
 	return k;
 }
@@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return k;
 }
 
-
 /*
  * *process* function for tunnel packets
  */
@@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	if (k != num && k != 0)
 		move_bad_mbufs(mb, dr, num, num - k);
 
-	/* update SQN and replay winow */
+	/* update SQN and replay window */
 	n = esp_inb_rsn_update(sa, sqn, dr, k);
 
 	/* handle mbufs with wrong SQN */
@@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	return n;
 }
 
+/*
+ * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO
+ * (synchronous mode).
+ */
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	/* grab rsn lock */
+	rsn = rsn_acquire(sa);
+
+	/* do preparation for all packets */
+	for (i = 0, k = 0; i != num; i++) {
+
+		/* calculate ESP header offset */
+		l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len;
+
+		/* prepare ESP packet for processing */
+		rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv);
+		if (rc >= 0) {
+			/* get encrypted data offset and length */
+			clen[k] = inb_cpu_crypto_prepare(sa, mb[i],
+				l4ofs + k, rc, ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* release rsn lock */
+	rsn_release(sa, rsn);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
 /*
  * process group of ESP inbound tunnel packets.
  */
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index e983b25a3..b6d9cbe98 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -15,6 +15,9 @@
 #include "misc.h"
 #include "pad.h"
 
+typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv, uint8_t sqh_len);
 
 /*
  * helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = sa->proto;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 static inline int32_t
 outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	uint32_t l2len, uint32_t l3len, union sym_op_data *icv,
-	uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len)
 {
 	uint8_t np;
 	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	struct rte_esp_tail *espt;
 	char *ph, *pt;
 	uint64_t *iv;
+	uint32_t l2len, l3len;
+
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
 
 	uhlen = l2len + l3len;
 	plen = mb->pkt_len - uhlen;
@@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	espt->pad_len = pdlen;
 	espt->next_proto = np;
 
+	/* set icv va/pa value(s) */
 	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
 	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
 
@@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv,
-					  sa->sqh_len);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
+				  sa->sqh_len);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return k;
 }
 
+
+static inline uint32_t
+outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs,
+	uint32_t plen, void *iv)
+{
+	uint64_t *ivp = iv;
+	struct aead_gcm_iv *gcm;
+	struct aesctr_cnt_blk *ctr;
+	uint32_t clen;
+
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+		gcm = iv;
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+		break;
+	case ALGO_TYPE_AES_CTR:
+		ctr = iv;
+		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
+		break;
+	}
+
+	*pofs += sa->ctp.auth.offset;
+	clen = plen + sa->ctp.auth.length;
+	return clen;
+}
+
+static uint16_t
+cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num,
+		esp_outb_prepare_t prepare, uint32_t cofs_mask)
+{
+	int32_t rc;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	uint32_t i, k, n;
+	uint32_t l2, l3;
+	union sym_op_data icv;
+	void *iv[num];
+	void *aad[num];
+	void *dgst[num];
+	uint32_t dr[num];
+	uint32_t l4ofs[num];
+	uint32_t clen[num];
+	uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	for (i = 0, k = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		/* calculate ESP header offset */
+		l4ofs[k] = (l2 + l3) & cofs_mask;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(ivbuf[k], sqc);
+
+		/* try to update the packet itself */
+		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+
+		/* success, proceed with preparations */
+		if (rc >= 0) {
+
+			outb_pkt_xprepare(sa, sqc, &icv);
+
+			/* get encrypted data offset and length */
+			clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc,
+				ivbuf[k]);
+
+			/* fill iv, digest and aad */
+			iv[k] = ivbuf[k];
+			aad[k] = icv.va + sa->icv_len;
+			dgst[k++] = icv.va;
+		} else {
+			dr[i - k] = i;
+			rte_errno = -rc;
+		}
+	}
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		move_bad_mbufs(mb, dr, n, n - k);
+
+	/* convert mbufs to iovecs and do actual crypto/auth processing */
+	cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k);
+	return k;
+}
+
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0);
+}
+
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare,
+		UINT32_MAX);
+}
+
 /*
  * process outbound packets for SA with ESN support,
  * for algorithms that require SQN.hibits to be implictly included
@@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n, l2, l3;
+	uint32_t i, k, n;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
@@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	k = 0;
 	for (i = 0; i != n; i++) {
 
-		l2 = mb[i]->l2_len;
-		l3 = mb[i]->l3_len;
-
 		sqc = rte_cpu_to_be_64(sqn + i);
 		gen_iv(iv, sqc);
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
-				l2, l3, &icv, 0);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
 
 		k += (rc >= 0);
 
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index fe4641bfc..53c0457af 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _MISC_H_
@@ -105,4 +105,75 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
 	mb->pkt_len -= len;
 }
 
+/*
+ * process packets using sync crypto engine
+ */
+static inline void
+cpu_crypto_bulk(const struct rte_ipsec_session *ss,
+	union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
+	void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+	uint32_t clen[], uint32_t num)
+{
+	uint32_t i, j, n;
+	int32_t vcnt, vofs;
+	int32_t st[num];
+	struct rte_crypto_sgl vecpkt[num];
+	struct rte_crypto_vec vec[UINT8_MAX];
+	struct rte_crypto_sym_vec symvec;
+
+	const uint32_t vnum = RTE_DIM(vec);
+
+	j = 0, n = 0;
+	vofs = 0;
+	for (i = 0; i != num; i++) {
+
+		vcnt = rte_crypto_mbuf_to_vec(mb[i], l4ofs[i], clen[i],
+			&vec[vofs], vnum - vofs);
+
+		/* not enough space in vec[] to hold all segments */
+		if (vcnt < 0) {
+			/* fill the request structure */
+			symvec.sgl = &vecpkt[j];
+			symvec.iv = &iv[j];
+			symvec.aad = &aad[j];
+			symvec.digest = &dgst[j];
+			symvec.status = &st[j];
+			symvec.num = i - j;
+
+			/* flush vec array and try again */
+			n += rte_cryptodev_sym_cpu_crypto_process(
+				ss->crypto.dev_id, ss->crypto.ses, ofs,
+				&symvec);
+			vofs = 0;
+			vcnt = rte_crypto_mbuf_to_vec(mb[i], l4ofs[i], clen[i],
+				vec, vnum);
+			RTE_ASSERT(vcnt > 0);
+			j = i;
+		}
+
+		vecpkt[i].vec = &vec[vofs];
+		vecpkt[i].num = vcnt;
+		vofs += vcnt;
+	}
+
+	/* fill the request structure */
+	symvec.sgl = &vecpkt[j];
+	symvec.iv = &iv[j];
+	symvec.aad = &aad[j];
+	symvec.digest = &dgst[j];
+	symvec.status = &st[j];
+	symvec.num = i - j;
+
+	n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id,
+		ss->crypto.ses, ofs, &symvec);
+
+	j = num - n;
+	for (i = 0; j != 0 && i != num; i++) {
+		if (st[i] != 0) {
+			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			j--;
+		}
+	}
+}
+
 #endif /* _MISC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index f3b1f936b..6666cf761 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _RTE_IPSEC_H_
@@ -33,10 +33,15 @@ struct rte_ipsec_session;
  *   (see rte_ipsec_pkt_process for more details).
  */
 struct rte_ipsec_sa_pkt_func {
-	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+	union {
+		uint16_t (*async)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				struct rte_crypto_op *cop[],
 				uint16_t num);
+		uint16_t (*sync)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+	} prepare;
 	uint16_t (*process)(const struct rte_ipsec_session *ss,
 				struct rte_mbuf *mb[],
 				uint16_t num);
@@ -62,6 +67,7 @@ struct rte_ipsec_session {
 	union {
 		struct {
 			struct rte_cryptodev_sym_session *ses;
+			uint8_t dev_id;
 		} crypto;
 		struct {
 			struct rte_security_session *ses;
@@ -114,7 +120,15 @@ static inline uint16_t
 rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
 {
-	return ss->pkt_func.prepare(ss, mb, cop, num);
+	return ss->pkt_func.prepare.async(ss, mb, cop, num);
+}
+
+__rte_experimental
+static inline uint16_t
+rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	return ss->pkt_func.prepare.sync(ss, mb, num);
 }
 
 /**
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6f1d92c3c..ada195cf8 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -243,10 +243,26 @@ static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
 	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = 0;
-	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
 	sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len;
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (sa->algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = 0;
+		sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+		sa->cofs.ofs.cipher.tail = sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
 }
 
 /*
@@ -269,13 +285,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 
 	sa->sqn.outb.raw = 1;
 
-	/* these params may differ with new algorithms support */
-	sa->ctp.auth.offset = hlen;
-	sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
-		sa->iv_len + sa->sqh_len;
-
 	algo_type = sa->algo_type;
 
+	/*
+	 * Setup auth and cipher length and offset.
+	 * these params may differ with new algorithms support
+	 */
+
 	switch (algo_type) {
 	case ALGO_TYPE_AES_GCM:
 	case ALGO_TYPE_AES_CTR:
@@ -286,11 +302,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 		break;
 	case ALGO_TYPE_AES_CBC:
 	case ALGO_TYPE_3DES_CBC:
-		sa->ctp.cipher.offset = sa->hdr_len +
-			sizeof(struct rte_esp_hdr);
+		sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr);
 		sa->ctp.cipher.length = sa->iv_len;
 		break;
 	}
+
+	/*
+	 * for AEAD and NULL algorithms we can assume that
+	 * auth and cipher offsets would be equal.
+	 */
+	switch (algo_type) {
+	case ALGO_TYPE_AES_GCM:
+	case ALGO_TYPE_NULL:
+		sa->ctp.auth.raw = sa->ctp.cipher.raw;
+		break;
+	default:
+		sa->ctp.auth.offset = hlen;
+		sa->ctp.auth.length = sizeof(struct rte_esp_hdr) +
+			sa->iv_len + sa->sqh_len;
+		break;
+	}
+
+	sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset;
+	sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) -
+			(sa->ctp.cipher.offset + sa->ctp.cipher.length);
 }
 
 /*
@@ -544,9 +579,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss,
  * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
  * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
  */
-static uint16_t
-pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
-	uint16_t num)
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
 {
 	uint32_t i, k;
 	uint32_t dr[num];
@@ -588,21 +623,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
 	switch (sa->type & msk) {
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_tun_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_inb_pkt_prepare;
+		pf->prepare.async = esp_inb_pkt_prepare;
 		pf->process = esp_inb_trs_pkt_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
-		pf->prepare = esp_outb_tun_prepare;
+		pf->prepare.async = esp_outb_tun_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
 	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
-		pf->prepare = esp_outb_trs_prepare;
+		pf->prepare.async = esp_outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_inb_pkt_prepare;
+		pf->process = esp_inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare.sync = cpu_outb_tun_pkt_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			esp_outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare.sync = cpu_outb_trs_pkt_prepare;
 		pf->process = (sa->sqh_len != 0) ?
 			esp_outb_sqh_process : pkt_flag_process;
 		break;
@@ -660,7 +733,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	int32_t rc;
 
 	rc = 0;
-	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { {NULL}, NULL };
 
 	switch (ss->type) {
 	case RTE_SECURITY_ACTION_TYPE_NONE:
@@ -677,9 +750,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 			pf->process = inline_proto_outb_pkt_process;
 		break;
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
-		pf->prepare = lksd_proto_prepare;
+		pf->prepare.async = lksd_proto_prepare;
 		pf->process = pkt_flag_process;
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		rc = cpu_crypto_pkt_func_select(sa, pf);
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 51e69ad05..d22451b38 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #ifndef _SA_H_
@@ -88,6 +88,8 @@ struct rte_ipsec_sa {
 		union sym_op_ofslen cipher;
 		union sym_op_ofslen auth;
 	} ctp;
+	/* cpu-crypto offsets */
+	union rte_crypto_sym_ofs cofs;
 	/* tx_offload template for tunnel mbuf */
 	struct {
 		uint64_t msk;
@@ -156,6 +158,10 @@ uint16_t
 inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 /* outbound processing */
 
 uint16_t
@@ -170,6 +176,10 @@ uint16_t
 esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num);
 
+uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num);
+
 uint16_t
 inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
@@ -182,4 +192,11 @@ uint16_t
 inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num);
 
+uint16_t
+cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+uint16_t
+cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
index 82c765a33..3d51ac498 100644
--- a/lib/librte_ipsec/ses.c
+++ b/lib/librte_ipsec/ses.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2020 Intel Corporation
  */
 
 #include <rte_ipsec.h>
@@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss)
 	if (ss == NULL || ss->sa == NULL)
 		return -EINVAL;
 
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+		ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		if (ss->crypto.ses == NULL)
 			return -EINVAL;
 	} else {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (4 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 15:00         ` Akhil Goyal
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
                         ` (2 subsequent siblings)
  8 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has
been introduced in the configuration, allowing use of the
abovementioned acceleration.

Legacy mode is not currently supported.
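
For illustration, a hypothetical SA configuration fragment (the exact
rule syntax follows the existing ipsec-secgw configuration format; the
key and addresses below are placeholders):

  sa out 10 aead_algo aes-128-gcm \
      aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
      mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
      type cpu-crypto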

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 examples/ipsec-secgw/ipsec.c         |  25 ++++-
 examples/ipsec-secgw/ipsec_process.c | 136 +++++++++++++++++----------
 examples/ipsec-secgw/sa.c            |  30 ++++--
 3 files changed, 131 insertions(+), 60 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index d4b57121a..6e8120702 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 #include <sys/types.h>
 #include <netinet/in.h>
@@ -10,6 +10,7 @@
 #include <rte_crypto.h>
 #include <rte_security.h>
 #include <rte_cryptodev.h>
+#include <rte_ipsec.h>
 #include <rte_ethdev.h>
 #include <rte_mbuf.h>
 #include <rte_hash.h>
@@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			ipsec_ctx->tbl[cdev_id_qp].id,
 			ipsec_ctx->tbl[cdev_id_qp].qp);
 
-	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE &&
+		ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		struct rte_security_session_conf sess_conf = {
 			.action_type = ips->type,
 			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
@@ -126,6 +128,18 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			return -1;
 		}
 	} else {
+		if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+			struct rte_cryptodev_info info;
+			uint16_t cdev_id;
+
+			cdev_id = ipsec_ctx->tbl[cdev_id_qp].id;
+			rte_cryptodev_info_get(cdev_id, &info);
+			if (!(info.feature_flags &
+				RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO))
+				return -ENOTSUP;
+
+			ips->crypto.dev_id = cdev_id;
+		}
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
@@ -476,6 +490,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			rte_security_attach_session(&priv->cop,
 				ips->security.ses);
 			break;
+
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by the"
+					" legacy mode.");
+			rte_pktmbuf_free(pkts[i]);
+			continue;
+
 		case RTE_SECURITY_ACTION_TYPE_NONE:
 
 			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 2eb5c8b34..bb2f2b82d 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 #include <sys/types.h>
 #include <netinet/in.h>
@@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx,
 	int32_t rc;
 
 	/* setup crypto section */
-	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE ||
+			ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
 		RTE_ASSERT(ss->crypto.ses == NULL);
 		rc = create_lookaside_session(ctx, sa, ss);
 		if (rc != 0)
@@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 	return k;
 }
 
+/*
+ * helper routine for inline and cpu (synchronous) processing;
+ * this is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt().
+ * Should be removed in future.
+ */
+static inline void
+prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint32_t j;
+	struct ipsec_mbuf_metadata *priv;
+
+	for (j = 0; j != cnt; j++) {
+		priv = get_priv(mb[j]);
+		priv->sa = sa;
+	}
+}
+
+/*
+ * finish processing of packets successfully decrypted by an inline processor
+ */
+static uint32_t
+ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_process(ips, mb, cnt);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
+/*
+ * process packets synchronously
+ */
+static uint32_t
+ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa,
+	struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt)
+{
+	uint64_t satp;
+	uint32_t k;
+
+	/* get SA type */
+	satp = rte_ipsec_sa_type(ips->sa);
+	prep_process_group(sa, mb, cnt);
+
+	k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt);
+	k = rte_ipsec_pkt_process(ips, mb, k);
+	copy_to_trf(trf, satp, mb, k);
+	return k;
+}
+
 /*
  * Process ipsec packets.
  * If packet belong to SA that is subject of inline-crypto,
@@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa,
 void
 ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 {
-	uint64_t satp;
-	uint32_t i, j, k, n;
+	uint32_t i, k, n;
 	struct ipsec_sa *sa;
-	struct ipsec_mbuf_metadata *priv;
 	struct rte_ipsec_group *pg;
 	struct rte_ipsec_session *ips;
 	struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 	n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num);
 
 	for (i = 0; i != n; i++) {
+
 		pg = grp + i;
 		sa = ipsec_mask_saptr(pg->id.ptr);
 
-		ips = ipsec_get_primary_session(sa);
+		/* fall back to cryptodev for RX packets which the
+		 * inline processor was unable to process
+		 */
+		if (sa != NULL)
+			ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ?
+				ipsec_get_fallback_session(sa) :
+				ipsec_get_primary_session(sa);
 
 		/* no valid HW session for that SA, try to create one */
 		if (sa == NULL || (ips->crypto.ses == NULL &&
@@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
 			k = 0;
 
 		/* process packets inline */
-		else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
-				ips->type ==
-				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) {
-
-			/* get SA type */
-			satp = rte_ipsec_sa_type(ips->sa);
-
-			/*
-			 * This is just to satisfy inbound_sa_check()
-			 * and get_hop_for_offload_pkt().
-			 * Should be removed in future.
-			 */
-			for (j = 0; j != pg->cnt; j++) {
-				priv = get_priv(pg->m[j]);
-				priv->sa = sa;
+		else {
+			switch (ips->type) {
+			/* enqueue packets to crypto dev */
+			case RTE_SECURITY_ACTION_TYPE_NONE:
+			case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+				k = ipsec_prepare_crypto_group(ctx, sa, ips,
+					pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+			case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+				k = ipsec_process_inline_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+				k = ipsec_process_cpu_group(ips, sa,
+					trf, pg->m, pg->cnt);
+				break;
+			default:
+				k = 0;
 			}
-
-			/* fallback to cryptodev with RX packets which inline
-			 * processor was unable to process
-			 */
-			if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) {
-				/* offload packets to cryptodev */
-				struct rte_ipsec_session *fallback;
-
-				fallback = ipsec_get_fallback_session(sa);
-				if (fallback->crypto.ses == NULL &&
-					fill_ipsec_session(fallback, ctx, sa)
-					!= 0)
-					k = 0;
-				else
-					k = ipsec_prepare_crypto_group(ctx, sa,
-						fallback, pg->m, pg->cnt);
-			} else {
-				/* finish processing of packets successfully
-				 * decrypted by an inline processor
-				 */
-				k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt);
-				copy_to_trf(trf, satp, pg->m, k);
-
-			}
-		/* enqueue packets to crypto dev */
-		} else {
-			k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m,
-				pg->cnt);
 		}
 
 		/* drop packets that cannot be enqueued/processed */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index c75a5a15f..e9e8d624c 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2017 Intel Corporation
+ * Copyright(c) 2016-2020 Intel Corporation
  */
 
 /*
@@ -586,6 +586,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 				RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL;
 			else if (strcmp(tokens[ti], "no-offload") == 0)
 				ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
+			else if (strcmp(tokens[ti], "cpu-crypto") == 0)
+				ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
 			else {
 				APP_CHECK(0, status, "Invalid input \"%s\"",
 						tokens[ti]);
@@ -679,10 +681,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 	if (status->status < 0)
 		return;
 
-	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0))
+	if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0))
 		printf("Missing portid option, falling back to non-offload\n");
 
-	if (!type_p || !portid_p) {
+	if (!type_p || (!portid_p && ips->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) {
 		ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		rule->portid = -1;
 	}
@@ -768,15 +772,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
 		printf("lookaside-protocol-offload ");
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		printf("cpu-crypto-accelerated");
+		break;
 	}
 
 	fallback_ips = &sa->sessions[IPSEC_SESSION_FALLBACK];
 	if (fallback_ips != NULL && sa->fallback_sessions > 0) {
 		printf("inline fallback: ");
-		if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		switch (fallback_ips->type) {
+		case RTE_SECURITY_ACTION_TYPE_NONE:
 			printf("lookaside-none");
-		else
+			break;
+		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+			printf("cpu-crypto-accelerated");
+			break;
+		default:
 			printf("invalid");
+			break;
+		}
 	}
 	printf("\n");
 }
@@ -975,7 +989,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 		}
 
-
 		switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) {
 		case IP4_TUNNEL:
 			sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4);
@@ -1026,7 +1039,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					return -EINVAL;
 				}
 			}
-			print_one_sa_rule(sa, inbound);
 		} else {
 			switch (sa->cipher_algo) {
 			case RTE_CRYPTO_CIPHER_NULL:
@@ -1091,9 +1103,9 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
 			sa_ctx->xf[idx].b.next = NULL;
 			sa->xforms = &sa_ctx->xf[idx].a;
-
-			print_one_sa_rule(sa, inbound);
 		}
+
+		print_one_sa_rule(sa, inbound);
 	}
 
 	return 0;
-- 
2.17.1
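
For reference, a minimal sketch of an SA rule using the new 'cpu-crypto'
type (the SPI, keys and mode are placeholders in the same format as the
bundled test configs; note that no portid is required for this type):

sa out 7 cipher_algo aes-128-cbc \
cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
auth_algo sha1-hmac \
auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode transport type cpu-crypto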


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 7/8] examples/ipsec-secgw: cpu crypto testing
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (5 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 8/8] doc: add release notes for cpu crypto Marcin Smoczynski
  2020-02-05 15:03       ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Akhil Goyal
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Enable cpu-crypto mode testing by adding a dedicated environment
variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows
running the test scenarios with cpu crypto acceleration.
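
For example, a cpu-crypto test run might look like this (a sketch; it
assumes the remaining harness environment - secgw binary path, test
interfaces, remote host - is already configured as these scripts
require, and that the mode argument names one of the bundled scenarios):

export CRYPTO_PRIM_TYPE="type cpu-crypto"
./examples/ipsec-secgw/test/linux_test4.sh trs_aescbc_sha1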

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 examples/ipsec-secgw/test/common_defs.sh      | 21 +++++++++++++++++++
 examples/ipsec-secgw/test/linux_test4.sh      | 11 +---------
 examples/ipsec-secgw/test/linux_test6.sh      | 11 +---------
 .../test/trs_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/trs_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/trs_aesctr_sha1_common_defs.sh       |  8 +++----
 .../test/tun_3descbc_sha1_common_defs.sh      |  8 +++----
 .../test/tun_aescbc_sha1_common_defs.sh       |  8 +++----
 .../test/tun_aesctr_sha1_common_defs.sh       |  8 +++----
 9 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh
index 4aac4981a..6b6ae06f3 100644
--- a/examples/ipsec-secgw/test/common_defs.sh
+++ b/examples/ipsec-secgw/test/common_defs.sh
@@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc}
 DEF_MTU_LEN=1400
 DEF_PING_LEN=1200
 
+#update operation mode based on env var values
+select_mode()
+{
+	# select sync/async mode
+	if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "${CRYPTO_PRIM_TYPE} is enabled"
+		SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}"
+	fi
+
+	#make linux generate fragmented packets
+	if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
+		echo "multi-segment test is enabled"
+		SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
+		PING_LEN=5000
+		MTU_LEN=1500
+	else
+		PING_LEN=${DEF_PING_LEN}
+		MTU_LEN=${DEF_MTU_LEN}
+	fi
+}
+
 #setup mtu on local iface
 set_local_mtu()
 {
diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh
index 760451000..fb8ae1023 100644
--- a/examples/ipsec-secgw/test/linux_test4.sh
+++ b/examples/ipsec-secgw/test/linux_test4.sh
@@ -45,16 +45,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh
index 479f29be3..dbcca7936 100644
--- a/examples/ipsec-secgw/test/linux_test6.sh
+++ b/examples/ipsec-secgw/test/linux_test6.sh
@@ -46,16 +46,7 @@ MODE=$1
  . ${DIR}/common_defs.sh
  . ${DIR}/${MODE}_defs.sh
 
-#make linux to generate fragmented packets
-if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then
-	echo "multi-segment test is enabled"
-	SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}"
-	PING_LEN=5000
-	MTU_LEN=1500
-else
-	PING_LEN=${DEF_PING_LEN}
-	MTU_LEN=${DEF_MTU_LEN}
-fi
+select_mode
 
 config_secgw
 
diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
index 3c5c18afd..62118bb3f 100644
--- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,7 +48,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo 3des-cbc \
@@ -56,7 +56,7 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
index 9dbdd1765..7ddeb2b5a 100644
--- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
index 6aba680f9..f0178355a 100644
--- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh
@@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode transport
+mode transport ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
index 7c3226f84..d8869fad0 100644
--- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh
@@ -33,14 +33,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo 3des-cbc \
@@ -48,14 +48,14 @@ cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo 3des-cbc \
 cipher_key \
 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
index bdf5938a0..2616926b2 100644
--- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
index 06f2ef0c6..06b561fd7 100644
--- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
@@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH v6 8/8] doc: add release notes for cpu crypto
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (6 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
@ 2020-02-04 13:12       ` Marcin Smoczynski
  2020-02-05 15:03       ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Akhil Goyal
  8 siblings, 0 replies; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-04 13:12 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch
  Cc: dev, Marcin Smoczynski

Add a release note for cpu crypto, a new feature added to the cryptodev
API.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 doc/guides/rel_notes/release_20_02.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 50e2c1484..c4144ef44 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -143,6 +143,13 @@ New Features
   Added a new OCTEON TX2 rawdev PMD for End Point mode of operation.
   See the :doc:`../rawdevs/octeontx2_ep` for more details on this new PMD.
 
+* **Added synchronous Crypto burst API.**
+
+  A new API is introduced in the crypto library to handle synchronous
+  cryptographic operations, allowing performance gains for cryptodevs which
+  use CPU based acceleration, such as Intel AES-NI. An implementation for the
+  aesni_gcm cryptodev is provided, including unit tests. The IPsec example
+  application and the ipsec library itself were changed to use this feature.
 
 Removed Items
 -------------
-- 
2.17.1
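
To illustrate the flow described in the note above, here is a minimal
sketch of one synchronous in-place operation, modelled on
process_cpu_aead_op() from the test patch later in this thread (the
helper name and its parameters are illustrative, not part of the API,
and error handling is elided):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_crypto_sym.h>
#include <rte_cryptodev.h>

static int
cpu_crypto_one_op(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
	struct rte_mbuf *mb, uint32_t off, uint32_t len,
	void *iv, void *aad, void *digest)
{
	int32_t st;
	int n;
	struct rte_crypto_sgl sgl;
	struct rte_crypto_sym_vec symvec;
	union rte_crypto_sym_ofs ofs;
	struct rte_crypto_vec vec[UINT8_MAX];

	/* represent [off, off + len) of the mbuf as a scatter-gather list */
	n = rte_crypto_mbuf_to_vec(mb, off, len, vec, RTE_DIM(vec));
	if (n < 0)
		return -1;

	/* describe a single operation: one SGL plus IV/AAD/digest pointers */
	sgl.vec = vec;
	sgl.num = n;
	symvec.sgl = &sgl;
	symvec.iv = &iv;
	symvec.aad = &aad;
	symvec.digest = &digest;
	symvec.status = &st;
	symvec.num = 1;
	ofs.raw = 0;

	/* executed synchronously on the calling lcore - no enqueue/dequeue;
	 * returns the number of successfully processed operations and fills
	 * the per-op status (st)
	 */
	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &symvec);
	return (n == 1) ? 0 : -1;
}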


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
@ 2020-02-05 14:57         ` Akhil Goyal
  2020-02-06  0:48         ` Thomas Monjalon
  2020-02-06 12:36         ` [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment Marcin Smoczynski
  2 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 14:57 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

> 
> Add a new API allowing crypto operations to be processed in a
> synchronous manner. Operations are performed on a set of SG arrays.
>
> Cryptodevs which allow CPU crypto operation mode have to advertise
> the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
> 
> Add a helper method to easily convert mbufs to a SGL form.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type Marcin Smoczynski
@ 2020-02-05 14:58         ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 14:58 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

> Introduce a CPU crypto action type, allowing differentiation between
> regular async 'none security' and synchronous, CPU crypto accelerated
> sessions.
> 
> This mode is similar to ACTION_TYPE_NONE but crypto processing is
> performed synchronously on a CPU.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests Marcin Smoczynski
@ 2020-02-05 14:59         ` Akhil Goyal
  2020-02-07 14:28         ` [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests Marcin Smoczynski
  1 sibling, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 14:59 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

> 
> This patch adds the ability to run unit tests in cpu crypto mode and
> provides tests for aesni_gcm's cpu crypto implementation.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  app/test/test_cryptodev.c | 161 +++++++++++++++++++++++++++++++++++---
>  1 file changed, 151 insertions(+), 10 deletions(-)
> 
This patch has a merge conflict, so it was dropped while merging the
series. Please send it again and we will apply it in RC3.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
@ 2020-02-05 14:59         ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 14:59 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

> 
> Update the library to handle the CPU crypto security mode, which
> utilizes cryptodev's synchronous, CPU accelerated crypto operations.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
@ 2020-02-05 15:00         ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 15:00 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev

> 
> Add support for CPU accelerated crypto. A 'cpu-crypto' SA type has
> been introduced in the configuration, allowing use of the
> aforementioned acceleration.
> 
> Legacy mode is not currently supported.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode
  2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
                         ` (7 preceding siblings ...)
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 8/8] doc: add release notes for cpu crypto Marcin Smoczynski
@ 2020-02-05 15:03       ` Akhil Goyal
  8 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-05 15:03 UTC (permalink / raw)
  To: Marcin Smoczynski, konstantin.ananyev, roy.fan.zhang,
	declan.doherty, radu.nicolau, pablo.de.lara.guarch
  Cc: dev



> Originally both SW and HW crypto PMDs use the rte_crypto_op based API to
> process the crypto workload asynchronously. This approach provides
> uniformity to both PMD types, but also introduces an unnecessary
> performance penalty for SW PMDs that have to "simulate" HW async behavior
> (crypto-op enqueue/dequeue, HW address computations, storing/dereferencing
> user provided data (mbuf) for each crypto-op, etc).
> 
> The aim is to introduce a new optional API for SW crypto-devices
> to perform crypto processing in a synchronous manner.
> 
> v3 to v4 changes:
>  - add feature discovery in the ipsec example application when
>    using cpu-crypto
>  - add gmac in aesni-gcm
>  - add tests for aesni-gcm/cpu crypto mode
>  - add documentation: pg and rel notes
>  - remove xform flags as no longer needed
>  - add some extra API comments
>  - remove compilation error from v3
> 
> v4 to v5 changes:
>  - fixed build error for arm64 (missing header include)
>  - update licensing information
> 
> v5 to v6 changes:
>  - unit tests integrated in the current test application for cryptodev
>  - iova fix
>  - moved mbuf to sgl helper function to crypto sym header
> 
> Marcin Smoczynski (8):
>   cryptodev: introduce cpu crypto support API
>   crypto/aesni_gcm: cpu crypto support
>   security: add cpu crypto action type
>   test/crypto: add cpu crypto mode to tests
>   ipsec: introduce support for cpu crypto mode
>   examples/ipsec-secgw: cpu crypto support
>   examples/ipsec-secgw: cpu crypto testing
>   doc: add release notes for cpu crypto

Series applied to dpdk-next-crypto

The last patch was split and merged into the relevant patches.

The following patch was dropped while merging due to a merge conflict.
Please send it again and we will merge it in RC3:
"test/crypto: add cpu crypto mode to tests"


> 
>  app/test/test_cryptodev.c                     | 161 ++++++++++++-
>  doc/guides/cryptodevs/aesni_gcm.rst           |   7 +-
>  doc/guides/cryptodevs/features/aesni_gcm.ini  |   1 +
>  doc/guides/cryptodevs/features/default.ini    |   1 +
>  doc/guides/prog_guide/cryptodev_lib.rst       |  33 ++-
>  doc/guides/prog_guide/ipsec_lib.rst           |  10 +-
>  doc/guides/prog_guide/rte_security.rst        |  15 +-
>  doc/guides/rel_notes/release_20_02.rst        |   7 +
>  drivers/crypto/aesni_gcm/aesni_gcm_ops.h      |  11 +-
>  drivers/crypto/aesni_gcm/aesni_gcm_pmd.c      | 222 +++++++++++++++++-
>  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |   4 +-
>  .../crypto/aesni_gcm/aesni_gcm_pmd_private.h  |  13 +-
>  examples/ipsec-secgw/ipsec.c                  |  25 +-
>  examples/ipsec-secgw/ipsec_process.c          | 136 +++++++----
>  examples/ipsec-secgw/sa.c                     |  30 ++-
>  examples/ipsec-secgw/test/common_defs.sh      |  21 ++
>  examples/ipsec-secgw/test/linux_test4.sh      |  11 +-
>  examples/ipsec-secgw/test/linux_test6.sh      |  11 +-
>  .../test/trs_3descbc_sha1_common_defs.sh      |   8 +-
>  .../test/trs_aescbc_sha1_common_defs.sh       |   8 +-
>  .../test/trs_aesctr_sha1_common_defs.sh       |   8 +-
>  .../test/tun_3descbc_sha1_common_defs.sh      |   8 +-
>  .../test/tun_aescbc_sha1_common_defs.sh       |   8 +-
>  .../test/tun_aesctr_sha1_common_defs.sh       |   8 +-
>  lib/librte_cryptodev/rte_crypto_sym.h         | 128 +++++++++-
>  lib/librte_cryptodev/rte_cryptodev.c          |  35 ++-
>  lib/librte_cryptodev/rte_cryptodev.h          |  22 +-
>  lib/librte_cryptodev/rte_cryptodev_pmd.h      |  21 +-
>  .../rte_cryptodev_version.map                 |   1 +
>  lib/librte_ipsec/esp_inb.c                    | 156 ++++++++++--
>  lib/librte_ipsec/esp_outb.c                   | 136 ++++++++++-
>  lib/librte_ipsec/misc.h                       |  73 +++++-
>  lib/librte_ipsec/rte_ipsec.h                  |  20 +-
>  lib/librte_ipsec/sa.c                         | 114 +++++++--
>  lib/librte_ipsec/sa.h                         |  19 +-
>  lib/librte_ipsec/ses.c                        |   5 +-
>  lib/librte_security/rte_security.h            |   8 +-
>  37 files changed, 1311 insertions(+), 194 deletions(-)
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-02-05 14:57         ` Akhil Goyal
@ 2020-02-06  0:48         ` Thomas Monjalon
  2020-02-06 12:36         ` [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment Marcin Smoczynski
  2 siblings, 0 replies; 77+ messages in thread
From: Thomas Monjalon @ 2020-02-06  0:48 UTC (permalink / raw)
  To: Marcin Smoczynski
  Cc: akhil.goyal, konstantin.ananyev, roy.fan.zhang, declan.doherty,
	radu.nicolau, pablo.de.lara.guarch, dev

04/02/2020 14:12, Marcin Smoczynski:
> +/**
> + * Converts portion of mbuf data into a vector representation.
> + * Each segment will be represented as a separate entry in *vec* array.
> + * Expects that provided *ofs* + *len* not to exceed mbuf's *pkt_len*.
> + * @param mbuf
> + *   Pointer to the *rte_mbuf* object.
> + * @param ofs
> + *   Offset within mbuf data to start with.
> + * @param len
> + *   Length of data to represent.
> + * @return
> + *   - number of successfully filled entries in *vec* array.
> + *   - negative number of elements in *vec* array required.
> + */
> +__rte_experimental
> +static inline int
> +rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
> +	struct rte_crypto_vec vec[], uint32_t num)

The doxygen comment is incomplete. I workaround the miss with this change:

- * @param mbuf
+ * @param mb
  *   Pointer to the *rte_mbuf* object.
  * @param ofs
  *   Offset within mbuf data to start with.
  * @param len
  *   Length of data to represent.
+ * @param vec
+ * @param num

Please complete vec and num descriptions.




^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
  2020-02-05 14:57         ` Akhil Goyal
  2020-02-06  0:48         ` Thomas Monjalon
@ 2020-02-06 12:36         ` Marcin Smoczynski
  2020-02-06 12:43           ` Ananyev, Konstantin
  2 siblings, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-06 12:36 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, thomas; +Cc: dev, Marcin Smoczynski

Add the missing doxygen comments for rte_crypto_mbuf_to_vec's parameters.

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index deb46971f..9e887c110 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -861,7 +861,9 @@ __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
  * @param len
  *   Length of data to represent.
  * @param vec
+ *   Pointer to an output array of IO vectors.
  * @param num
+ *   Size of an output array.
  * @return
  *   - number of successfully filled entries in *vec* array.
  *   - negative number of elements in *vec* array required.
-- 
2.17.1
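
A small sketch of the sizing contract documented above (the helper name
is illustrative): when the supplied array is too short, the call returns
the required entry count negated, so a caller can size the array or fail
cleanly.

#include <rte_mbuf.h>
#include <rte_crypto_sym.h>

/* sketch: how many IO vector entries does this mbuf chain need? */
static int
mbuf_vec_entries(const struct rte_mbuf *mb)
{
	struct rte_crypto_vec vec[1];
	int n;

	/* with a too-small array the call returns -(entries required) */
	n = rte_crypto_mbuf_to_vec(mb, 0, mb->pkt_len, vec, RTE_DIM(vec));
	return (n < 0) ? -n : n;
}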


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment
  2020-02-06 12:36         ` [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment Marcin Smoczynski
@ 2020-02-06 12:43           ` Ananyev, Konstantin
  2020-02-12 13:15             ` Akhil Goyal
  0 siblings, 1 reply; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-02-06 12:43 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, thomas; +Cc: dev



> -----Original Message-----
> From: Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Sent: Thursday, February 6, 2020 12:36 PM
> To: akhil.goyal@nxp.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Smoczynski, MarcinX <marcinx.smoczynski@intel.com>
> Subject: [PATCH] cryptodev: fix missing doxygen comment
> 
> Add the missing doxygen comments for rte_crypto_mbuf_to_vec's parameters.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---
>  lib/librte_cryptodev/rte_crypto_sym.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
> index deb46971f..9e887c110 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -861,7 +861,9 @@ __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
>   * @param len
>   *   Length of data to represent.
>   * @param vec
> + *   Pointer to an output array of IO vectors.
>   * @param num
> + *   Size of an output array.
>   * @return
>   *   - number of successfully filled entries in *vec* array.
>   *   - negative number of elements in *vec* array required.
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests
  2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests Marcin Smoczynski
  2020-02-05 14:59         ` Akhil Goyal
@ 2020-02-07 14:28         ` Marcin Smoczynski
  2020-02-07 17:04           ` Ananyev, Konstantin
  1 sibling, 1 reply; 77+ messages in thread
From: Marcin Smoczynski @ 2020-02-07 14:28 UTC (permalink / raw)
  To: akhil.goyal, konstantin.ananyev, declan.doherty; +Cc: dev, Marcin Smoczynski

This patch adds the ability to run unit tests in cpu crypto mode for
the AESNI GCM cryptodev.
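
Once applied, the new suite can be run from the test application, e.g.
(a sketch; the binary location and EAL/vdev options depend on the local
build and available drivers):

./build/app/test --vdev="crypto_aesni_gcm"
RTE>> cryptodev_cpu_aesni_gcm_autotest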

Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
---
 app/test/test_cryptodev.c | 181 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 172 insertions(+), 9 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6abc22b6..7b1ef5c86 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2019 Intel Corporation
+ * Copyright(c) 2015-2020 Intel Corporation
  */
 
 #include <time.h>
@@ -52,6 +52,9 @@
 
 static int gbl_driver_id;
 
+static enum rte_security_session_action_type gbl_action_type =
+	RTE_SECURITY_ACTION_TYPE_NONE;
+
 struct crypto_testsuite_params {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *large_mbuf_pool;
@@ -139,9 +142,97 @@ ceil_byte_length(uint32_t num_bits)
 		return (num_bits >> 3);
 }
 
+static void
+process_cpu_gmac_op(uint8_t dev_id, struct rte_crypto_op *op)
+{
+	int32_t n, st;
+	void *iv;
+	struct rte_crypto_sym_op *sop;
+	union rte_crypto_sym_ofs ofs;
+	struct rte_crypto_sgl sgl;
+	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_vec vec[UINT8_MAX];
+
+	sop = op->sym;
+
+	n = rte_crypto_mbuf_to_vec(sop->m_src, sop->auth.data.offset,
+		sop->auth.data.length, vec, RTE_DIM(vec));
+
+	if (n < 0 || n != sop->m_src->nb_segs) {
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return;
+	}
+
+	sgl.vec = vec;
+	sgl.num = n;
+	symvec.sgl = &sgl;
+	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	symvec.iv = &iv;
+	symvec.aad = NULL;
+	symvec.digest = (void **)&sop->auth.digest.data;
+	symvec.status = &st;
+	symvec.num = 1;
+
+	ofs.raw = 0;
+
+	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
+		&symvec);
+
+	if (n != 1)
+		op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
+
+static void
+process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
+{
+	int32_t n, st;
+	void *iv;
+	struct rte_crypto_sym_op *sop;
+	union rte_crypto_sym_ofs ofs;
+	struct rte_crypto_sgl sgl;
+	struct rte_crypto_sym_vec symvec;
+	struct rte_crypto_vec vec[UINT8_MAX];
+
+	sop = op->sym;
+
+	n = rte_crypto_mbuf_to_vec(sop->m_src, sop->aead.data.offset,
+		sop->aead.data.length, vec, RTE_DIM(vec));
+
+	if (n < 0 || n != sop->m_src->nb_segs) {
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return;
+	}
+
+	sgl.vec = vec;
+	sgl.num = n;
+	symvec.sgl = &sgl;
+	iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+	symvec.iv = &iv;
+	symvec.aad = (void **)&sop->aead.aad.data;
+	symvec.digest = (void **)&sop->aead.digest.data;
+	symvec.status = &st;
+	symvec.num = 1;
+
+	ofs.raw = 0;
+
+	n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
+		&symvec);
+
+	if (n != 1)
+		op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
 static struct rte_crypto_op *
 process_crypto_request(uint8_t dev_id, struct rte_crypto_op *op)
 {
+
+	RTE_VERIFY(gbl_action_type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO);
+
 	if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
 		RTE_LOG(ERR, USER1, "Error sending packet for encryption\n");
 		return NULL;
@@ -6937,7 +7028,11 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -7868,7 +7963,11 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -8154,6 +8253,10 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
 			&cap_idx) == NULL)
 		return -ENOTSUP;
 
+	/* not supported with CPU crypto */
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return -ENOTSUP;
+
 	/* Create AEAD session */
 	retval = create_aead_session(ts_params->valid_devs[0],
 			tdata->algo,
@@ -8239,6 +8342,10 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
 			&cap_idx) == NULL)
 		return -ENOTSUP;
 
+	/* not supported with CPU crypto */
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return -ENOTSUP;
+
 	/* Create AEAD session */
 	retval = create_aead_session(ts_params->valid_devs[0],
 			tdata->algo,
@@ -8318,6 +8425,10 @@ test_authenticated_encryption_sessionless(
 				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))))
 		return -ENOTSUP;
 
+	/* not supported with CPU crypto */
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return -ENOTSUP;
+
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
 	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AEAD;
@@ -8414,6 +8525,10 @@ test_authenticated_decryption_sessionless(
 				RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))))
 		return -ENOTSUP;
 
+	/* not supported with CPU crypto */
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return -ENOTSUP;
+
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
 	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AEAD;
@@ -9736,7 +9851,11 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
 
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -9848,7 +9967,11 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
 
 	ut_params->op->sym->m_src = ut_params->ibuf;
 
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -10469,10 +10592,16 @@ test_authentication_verify_GMAC_fail_when_corruption(
 	else
 		tag_corruption(plaintext, reference->aad.len);
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) {
+		process_cpu_gmac_op(ts_params->valid_devs[0], ut_params->op);
+		TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
+			RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"authentication not failed");
+	} else {
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op);
-
-	TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
+		TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
+	}
 
 	return 0;
 }
@@ -10872,6 +11001,10 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
 			&cap_idx) == NULL)
 		return -ENOTSUP;
 
+	/* OOP not supported with CPU crypto */
+	if (oop && gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return -ENOTSUP;
+
 	/* Detailed check for the particular SGL support flag */
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 	if (!oop) {
@@ -11075,7 +11208,12 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
 		ut_params->op->sym->m_dst = ut_params->obuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (oop == IN_PLACE &&
+			gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+	else
+		TEST_ASSERT_NOT_NULL(
+			process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -13271,6 +13409,29 @@ test_cryptodev_aesni_gcm(void)
 	return unit_test_suite_runner(&cryptodev_testsuite);
 }
 
+static int
+test_cryptodev_cpu_aesni_gcm(void)
+{
+	int32_t rc;
+	enum rte_security_session_action_type at;
+
+	gbl_driver_id = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+	if (gbl_driver_id == -1) {
+		RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+				"CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+				"in config file to run this testsuite.\n");
+		return TEST_SKIPPED;
+	}
+
+	at = gbl_action_type;
+	gbl_action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO;
+	rc = unit_test_suite_runner(&cryptodev_testsuite);
+	gbl_action_type = at;
+	return rc;
+}
+
 static int
 test_cryptodev_null(void)
 {
@@ -13509,6 +13670,8 @@ REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
 REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_autotest, test_cryptodev_aesni_gcm);
+REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_gcm_autotest,
+	test_cryptodev_cpu_aesni_gcm);
 REGISTER_TEST_COMMAND(cryptodev_null_autotest, test_cryptodev_null);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests
  2020-02-07 14:28         ` [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests Marcin Smoczynski
@ 2020-02-07 17:04           ` Ananyev, Konstantin
  2020-02-13  9:14             ` Akhil Goyal
  0 siblings, 1 reply; 77+ messages in thread
From: Ananyev, Konstantin @ 2020-02-07 17:04 UTC (permalink / raw)
  To: Smoczynski, MarcinX, akhil.goyal, Doherty, Declan; +Cc: dev


> 
> This patch adds the ability to run unit tests in cpu crypto mode for
> the AESNI GCM cryptodev.
> 
> Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> ---

Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment
  2020-02-06 12:43           ` Ananyev, Konstantin
@ 2020-02-12 13:15             ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-12 13:15 UTC (permalink / raw)
  To: Ananyev, Konstantin, Smoczynski, MarcinX, thomas; +Cc: dev

> 
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
Applied to dpdk-next-crypto

Added Fixes line.

Thanks.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests
  2020-02-07 17:04           ` Ananyev, Konstantin
@ 2020-02-13  9:14             ` Akhil Goyal
  2020-02-13  9:29               ` Akhil Goyal
  0 siblings, 1 reply; 77+ messages in thread
From: Akhil Goyal @ 2020-02-13  9:14 UTC (permalink / raw)
  To: Ananyev, Konstantin, Smoczynski, MarcinX, Doherty, Declan; +Cc: dev

> 
> >
> > This patch adds the ability to run unit tests in cpu crypto mode for
> > the AESNI GCM cryptodev.
> >
> > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > ---
> 
> Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests
  2020-02-13  9:14             ` Akhil Goyal
@ 2020-02-13  9:29               ` Akhil Goyal
  0 siblings, 0 replies; 77+ messages in thread
From: Akhil Goyal @ 2020-02-13  9:29 UTC (permalink / raw)
  To: Akhil Goyal, Ananyev, Konstantin, Smoczynski, MarcinX, Doherty, Declan
  Cc: dev

> >
> > >
> > > This patch adds the ability to run unit tests in cpu crypto mode for
> > > the AESNI GCM cryptodev.
> > >
> > > Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
> > > ---
> >
> > Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Applied to dpdk-next-crypto

Thanks.


^ permalink raw reply	[flat|nested] 77+ messages in thread

Thread overview: 77+ messages
2020-01-15 18:28 [dpdk-dev] [PATCH v3 0/6] Introduce CPU crypto mode Marcin Smoczynski
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 1/6] cryptodev: introduce cpu crypto support API Marcin Smoczynski
2020-01-15 23:20   ` Ananyev, Konstantin
2020-01-16 10:11   ` Zhang, Roy Fan
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 2/6] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
2020-01-15 23:16   ` Ananyev, Konstantin
2020-01-16 10:00   ` Zhang, Roy Fan
2020-01-21 13:53   ` De Lara Guarch, Pablo
2020-01-21 14:29     ` Ananyev, Konstantin
2020-01-21 14:51       ` De Lara Guarch, Pablo
2020-01-21 15:23         ` Ananyev, Konstantin
2020-01-21 22:33           ` De Lara Guarch, Pablo
2020-01-22 12:43             ` Ananyev, Konstantin
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 3/6] security: add cpu crypto action type Marcin Smoczynski
2020-01-15 22:49   ` Ananyev, Konstantin
2020-01-16 10:01   ` Zhang, Roy Fan
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 4/6] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
2020-01-16 10:53   ` Zhang, Roy Fan
2020-01-16 10:53   ` Zhang, Roy Fan
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 5/6] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
2020-01-16 10:54   ` Zhang, Roy Fan
2020-01-15 18:28 ` [dpdk-dev] [PATCH v3 6/6] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
2020-01-16 10:54   ` Zhang, Roy Fan
2020-01-28  3:16 ` [dpdk-dev] [PATCH v4 0/8] Introduce CPU crypto mode Marcin Smoczynski
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
2020-01-28 10:49     ` De Lara Guarch, Pablo
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
2020-01-28  9:31     ` De Lara Guarch, Pablo
2020-01-28 10:51       ` De Lara Guarch, Pablo
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 4/8] security: add cpu crypto action type Marcin Smoczynski
2020-01-28 11:00     ` Ananyev, Konstantin
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
2020-01-28  3:16   ` [dpdk-dev] [PATCH v4 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
2020-01-28 14:22   ` [dpdk-dev] [PATCH v5 0/8] Introduce CPU crypto mode Marcin Smoczynski
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
2020-01-31 14:30       ` Akhil Goyal
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
2020-01-28 16:39       ` Ananyev, Konstantin
2020-01-31 14:33       ` Akhil Goyal
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests Marcin Smoczynski
2020-01-31 14:37       ` Akhil Goyal
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type Marcin Smoczynski
2020-01-31 14:26       ` Akhil Goyal
2020-02-04 10:36         ` Akhil Goyal
2020-02-04 10:43           ` Ananyev, Konstantin
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
2020-01-28 16:37       ` Ananyev, Konstantin
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
2020-01-28 14:22     ` [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation Marcin Smoczynski
2020-01-31 14:43       ` Akhil Goyal
2020-02-04 13:12     ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Marcin Smoczynski
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 1/8] cryptodev: introduce cpu crypto support API Marcin Smoczynski
2020-02-05 14:57         ` Akhil Goyal
2020-02-06  0:48         ` Thomas Monjalon
2020-02-06 12:36         ` [dpdk-dev] [PATCH] cryptodev: fix missing doxygen comment Marcin Smoczynski
2020-02-06 12:43           ` Ananyev, Konstantin
2020-02-12 13:15             ` Akhil Goyal
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 2/8] crypto/aesni_gcm: cpu crypto support Marcin Smoczynski
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 3/8] security: add cpu crypto action type Marcin Smoczynski
2020-02-05 14:58         ` Akhil Goyal
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 4/8] test/crypto: add cpu crypto mode to tests Marcin Smoczynski
2020-02-05 14:59         ` Akhil Goyal
2020-02-07 14:28         ` [dpdk-dev] [PATCH] test/crypto: add cpu crypto mode tests Marcin Smoczynski
2020-02-07 17:04           ` Ananyev, Konstantin
2020-02-13  9:14             ` Akhil Goyal
2020-02-13  9:29               ` Akhil Goyal
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 5/8] ipsec: introduce support for cpu crypto mode Marcin Smoczynski
2020-02-05 14:59         ` Akhil Goyal
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 6/8] examples/ipsec-secgw: cpu crypto support Marcin Smoczynski
2020-02-05 15:00         ` Akhil Goyal
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 7/8] examples/ipsec-secgw: cpu crypto testing Marcin Smoczynski
2020-02-04 13:12       ` [dpdk-dev] [PATCH v6 8/8] doc: add release notes for cpu crypto Marcin Smoczynski
2020-02-05 15:03       ` [dpdk-dev] [PATCH v6 0/8] Introduce CPU crypto mode Akhil Goyal
