* [dpdk-dev] [PATCH 1/4] kasumi: add new KASUMI PMD
2016-05-06 13:58 [dpdk-dev] [PATCH 0/4] Add new KASUMI SW PMD Pablo de Lara
From: Pablo de Lara @ 2016-05-06 13:58 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, Pablo de Lara
Added a new SW PMD which makes use of the libsso_kasumi software
library, implementing the wireless algorithms KASUMI F8 and F9.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") for the following
algorithms:
- RTE_CRYPTO_CIPHER_KASUMI_F8
- RTE_CRYPTO_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
config/common_base | 6 +
doc/guides/cryptodevs/overview.rst | 78 +--
doc/guides/rel_notes/release_16_07.rst | 5 +
drivers/crypto/Makefile | 1 +
drivers/crypto/kasumi/Makefile | 64 +++
drivers/crypto/kasumi/rte_kasumi_pmd.c | 666 +++++++++++++++++++++++
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 344 ++++++++++++
drivers/crypto/kasumi/rte_kasumi_pmd_private.h | 106 ++++
drivers/crypto/kasumi/rte_pmd_kasumi_version.map | 3 +
lib/librte_cryptodev/rte_crypto_sym.h | 6 +-
lib/librte_cryptodev/rte_cryptodev.h | 3 +
mk/rte.app.mk | 4 +
scripts/test-build.sh | 4 +
13 files changed, 1251 insertions(+), 39 deletions(-)
create mode 100644 drivers/crypto/kasumi/Makefile
create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd.c
create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
create mode 100644 drivers/crypto/kasumi/rte_kasumi_pmd_private.h
create mode 100644 drivers/crypto/kasumi/rte_pmd_kasumi_version.map
diff --git a/config/common_base b/config/common_base
index 35d38d9..5cfec8f 100644
--- a/config/common_base
+++ b/config/common_base
@@ -358,6 +358,12 @@ CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n
#
+# Compile PMD for KASUMI device
+#
+CONFIG_RTE_LIBRTE_PMD_KASUMI=n
+CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
+
+#
# Compile PMD for NULL Crypto device
#
CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index 4a84146..e76844f 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -33,62 +33,64 @@ Crypto Device Supported Functionality Matrices
Supported Feature Flags
.. csv-table::
- :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+ :header: "Feature Flags", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
:stub-columns: 1
- "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,,,
- "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,
- "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x
- "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x
- "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x
- "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,
- "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,
- "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,
+ "RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO",x,x,,,,x
+ "RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO",,,,,,
+ "RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING",x,x,x,x,x,x
+ "RTE_CRYPTODEV_FF_CPU_SSE",,,x,x,x,x
+ "RTE_CRYPTODEV_FF_CPU_AVX",,,x,x,x,x
+ "RTE_CRYPTODEV_FF_CPU_AVX2",,,x,x,,
+ "RTE_CRYPTODEV_FF_CPU_AESNI",,,x,x,,
+ "RTE_CRYPTODEV_FF_HW_ACCELERATED",x,,,,,
Supported Cipher Algorithms
.. csv-table::
- :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+ :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
:stub-columns: 1
- "NULL",,x,,,
- "AES_CBC_128",x,,x,,
- "AES_CBC_192",x,,x,,
- "AES_CBC_256",x,,x,,
- "AES_CTR_128",x,,x,,
- "AES_CTR_192",x,,x,,
- "AES_CTR_256",x,,x,,
- "SNOW3G_UEA2",x,,,,x
+ "NULL",,x,,,,
+ "AES_CBC_128",x,,x,,,
+ "AES_CBC_192",x,,x,,,
+ "AES_CBC_256",x,,x,,,
+ "AES_CTR_128",x,,x,,,
+ "AES_CTR_192",x,,x,,,
+ "AES_CTR_256",x,,x,,,
+ "SNOW3G_UEA2",x,,,,x,
+ "KASUMI_F8",,,,,,x
Supported Authentication Algorithms
.. csv-table::
- :header: "Cipher Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+ :header: "Authentication Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
:stub-columns: 1
- "NONE",,x,,,
- "MD5",,,,,
- "MD5_HMAC",,,x,,
- "SHA1",,,,,
- "SHA1_HMAC",x,,x,,
- "SHA224",,,,,
- "SHA224_HMAC",,,x,,
- "SHA256",,,,,
- "SHA256_HMAC",x,,x,,
- "SHA384",,,,,
- "SHA384_HMAC",,,x,,
- "SHA512",,,,,
- "SHA512_HMAC",x,,x,,
- "AES_XCBC",x,,x,,
- "SNOW3G_UIA2",x,,,,x
+ "NONE",,x,,,,
+ "MD5",,,,,,
+ "MD5_HMAC",,,x,,,
+ "SHA1",,,,,,
+ "SHA1_HMAC",x,,x,,,
+ "SHA224",,,,,,
+ "SHA224_HMAC",,,x,,,
+ "SHA256",,,,,,
+ "SHA256_HMAC",x,,x,,,
+ "SHA384",,,,,,
+ "SHA384_HMAC",,,x,,,
+ "SHA512",,,,,,
+ "SHA512_HMAC",x,,x,,,
+ "AES_XCBC",x,,x,,,
+ "SNOW3G_UIA2",x,,,,x,
+ "KASUMI_F9",,,,,,x
Supported AEAD Algorithms
.. csv-table::
- :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g"
+ :header: "AEAD Algorithms", "qat", "null", "aesni_mb", "aesni_gcm", "snow3g", "kasumi"
:stub-columns: 1
- "AES_GCM_128",x,,x,,
- "AES_GCM_192",x,,,,
- "AES_GCM_256",x,,,,
+ "AES_GCM_128",x,,x,,,
+ "AES_GCM_192",x,,,,,
+ "AES_GCM_256",x,,,,,
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 0384abf..8469463 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -44,6 +44,11 @@ This section should contain new features added in this release. Sample format:
Enabled support for the AES CTR algorithm for Intel QuickAssist devices.
Provided support for algorithm-chaining operations.
+* **Added KASUMI SW PMD.**
+
+ A new Crypto PMD has been added, which provides KASUMI F8 (UEA1) ciphering
+ and KASUMI F9 (UIA1) hashing.
+
Resolved Issues
---------------
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index b420538..dc4ef7f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -35,6 +35,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm
DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/kasumi/Makefile b/drivers/crypto/kasumi/Makefile
new file mode 100644
index 0000000..490ddd8
--- /dev/null
+++ b/drivers/crypto/kasumi/Makefile
@@ -0,0 +1,64 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(LIBSSO_KASUMI_PATH),)
+$(error "Please define LIBSSO_KASUMI_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_kasumi.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_kasumi_version.map
+
+# external library include paths
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
+CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
new file mode 100644
index 0000000..ebf5d41
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -0,0 +1,666 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_kvargs.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+#define KASUMI_KEY_LENGTH 16
+#define KASUMI_IV_LENGTH 8
+#define KASUMI_DIGEST_LENGTH 4
+#define KASUMI_MAX_BURST 4
+#define BYTE_LEN 8
+
+/**
+ * Global static counter used to create a unique name for each KASUMI
+ * crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+ int ret;
+
+ if (name == NULL)
+ return -EINVAL;
+
+ ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_KASUMI_PMD,
+ unique_name_id++);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+/** Get xform chain order. */
+static enum kasumi_operation
+kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
+{
+ if (xform == NULL)
+ return KASUMI_OP_NOT_SUPPORTED;
+
+ if (xform->next)
+ if (xform->next->next != NULL)
+ return KASUMI_OP_NOT_SUPPORTED;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ return KASUMI_OP_ONLY_AUTH;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return KASUMI_OP_AUTH_CIPHER;
+ else
+ return KASUMI_OP_NOT_SUPPORTED;
+ }
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ return KASUMI_OP_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return KASUMI_OP_CIPHER_AUTH;
+ else
+ return KASUMI_OP_NOT_SUPPORTED;
+ }
+
+ return KASUMI_OP_NOT_SUPPORTED;
+}
+
+
+/** Parse crypto xform chain and set private session parameters. */
+int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+ const struct rte_crypto_sym_xform *xform)
+{
+ const struct rte_crypto_sym_xform *auth_xform = NULL;
+ const struct rte_crypto_sym_xform *cipher_xform = NULL;
+ int mode;
+
+ /* Select Crypto operation - hash then cipher / cipher then hash */
+ mode = kasumi_get_mode(xform);
+
+ switch (mode) {
+ case KASUMI_OP_CIPHER_AUTH:
+ auth_xform = xform->next;
+ /* Fall-through */
+ case KASUMI_OP_ONLY_CIPHER:
+ cipher_xform = xform;
+ break;
+ case KASUMI_OP_AUTH_CIPHER:
+ cipher_xform = xform->next;
+ /* Fall-through */
+ case KASUMI_OP_ONLY_AUTH:
+ auth_xform = xform;
+ }
+
+ if (mode == KASUMI_OP_NOT_SUPPORTED) {
+ KASUMI_LOG_ERR("Unsupported operation chain order parameter");
+ return -EINVAL;
+ }
+
+ if (cipher_xform) {
+ /* Only KASUMI F8 supported */
+ if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
+ return -EINVAL;
+ /* Initialize key */
+ sso_kasumi_init_f8_key_sched(cipher_xform->cipher.key.data,
+ &sess->pKeySched_cipher);
+ }
+
+ if (auth_xform) {
+ /* Only KASUMI F9 supported */
+ if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
+ return -EINVAL;
+ sess->auth_op = auth_xform->auth.op;
+ /* Initialize key */
+ sso_kasumi_init_f9_key_sched(auth_xform->auth.key.data,
+ &sess->pKeySched_hash);
+ }
+
+
+ sess->op = mode;
+
+ return 0;
+}
+
+/** Get KASUMI session. */
+static struct kasumi_session *
+kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
+{
+ struct kasumi_session *sess;
+
+ if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (unlikely(op->sym->session->dev_type !=
+ RTE_CRYPTODEV_KASUMI_PMD))
+ return NULL;
+
+ sess = (struct kasumi_session *)op->sym->session->_private;
+ } else {
+ struct rte_cryptodev_session *c_sess = NULL;
+
+ if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+ return NULL;
+
+ sess = (struct kasumi_session *)c_sess->_private;
+
+ if (unlikely(kasumi_set_session_parameters(sess,
+ op->sym->xform) != 0))
+ return NULL;
+ }
+
+ return sess;
+}
+
+/** Encrypt/decrypt mbufs with same cipher key. */
+static uint8_t
+process_kasumi_cipher_op(struct rte_crypto_op **ops,
+ struct kasumi_session *session,
+ uint8_t num_ops)
+{
+ unsigned i;
+ uint8_t processed_ops = 0;
+ uint8_t *src[KASUMI_MAX_BURST], *dst[KASUMI_MAX_BURST];
+ uint64_t IV[KASUMI_MAX_BURST];
+ uint32_t num_bytes[KASUMI_MAX_BURST];
+
+ for (i = 0; i < num_ops; i++) {
+ /* Sanity checks. */
+ if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ KASUMI_LOG_ERR("iv");
+ break;
+ }
+
+ src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+ (ops[i]->sym->cipher.data.offset >> 3);
+ dst[i] = ops[i]->sym->m_dst ?
+ rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *) +
+ (ops[i]->sym->cipher.data.offset >> 3) :
+ rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+ (ops[i]->sym->cipher.data.offset >> 3);
+ IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+ num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
+
+ processed_ops++;
+ }
+
+ if (processed_ops != 0)
+ sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
+ src, dst, num_bytes, processed_ops);
+
+ return processed_ops;
+}
+
+/** Encrypt/decrypt mbuf (bit level function). */
+static uint8_t
+process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
+ struct kasumi_session *session)
+{
+ uint8_t *src, *dst;
+ uint64_t IV;
+ uint32_t length_in_bits, offset_in_bits;
+
+ /* Sanity checks. */
+ if (op->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ KASUMI_LOG_ERR("iv");
+ return 0;
+ }
+
+ offset_in_bits = op->sym->cipher.data.offset;
+ src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+ dst = op->sym->m_dst ?
+ rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *) :
+ rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+ IV = *((uint64_t *)(op->sym->cipher.iv.data));
+ length_in_bits = op->sym->cipher.data.length;
+
+ sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+ src, dst, length_in_bits, offset_in_bits);
+
+ return 1;
+}
+
+/** Generate/verify hash from mbufs with same hash key. */
+static int
+process_kasumi_hash_op(struct rte_crypto_op **ops,
+ struct kasumi_session *session,
+ uint8_t num_ops)
+{
+ unsigned i;
+ uint8_t processed_ops = 0;
+ uint8_t *src, *dst;
+ uint32_t length_in_bits;
+ uint32_t num_bytes;
+ uint32_t shift_bits;
+ uint64_t IV;
+ uint8_t direction;
+
+ for (i = 0; i < num_ops; i++) {
+ if (ops[i]->sym->auth.aad.length != KASUMI_IV_LENGTH) {
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ KASUMI_LOG_ERR("aad");
+ break;
+ }
+
+ if (ops[i]->sym->auth.digest.length != KASUMI_DIGEST_LENGTH) {
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ KASUMI_LOG_ERR("digest");
+ break;
+ }
+
+ /* Data must be byte aligned */
+ if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ KASUMI_LOG_ERR("offset");
+ break;
+ }
+
+ length_in_bits = ops[i]->sym->auth.data.length;
+
+ src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
+ (ops[i]->sym->auth.data.offset >> 3);
+ /* IV from AAD */
+ IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
+ /* Direction from next bit after end of message */
+ num_bytes = (length_in_bits >> 3) + 1;
+ shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
+ direction = (src[num_bytes - 1] >> shift_bits) & 0x01;
+
+ if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
+ ops[i]->sym->auth.digest.length);
+
+ sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+ IV, src,
+ length_in_bits, dst, direction);
+ /* Verify digest. */
+ if (memcmp(dst, ops[i]->sym->auth.digest.data,
+ ops[i]->sym->auth.digest.length) != 0)
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+ /* Trim area used for digest from mbuf. */
+ rte_pktmbuf_trim(ops[i]->sym->m_src,
+ ops[i]->sym->auth.digest.length);
+ } else {
+ dst = ops[i]->sym->auth.digest.data;
+
+ sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
+ IV, src,
+ length_in_bits, dst, direction);
+ }
+ processed_ops++;
+ }
+
+ return processed_ops;
+}
+
+/** Process a batch of crypto ops which share the same session. */
+static int
+process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
+ struct kasumi_qp *qp, uint8_t num_ops)
+{
+ unsigned i;
+ unsigned processed_ops;
+
+ switch (session->op) {
+ case KASUMI_OP_ONLY_CIPHER:
+ processed_ops = process_kasumi_cipher_op(ops,
+ session, num_ops);
+ break;
+ case KASUMI_OP_ONLY_AUTH:
+ processed_ops = process_kasumi_hash_op(ops, session,
+ num_ops);
+ break;
+ case KASUMI_OP_CIPHER_AUTH:
+ processed_ops = process_kasumi_cipher_op(ops, session,
+ num_ops);
+ process_kasumi_hash_op(ops, session, processed_ops);
+ break;
+ case KASUMI_OP_AUTH_CIPHER:
+ processed_ops = process_kasumi_hash_op(ops, session,
+ num_ops);
+ process_kasumi_cipher_op(ops, session, processed_ops);
+ break;
+ default:
+ /* Operation not supported. */
+ processed_ops = 0;
+ }
+
+ for (i = 0; i < num_ops; i++) {
+ /*
+ * If there was no error/authentication failure,
+ * change status to successful.
+ */
+ if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ /* Free session if a session-less crypto op. */
+ if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
+ ops[i]->sym->session = NULL;
+ }
+ }
+
+ return processed_ops;
+}
+
+/** Process a crypto op with length/offset in bits. */
+static int
+process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
+ struct kasumi_qp *qp)
+{
+ unsigned processed_op;
+
+ switch (session->op) {
+ case KASUMI_OP_ONLY_CIPHER:
+ processed_op = process_kasumi_cipher_op_bit(op,
+ session);
+ break;
+ case KASUMI_OP_ONLY_AUTH:
+ processed_op = process_kasumi_hash_op(&op, session, 1);
+ break;
+ case KASUMI_OP_CIPHER_AUTH:
+ processed_op = process_kasumi_cipher_op_bit(op, session);
+ if (processed_op == 1)
+ process_kasumi_hash_op(&op, session, 1);
+ break;
+ case KASUMI_OP_AUTH_CIPHER:
+ processed_op = process_kasumi_hash_op(&op, session, 1);
+ if (processed_op == 1)
+ process_kasumi_cipher_op_bit(op, session);
+ break;
+ default:
+ /* Operation not supported. */
+ processed_op = 0;
+ }
+
+ /*
+ * If there was no error/authentication failure,
+ * change status to successful.
+ */
+ if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+ /* Free session if a session-less crypto op. */
+ if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ rte_mempool_put(qp->sess_mp, op->sym->session);
+ op->sym->session = NULL;
+ }
+
+ return processed_op;
+}
+
+static uint16_t
+kasumi_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ struct rte_crypto_op *c_ops[KASUMI_MAX_BURST];
+ struct rte_crypto_op *curr_c_op;
+
+ struct kasumi_session *prev_sess = NULL, *curr_sess = NULL;
+ struct kasumi_qp *qp = queue_pair;
+ unsigned i, n;
+ uint8_t burst_size = 0;
+ uint16_t enqueued_ops = 0;
+ uint8_t processed_ops;
+
+ for (i = 0; i < nb_ops; i++) {
+ curr_c_op = ops[i];
+
+ /* Set status as enqueued (not processed yet) by default. */
+ curr_c_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ curr_sess = kasumi_get_session(qp, curr_c_op);
+ if (unlikely(curr_sess == NULL ||
+ curr_sess->op == KASUMI_OP_NOT_SUPPORTED)) {
+ curr_c_op->status =
+ RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+ qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+
+ /* If length/offset is at bit-level, process this buffer alone. */
+ if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
+ || ((curr_c_op->sym->cipher.data.offset
+ % BYTE_LEN) != 0)) {
+ /* Process the ops of the previous session. */
+ if (prev_sess != NULL) {
+ processed_ops = process_ops(c_ops,
+ prev_sess, qp, burst_size);
+ n = rte_ring_enqueue_burst(qp->processed_ops,
+ (void **)c_ops,
+ processed_ops);
+ qp->qp_stats.enqueued_count += n;
+ enqueued_ops += n;
+ if (n < burst_size) {
+ qp->qp_stats.enqueue_err_count +=
+ nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+ burst_size = 0;
+
+ prev_sess = NULL;
+ }
+
+ processed_ops = process_op_bit(curr_c_op, curr_sess, qp);
+ n = rte_ring_enqueue_burst(qp->processed_ops,
+ (void **)&curr_c_op, processed_ops);
+ qp->qp_stats.enqueued_count += n;
+ enqueued_ops += n;
+ if (n != 1) {
+ qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+ continue;
+ }
+
+ /* Batch ops that share the same session. */
+ if (prev_sess == NULL) {
+ prev_sess = curr_sess;
+ c_ops[burst_size++] = curr_c_op;
+ } else if (curr_sess == prev_sess) {
+ c_ops[burst_size++] = curr_c_op;
+ /*
+ * When there are enough ops to process in a batch,
+ * process them, and start a new batch.
+ */
+ if (burst_size == KASUMI_MAX_BURST) {
+ processed_ops = process_ops(c_ops,
+ prev_sess, qp, burst_size);
+ n = rte_ring_enqueue_burst(qp->processed_ops,
+ (void **)c_ops,
+ processed_ops);
+ qp->qp_stats.enqueued_count += n;
+ enqueued_ops += n;
+ if (n < burst_size) {
+ qp->qp_stats.enqueue_err_count +=
+ nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+ burst_size = 0;
+
+ prev_sess = NULL;
+ }
+ } else {
+ /*
+ * Different session, process the ops
+ * of the previous session.
+ */
+ processed_ops = process_ops(c_ops,
+ prev_sess, qp, burst_size);
+ n = rte_ring_enqueue_burst(qp->processed_ops,
+ (void **)c_ops,
+ processed_ops);
+ qp->qp_stats.enqueued_count += n;
+ enqueued_ops += n;
+ if (n < burst_size) {
+ qp->qp_stats.enqueue_err_count +=
+ nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+ burst_size = 0;
+
+ prev_sess = curr_sess;
+ c_ops[burst_size++] = curr_c_op;
+ }
+ }
+
+ if (burst_size != 0) {
+ /* Process the crypto ops of the last session. */
+ processed_ops = process_ops(c_ops,
+ prev_sess, qp, burst_size);
+ n = rte_ring_enqueue_burst(qp->processed_ops,
+ (void **)c_ops,
+ processed_ops);
+ qp->qp_stats.enqueued_count += n;
+ enqueued_ops += n;
+ if (n < burst_size) {
+ qp->qp_stats.enqueue_err_count +=
+ nb_ops - enqueued_ops;
+ return enqueued_ops;
+ }
+ }
+
+ return enqueued_ops;
+}
+
+static uint16_t
+kasumi_pmd_dequeue_burst(void *queue_pair,
+ struct rte_crypto_op **c_ops, uint16_t nb_ops)
+{
+ struct kasumi_qp *qp = queue_pair;
+
+ unsigned nb_dequeued;
+
+ nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops,
+ (void **)c_ops, nb_ops);
+ qp->qp_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+static int cryptodev_kasumi_uninit(const char *name);
+
+static int
+cryptodev_kasumi_create(const char *name,
+ struct rte_crypto_vdev_init_params *init_params)
+{
+ struct rte_cryptodev *dev;
+ char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct kasumi_private *internals;
+
+ /* Create a unique device name. */
+ if (create_unique_device_name(crypto_dev_name,
+ RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+ KASUMI_LOG_ERR("failed to create unique cryptodev name");
+ return -EINVAL;
+ }
+
+ dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+ sizeof(struct kasumi_private), init_params->socket_id);
+ if (dev == NULL) {
+ KASUMI_LOG_ERR("failed to create cryptodev vdev");
+ goto init_error;
+ }
+
+ dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ dev->dev_ops = rte_kasumi_pmd_ops;
+
+ /* Register RX/TX burst functions for data path. */
+ dev->dequeue_burst = kasumi_pmd_dequeue_burst;
+ dev->enqueue_burst = kasumi_pmd_enqueue_burst;
+
+ dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+ internals = dev->data->dev_private;
+
+ internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+ internals->max_nb_sessions = init_params->max_nb_sessions;
+
+ return 0;
+init_error:
+ KASUMI_LOG_ERR("driver %s: cryptodev_kasumi_create failed", name);
+
+ cryptodev_kasumi_uninit(crypto_dev_name);
+ return -EFAULT;
+}
+
+static int
+cryptodev_kasumi_init(const char *name,
+ const char *input_args)
+{
+ struct rte_crypto_vdev_init_params init_params = {
+ RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+ RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+ rte_socket_id()
+ };
+
+ rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+ RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+ init_params.socket_id);
+ RTE_LOG(INFO, PMD, " Max number of queue pairs = %d\n",
+ init_params.max_nb_queue_pairs);
+ RTE_LOG(INFO, PMD, " Max number of sessions = %d\n",
+ init_params.max_nb_sessions);
+
+ return cryptodev_kasumi_create(name, &init_params);
+}
+
+static int
+cryptodev_kasumi_uninit(const char *name)
+{
+ if (name == NULL)
+ return -EINVAL;
+
+ RTE_LOG(INFO, PMD, "Closing KASUMI crypto device %s"
+ " on numa socket %u\n",
+ name, rte_socket_id());
+
+ return 0;
+}
+
+static struct rte_driver cryptodev_kasumi_pmd_drv = {
+ .name = CRYPTODEV_NAME_KASUMI_PMD,
+ .type = PMD_VDEV,
+ .init = cryptodev_kasumi_init,
+ .uninit = cryptodev_kasumi_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_kasumi_pmd_drv);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
new file mode 100644
index 0000000..e9f143c
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -0,0 +1,344 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_kasumi_pmd_private.h"
+
+static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
+ { /* KASUMI (F9) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_KASUMI_F9,
+ .block_size = 8,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 4,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 9,
+ .max = 9,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* KASUMI (F8) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
+ .block_size = 8,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+kasumi_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+/** Start device */
+static int
+kasumi_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+/** Stop device */
+static void
+kasumi_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+kasumi_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+
+/** Get device statistics */
+static void
+kasumi_pmd_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ int qp_id;
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+ stats->enqueued_count += qp->qp_stats.enqueued_count;
+ stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+ stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+ stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+ }
+}
+
+/** Reset device statistics */
+static void
+kasumi_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+ int qp_id;
+
+ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+ struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+
+ memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+ }
+}
+
+
+/** Get device info */
+static void
+kasumi_pmd_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct kasumi_private *internals = dev->data->dev_private;
+
+ if (dev_info != NULL) {
+ dev_info->dev_type = dev->dev_type;
+ dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+ dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->capabilities = kasumi_pmd_capabilities;
+ }
+}
+
+/** Release queue pair */
+static int
+kasumi_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+ struct kasumi_qp *qp = dev->data->queue_pairs[qp_id];
+ if (qp != NULL) {
+ rte_ring_free(qp->processed_ops);
+ rte_free(qp);
+ dev->data->queue_pairs[qp_id] = NULL;
+ }
+ return 0;
+}
+
+/** Set a unique name for the queue pair, based on the device id and qp id */
+static int
+kasumi_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+ struct kasumi_qp *qp)
+{
+ unsigned n = snprintf(qp->name, sizeof(qp->name),
+ "kasumi_pmd_%u_qp_%u",
+ dev->data->dev_id, qp->id);
+
+ if (n >= sizeof(qp->name))
+ return -1;
+
+ return 0;
+}
+
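The unique-name helper above relies on snprintf's return value, which is the length the formatted string would have had, to detect truncation. A standalone sketch of that check (function and buffer names here are illustrative, not the DPDK API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* snprintf returns the would-be length excluding the terminating NUL, so
 * a return value >= the buffer size (or a negative value) means the name
 * did not fit. */
static int make_qp_name(char *buf, size_t buflen,
                        unsigned dev_id, unsigned qp_id)
{
    int n = snprintf(buf, buflen, "kasumi_pmd_%u_qp_%u", dev_id, qp_id);

    if (n < 0 || (size_t)n >= buflen)
        return -1; /* error or truncated */
    return 0;
}
```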
+/** Create a ring to place processed ops on */
+static struct rte_ring *
+kasumi_pmd_qp_create_processed_ops_ring(struct kasumi_qp *qp,
+ unsigned ring_size, int socket_id)
+{
+ struct rte_ring *r;
+
+ r = rte_ring_lookup(qp->name);
+ if (r) {
+ if (r->prod.size >= ring_size) {
+ KASUMI_LOG_INFO("Reusing existing ring %s"
+ " for processed packets",
+ qp->name);
+ return r;
+ }
+
+ KASUMI_LOG_ERR("Unable to reuse existing ring %s"
+ " for processed packets",
+ qp->name);
+ return NULL;
+ }
+
+ return rte_ring_create(qp->name, ring_size, socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ struct kasumi_qp *qp = NULL;
+
+ /* Free memory prior to re-allocation if needed. */
+ if (dev->data->queue_pairs[qp_id] != NULL)
+ kasumi_pmd_qp_release(dev, qp_id);
+
+ /* Allocate the queue pair data structure. */
+ qp = rte_zmalloc_socket("KASUMI PMD Queue Pair", sizeof(*qp),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (qp == NULL)
+ return (-ENOMEM);
+
+ qp->id = qp_id;
+ dev->data->queue_pairs[qp_id] = qp;
+
+ if (kasumi_pmd_qp_set_unique_name(dev, qp))
+ goto qp_setup_cleanup;
+
+ qp->processed_ops = kasumi_pmd_qp_create_processed_ops_ring(qp,
+ qp_conf->nb_descriptors, socket_id);
+ if (qp->processed_ops == NULL)
+ goto qp_setup_cleanup;
+
+ qp->sess_mp = dev->data->session_pool;
+
+ memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+ return 0;
+
+qp_setup_cleanup:
+ if (qp)
+ rte_free(qp);
+
+ return -1;
+}
+
+/** Start queue pair */
+static int
+kasumi_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused uint16_t queue_pair_id)
+{
+ return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+kasumi_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused uint16_t queue_pair_id)
+{
+ return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+kasumi_pmd_qp_count(struct rte_cryptodev *dev)
+{
+ return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the KASUMI session structure */
+static unsigned
+kasumi_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct kasumi_session);
+}
+
+/** Configure a KASUMI session from a crypto xform chain */
+static void *
+kasumi_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+ struct rte_crypto_sym_xform *xform, void *sess)
+{
+ if (unlikely(sess == NULL)) {
+ KASUMI_LOG_ERR("invalid session struct");
+ return NULL;
+ }
+
+ if (kasumi_set_session_parameters(sess, xform) != 0) {
+ KASUMI_LOG_ERR("failed to configure session parameters");
+ return NULL;
+ }
+
+ return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+kasumi_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+ /*
+ * Currently resetting the whole data structure; it is worth investigating
+ * whether a more selective reset of the key would be more performant
+ */
+ if (sess)
+ memset(sess, 0, sizeof(struct kasumi_session));
+}
+
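The blanket memset in session_clear is the conservative way to guarantee no key material survives in freed memory. A toy version of the idea, with an assumed stand-in for the real session struct:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the PMD session; the real struct holds two 16-byte-aligned
 * key schedules. Wiping the whole struct, as the driver does, avoids
 * having to track which fields hold sensitive data. */
struct session_sketch {
    unsigned char cipher_key[16];
    unsigned char hash_key[16];
    int op;
};

static void session_clear(struct session_sketch *sess)
{
    if (sess)
        memset(sess, 0, sizeof(*sess));
}
```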
+struct rte_cryptodev_ops kasumi_pmd_ops = {
+ .dev_configure = kasumi_pmd_config,
+ .dev_start = kasumi_pmd_start,
+ .dev_stop = kasumi_pmd_stop,
+ .dev_close = kasumi_pmd_close,
+
+ .stats_get = kasumi_pmd_stats_get,
+ .stats_reset = kasumi_pmd_stats_reset,
+
+ .dev_infos_get = kasumi_pmd_info_get,
+
+ .queue_pair_setup = kasumi_pmd_qp_setup,
+ .queue_pair_release = kasumi_pmd_qp_release,
+ .queue_pair_start = kasumi_pmd_qp_start,
+ .queue_pair_stop = kasumi_pmd_qp_stop,
+ .queue_pair_count = kasumi_pmd_qp_count,
+
+ .session_get_size = kasumi_pmd_session_get_size,
+ .session_configure = kasumi_pmd_session_configure,
+ .session_clear = kasumi_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_kasumi_pmd_ops = &kasumi_pmd_ops;
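The kasumi_pmd_ops table exported above is a classic function-pointer vtable: the cryptodev framework never calls the PMD's static functions directly, only through the struct pointer. A self-contained sketch of the pattern (struct and function names are invented for illustration):

```c
#include <assert.h>

/* Minimal vtable: the "framework" holds only a pointer to this struct
 * and dispatches through it, as rte_cryptodev does with dev_ops. */
struct dev { int started; };

struct dev_ops {
    int  (*configure)(struct dev *d);
    int  (*start)(struct dev *d);
    void (*stop)(struct dev *d);
};

static int my_configure(struct dev *d) { (void)d; return 0; }
static int my_start(struct dev *d) { d->started = 1; return 0; }
static void my_stop(struct dev *d) { d->started = 0; }

static const struct dev_ops my_ops = {
    .configure = my_configure,
    .start = my_start,
    .stop = my_stop,
};
```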
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
new file mode 100644
index 0000000..3f0ca00
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -0,0 +1,106 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_KASUMI_PMD_PRIVATE_H_
+#define _RTE_KASUMI_PMD_PRIVATE_H_
+
+#include <sso_kasumi.h>
+
+#define KASUMI_LOG_ERR(fmt, args...) \
+ RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ CRYPTODEV_NAME_KASUMI_PMD, \
+ __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_KASUMI_DEBUG
+#define KASUMI_LOG_INFO(fmt, args...) \
+ RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ CRYPTODEV_NAME_KASUMI_PMD, \
+ __func__, __LINE__, ## args)
+
+#define KASUMI_LOG_DBG(fmt, args...) \
+ RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+ CRYPTODEV_NAME_KASUMI_PMD, \
+ __func__, __LINE__, ## args)
+#else
+#define KASUMI_LOG_INFO(fmt, args...)
+#define KASUMI_LOG_DBG(fmt, args...)
+#endif
+
+/** private data structure for each virtual KASUMI device */
+struct kasumi_private {
+ unsigned max_nb_queue_pairs;
+ /**< Max number of queue pairs supported by device */
+ unsigned max_nb_sessions;
+ /**< Max number of sessions supported by device */
+};
+
+/** KASUMI buffer queue pair */
+struct kasumi_qp {
+ uint16_t id;
+ /**< Queue Pair Identifier */
+ char name[RTE_CRYPTODEV_NAME_LEN];
+ /**< Unique Queue Pair Name */
+ struct rte_ring *processed_ops;
+ /**< Ring for placing processed ops */
+ struct rte_mempool *sess_mp;
+ /**< Session Mempool */
+ struct rte_cryptodev_stats qp_stats;
+ /**< Queue pair statistics */
+} __rte_cache_aligned;
+
+enum kasumi_operation {
+ KASUMI_OP_ONLY_CIPHER,
+ KASUMI_OP_ONLY_AUTH,
+ KASUMI_OP_CIPHER_AUTH,
+ KASUMI_OP_AUTH_CIPHER,
+ KASUMI_OP_NOT_SUPPORTED
+};
+
+/** KASUMI private session structure */
+struct kasumi_session {
+ /* Keys have to be 16-byte aligned */
+ sso_kasumi_key_sched_t pKeySched_cipher;
+ sso_kasumi_key_sched_t pKeySched_hash;
+ enum kasumi_operation op;
+ enum rte_crypto_auth_operation auth_op;
+} __rte_cache_aligned;
+
+
+extern int
+kasumi_set_session_parameters(struct kasumi_session *sess,
+ const struct rte_crypto_sym_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_kasumi_pmd_ops;
+
+#endif /* _RTE_KASUMI_PMD_PRIVATE_H_ */
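The KASUMI_LOG_* macros above follow the usual two-tier pattern: errors are always compiled in, while info/debug expand to nothing unless the debug config flag is set, so release builds pay no cost. A testable sketch of the same pattern that writes into a buffer instead of RTE_LOG (macro and buffer names are invented):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static char log_buf[256];

/* Error logging always expands to a real call... */
#define LOG_ERR(fmt, ...) \
    snprintf(log_buf, sizeof(log_buf), "[example_pmd] %s(): " fmt, \
             __func__, ##__VA_ARGS__)

/* ...while the debug macro compiles to a no-op unless EXAMPLE_DEBUG
 * is defined, mirroring RTE_LIBRTE_KASUMI_DEBUG. */
#ifdef EXAMPLE_DEBUG
#define LOG_DBG(fmt, ...) LOG_ERR(fmt, ##__VA_ARGS__)
#else
#define LOG_DBG(fmt, ...) do { } while (0)
#endif

static void fail_path(void)
{
    LOG_ERR("invalid session struct");
}

static void hot_path(void)
{
    LOG_DBG("per-packet detail"); /* compiled out in release builds */
}
```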
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
new file mode 100644
index 0000000..8ffeca9
--- /dev/null
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -0,0 +1,3 @@
+DPDK_16.07 {
+ local: *;
+};
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 4ae9b9e..d9bd821 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -388,7 +388,8 @@ struct rte_crypto_sym_op {
* this location.
*
* @note
- * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ * For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2
+ * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
* this field should be in bits.
*/
@@ -413,6 +414,7 @@ struct rte_crypto_sym_op {
*
* @note
* For Snow3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2
+ * and KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8,
* this field should be in bits.
*/
} data; /**< Data offsets and length for ciphering */
@@ -485,6 +487,7 @@ struct rte_crypto_sym_op {
*
* @note
* For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+ * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
* this field should be in bits.
*/
@@ -504,6 +507,7 @@ struct rte_crypto_sym_op {
*
* @note
* For Snow3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2
+ * and KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9,
* this field should be in bits.
*/
} data; /**< Data offsets and length for authentication */
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d47f1e8..27cf8ef 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -59,12 +59,15 @@ extern "C" {
/**< Intel QAT Symmetric Crypto PMD device name */
#define CRYPTODEV_NAME_SNOW3G_PMD ("cryptodev_snow3g_pmd")
/**< SNOW 3G PMD device name */
+#define CRYPTODEV_NAME_KASUMI_PMD ("cryptodev_kasumi_pmd")
+/**< KASUMI PMD device name */
/** Crypto device type */
enum rte_cryptodev_type {
RTE_CRYPTODEV_NULL_PMD = 1, /**< Null crypto PMD */
RTE_CRYPTODEV_AESNI_GCM_PMD, /**< AES-NI GCM PMD */
RTE_CRYPTODEV_AESNI_MB_PMD, /**< AES-NI multi buffer PMD */
+ RTE_CRYPTODEV_KASUMI_PMD, /**< KASUMI PMD */
RTE_CRYPTODEV_QAT_SYM_PMD, /**< QAT PMD Symmetric Crypto */
RTE_CRYPTODEV_SNOW3G_PMD, /**< SNOW 3G PMD */
};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c66e491..65352a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -165,6 +165,10 @@ endif
# SNOW3G PMD is dependent on the LIBSSO library
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lrte_pmd_snow3g
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -L$(LIBSSO_PATH)/build -lsso
+
+# KASUMI PMD is dependent on the LIBSSO-KASUMI library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lrte_pmd_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
endif # CONFIG_RTE_LIBRTE_CRYPTODEV
ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
diff --git a/scripts/test-build.sh b/scripts/test-build.sh
index 48539c1..55e3e58 100755
--- a/scripts/test-build.sh
+++ b/scripts/test-build.sh
@@ -45,6 +45,7 @@ default_path=$PATH
# - DPDK_MAKE_JOBS (int)
# - DPDK_NOTIFY (notify-send)
# - LIBSSO_PATH
+# - LIBSSO_KASUMI_PATH
. $(dirname $(readlink -e $0))/load-devel-config.sh
print_usage () {
@@ -120,6 +121,7 @@ reset_env ()
unset DPDK_DEP_ZLIB
unset AESNI_MULTI_BUFFER_LIB_PATH
unset LIBSSO_PATH
+ unset LIBSSO_KASUMI_PATH
unset PQOS_INSTALL_PATH
}
@@ -164,6 +166,8 @@ config () # <directory> <target> <options>
sed -ri 's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
test -z "$LIBSSO_PATH" || \
sed -ri 's,(PMD_SNOW3G=)n,\1y,' $1/.config
+ test -z "$LIBSSO_KASUMI_PATH" || \
+ sed -ri 's,(PMD_KASUMI=)n,\1y,' $1/.config
test "$DPDK_DEP_SSL" != y || \
sed -ri 's,(PMD_QAT=)n,\1y,' $1/.config
sed -ri 's,(KNI_VHOST.*=)n,\1y,' $1/.config
--
2.5.0
* [dpdk-dev] [PATCH 3/4] app/test: add unit tests for KASUMI PMD
From: Pablo de Lara @ 2016-05-06 13:58 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, Pablo de Lara
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test/test_cryptodev.c | 519 +++++++++++++++++++++
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_kasumi_hash_test_vectors.h | 222 +++++++++
app/test/test_cryptodev_kasumi_test_vectors.h | 308 ++++++++++++
4 files changed, 1050 insertions(+)
create mode 100644 app/test/test_cryptodev_kasumi_hash_test_vectors.h
create mode 100644 app/test/test_cryptodev_kasumi_test_vectors.h
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 45e6daa..176de59 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -44,6 +44,8 @@
#include "test_cryptodev.h"
#include "test_cryptodev_aes_ctr_test_vectors.h"
+#include "test_cryptodev_kasumi_test_vectors.h"
+#include "test_cryptodev_kasumi_hash_test_vectors.h"
#include "test_cryptodev_snow3g_test_vectors.h"
#include "test_cryptodev_snow3g_hash_test_vectors.h"
#include "test_cryptodev_gcm_test_vectors.h"
@@ -125,6 +127,16 @@ setup_oop_test_mbufs(struct rte_mbuf **ibuf, struct rte_mbuf **obuf,
return 0;
}
+/* Get number of bytes in X bits (rounding up) */
+static uint32_t
+ceil_byte_length(uint32_t num_bits)
+{
+ if (num_bits % 8)
+ return ((num_bits >> 3) + 1);
+ else
+ return (num_bits >> 3);
+}
+
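ceil_byte_length above converts the bit lengths used by the KASUMI test vectors into whole bytes. The same rounding can be done branch-free; a quick sketch with a few sanity values:

```c
#include <assert.h>
#include <stdint.h>

/* Equivalent to the test helper above: bytes needed to hold num_bits
 * bits. (num_bits + 7) / 8 rounds up without a branch. */
static uint32_t ceil_byte_length(uint32_t num_bits)
{
    return (num_bits + 7) >> 3;
}
```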
#if HEX_DUMP
static void
hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
@@ -243,6 +255,20 @@ testsuite_setup(void)
}
}
+ /* Create 2 KASUMI devices if required */
+ if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
+ nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+ if (nb_devs < 2) {
+ for (i = nb_devs; i < 2; i++) {
+ TEST_ASSERT_SUCCESS(rte_eal_vdev_init(
+ CRYPTODEV_NAME_KASUMI_PMD, NULL),
+ "Failed to create instance %u of"
+ " pmd : %s",
+ i, CRYPTODEV_NAME_KASUMI_PMD);
+ }
+ }
+ }
+
/* Create 2 NULL devices if required */
if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
nb_devs = rte_cryptodev_count_devtype(
@@ -2320,6 +2346,108 @@ create_snow3g_hash_session(uint8_t dev_id,
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
}
+
+static int
+create_kasumi_hash_session(uint8_t dev_id,
+ const uint8_t *key, const uint8_t key_len,
+ const uint8_t aad_len, const uint8_t auth_len,
+ enum rte_crypto_auth_operation op)
+{
+ uint8_t hash_key[key_len];
+
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ memcpy(hash_key, key, key_len);
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "key:", key, key_len);
+#endif
+ /* Setup Authentication Parameters */
+ ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ ut_params->auth_xform.next = NULL;
+
+ ut_params->auth_xform.auth.op = op;
+ ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_KASUMI_F9;
+ ut_params->auth_xform.auth.key.length = key_len;
+ ut_params->auth_xform.auth.key.data = hash_key;
+ ut_params->auth_xform.auth.digest_length = auth_len;
+ ut_params->auth_xform.auth.add_auth_data_length = aad_len;
+ ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+ &ut_params->auth_xform);
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+ return 0;
+}
+
+static int
+create_kasumi_cipher_session(uint8_t dev_id,
+ enum rte_crypto_cipher_operation op,
+ const uint8_t *key, const uint8_t key_len)
+{
+ uint8_t cipher_key[key_len];
+
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ memcpy(cipher_key, key, key_len);
+
+ /* Setup Cipher Parameters */
+ ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ ut_params->cipher_xform.next = NULL;
+
+ ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ ut_params->cipher_xform.cipher.op = op;
+ ut_params->cipher_xform.cipher.key.data = cipher_key;
+ ut_params->cipher_xform.cipher.key.length = key_len;
+
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "key:", key, key_len);
+#endif
+ /* Create Crypto session */
+ ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
+ &ut_params->cipher_xform);
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+ return 0;
+}
+
+static int
+create_kasumi_cipher_operation(const uint8_t *iv, const unsigned iv_len,
+ const unsigned cipher_len,
+ const unsigned cipher_offset)
+{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
+ struct crypto_unittest_params *ut_params = &unittest_params;
+ unsigned iv_pad_len = 0;
+
+ /* Generate Crypto op data structure */
+ ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+ RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+ TEST_ASSERT_NOT_NULL(ut_params->op,
+ "Failed to allocate pktmbuf offload");
+
+ /* Set crypto operation data parameters */
+ rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+ struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+ /* set crypto operation source mbuf */
+ sym_op->m_src = ut_params->ibuf;
+
+ /* iv */
+ iv_pad_len = RTE_ALIGN_CEIL(iv_len, 8);
+ sym_op->cipher.iv.data = (uint8_t *)rte_pktmbuf_prepend(
+ ut_params->ibuf, iv_pad_len);
+
+ TEST_ASSERT_NOT_NULL(sym_op->cipher.iv.data, "no room to prepend iv");
+
+ memset(sym_op->cipher.iv.data, 0, iv_pad_len);
+ sym_op->cipher.iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+ sym_op->cipher.iv.length = iv_pad_len;
+
+ rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ sym_op->cipher.data.length = cipher_len;
+ sym_op->cipher.data.offset = cipher_offset;
+ return 0;
+}
+
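The IV handling above pads iv_len up to a multiple of 8 with RTE_ALIGN_CEIL before prepending it to the mbuf. The round-up itself is the standard power-of-two mask trick; a sketch (align_ceil is a hypothetical stand-in for the DPDK macro):

```c
#include <assert.h>

/* Round len up to the next multiple of align; align must be a power of
 * two, as it is for the 8-byte KASUMI IV padding above. */
static unsigned align_ceil(unsigned len, unsigned align)
{
    return (len + align - 1) & ~(align - 1);
}
```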
static int
create_snow3g_cipher_session(uint8_t dev_id,
enum rte_crypto_cipher_operation op,
@@ -2601,6 +2729,85 @@ create_snow3g_hash_operation(const uint8_t *auth_tag,
}
static int
+create_kasumi_hash_operation(const uint8_t *auth_tag,
+ const unsigned auth_tag_len,
+ const uint8_t *aad, const unsigned aad_len,
+ unsigned data_pad_len,
+ enum rte_crypto_auth_operation op,
+ const unsigned auth_len, const unsigned auth_offset)
+{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ unsigned aad_buffer_len;
+
+ /* Generate Crypto op data structure */
+ ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+ RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+ TEST_ASSERT_NOT_NULL(ut_params->op,
+ "Failed to allocate pktmbuf offload");
+
+ /* Set crypto operation data parameters */
+ rte_crypto_op_attach_sym_session(ut_params->op, ut_params->sess);
+
+ struct rte_crypto_sym_op *sym_op = ut_params->op->sym;
+
+ /* set crypto operation source mbuf */
+ sym_op->m_src = ut_params->ibuf;
+
+ /* aad */
+ /*
+ * Always allocate the aad up to the block size.
+ * The cryptodev API calls out -
+ * - the array must be big enough to hold the AAD, plus any
+ * space to round this up to the nearest multiple of the
+ * block size (8 bytes for KASUMI).
+ */
+ aad_buffer_len = ALIGN_POW2_ROUNDUP(aad_len, 8);
+ sym_op->auth.aad.data = (uint8_t *)rte_pktmbuf_prepend(
+ ut_params->ibuf, aad_buffer_len);
+ TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
+ "no room to prepend aad");
+ sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(
+ ut_params->ibuf);
+ sym_op->auth.aad.length = aad_len;
+
+ memset(sym_op->auth.aad.data, 0, aad_buffer_len);
+ rte_memcpy(sym_op->auth.aad.data, aad, aad_len);
+
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "aad:",
+ sym_op->auth.aad.data, aad_len);
+#endif
+
+ /* digest */
+ sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
+ ut_params->ibuf, auth_tag_len);
+
+ TEST_ASSERT_NOT_NULL(sym_op->auth.digest.data,
+ "no room to append auth tag");
+ ut_params->digest = sym_op->auth.digest.data;
+ sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
+ ut_params->ibuf, data_pad_len + aad_len);
+ sym_op->auth.digest.length = auth_tag_len;
+ if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
+ memset(sym_op->auth.digest.data, 0, auth_tag_len);
+ else
+ rte_memcpy(sym_op->auth.digest.data, auth_tag, auth_tag_len);
+
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "digest:",
+ sym_op->auth.digest.data,
+ sym_op->auth.digest.length);
+#endif
+
+ sym_op->auth.data.length = auth_len;
+ sym_op->auth.data.offset = auth_offset;
+
+ return 0;
+}
+static int
create_snow3g_cipher_hash_operation(const uint8_t *auth_tag,
const unsigned auth_tag_len,
const uint8_t *aad, const uint8_t aad_len,
@@ -2907,6 +3114,123 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
return 0;
}
+static int
+test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
+{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ int retval;
+ unsigned plaintext_pad_len;
+ unsigned plaintext_len;
+ uint8_t *plaintext;
+
+ /* Create KASUMI session */
+ retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+ tdata->key.data, tdata->key.len,
+ tdata->aad.len, tdata->digest.len,
+ RTE_CRYPTO_AUTH_OP_GENERATE);
+ if (retval < 0)
+ return retval;
+
+ /* alloc mbuf and set payload */
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ plaintext_len = ceil_byte_length(tdata->plaintext.len);
+ /* Append data which is padded to a multiple of
+ * the algorithm's block size */
+ plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+ plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ plaintext_pad_len);
+ memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+ /* Create KASUMI operation */
+ retval = create_kasumi_hash_operation(NULL, tdata->digest.len,
+ tdata->aad.data, tdata->aad.len,
+ plaintext_pad_len, RTE_CRYPTO_AUTH_OP_GENERATE,
+ tdata->validAuthLenInBits.len,
+ tdata->validAuthOffsetLenInBits.len);
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+ TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+ ut_params->obuf = ut_params->op->sym->m_src;
+ ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+ + plaintext_pad_len + ALIGN_POW2_ROUNDUP(tdata->aad.len, 8);
+
+ /* Validate obuf */
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(
+ ut_params->digest,
+ tdata->digest.data,
+ DIGEST_BYTE_LENGTH_KASUMI_F9,
+ "KASUMI Generated auth tag not as expected");
+
+ return 0;
+}
+
+static int
+test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
+{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ int retval;
+ unsigned plaintext_pad_len;
+ unsigned plaintext_len;
+ uint8_t *plaintext;
+
+ /* Create KASUMI session */
+ retval = create_kasumi_hash_session(ts_params->valid_devs[0],
+ tdata->key.data, tdata->key.len,
+ tdata->aad.len, tdata->digest.len,
+ RTE_CRYPTO_AUTH_OP_VERIFY);
+ if (retval < 0)
+ return retval;
+ /* alloc mbuf and set payload */
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ plaintext_len = ceil_byte_length(tdata->plaintext.len);
+ /* Append data which is padded to a multiple
+ * of the algorithm's block size */
+ plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+ plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ plaintext_pad_len);
+ memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+ /* Create KASUMI operation */
+ retval = create_kasumi_hash_operation(tdata->digest.data,
+ tdata->digest.len,
+ tdata->aad.data, tdata->aad.len,
+ plaintext_pad_len,
+ RTE_CRYPTO_AUTH_OP_VERIFY,
+ tdata->validAuthLenInBits.len,
+ tdata->validAuthOffsetLenInBits.len);
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+ TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+ ut_params->obuf = ut_params->op->sym->m_src;
+ ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+ + plaintext_pad_len + tdata->aad.len;
+
+ /* Validate obuf */
+ if (ut_params->op->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+ return -1;
+
+ return 0;
+}
static int
test_snow3g_hash_generate_test_case_1(void)
@@ -2946,6 +3270,120 @@ test_snow3g_hash_verify_test_case_3(void)
}
static int
+test_kasumi_hash_generate_test_case_1(void)
+{
+ return test_kasumi_authentication(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_generate_test_case_2(void)
+{
+ return test_kasumi_authentication(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_generate_test_case_3(void)
+{
+ return test_kasumi_authentication(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_generate_test_case_4(void)
+{
+ return test_kasumi_authentication(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_hash_verify_test_case_1(void)
+{
+ return test_kasumi_authentication_verify(&kasumi_hash_test_case_1);
+}
+
+static int
+test_kasumi_hash_verify_test_case_2(void)
+{
+ return test_kasumi_authentication_verify(&kasumi_hash_test_case_2);
+}
+
+static int
+test_kasumi_hash_verify_test_case_3(void)
+{
+ return test_kasumi_authentication_verify(&kasumi_hash_test_case_3);
+}
+
+static int
+test_kasumi_hash_verify_test_case_4(void)
+{
+ return test_kasumi_authentication_verify(&kasumi_hash_test_case_4);
+}
+
+static int
+test_kasumi_encryption(const struct kasumi_test_data *tdata)
+{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
+ struct crypto_unittest_params *ut_params = &unittest_params;
+
+ int retval;
+ uint8_t *plaintext, *ciphertext;
+ unsigned plaintext_pad_len;
+ unsigned plaintext_len;
+
+ /* Create KASUMI session */
+ retval = create_kasumi_cipher_session(ts_params->valid_devs[0],
+ RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+ tdata->key.data, tdata->key.len);
+ if (retval < 0)
+ return retval;
+
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+ /* Clear mbuf payload */
+ memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
+ rte_pktmbuf_tailroom(ut_params->ibuf));
+
+ plaintext_len = ceil_byte_length(tdata->plaintext.len);
+ /* Append data which is padded to a multiple
+ * of the algorithm's block size */
+ plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 8);
+ plaintext = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+ plaintext_pad_len);
+ memcpy(plaintext, tdata->plaintext.data, plaintext_len);
+
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "plaintext:", plaintext, plaintext_len);
+#endif
+ /* Create KASUMI operation */
+ retval = create_kasumi_cipher_operation(tdata->iv.data, tdata->iv.len,
+ tdata->plaintext.len,
+ tdata->validCipherOffsetLenInBits.len);
+ if (retval < 0)
+ return retval;
+
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
+ TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
+
+ ut_params->obuf = ut_params->op->sym->m_dst;
+ if (ut_params->obuf)
+ ciphertext = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
+ + tdata->iv.len;
+ else
+ ciphertext = plaintext;
+
+#ifdef RTE_APP_TEST_DEBUG
+ rte_hexdump(stdout, "ciphertext:", ciphertext, tdata->ciphertext.len >> 3);
+#endif
+
+ /* Validate obuf */
+ TEST_ASSERT_BUFFERS_ARE_EQUAL_BIT(
+ ciphertext,
+ tdata->ciphertext.data,
+ tdata->validCipherLenInBits.len,
+ "KASUMI Ciphertext data not as expected");
+ return 0;
+}
+
+static int
test_snow3g_encryption(const struct snow3g_test_data *tdata)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -3444,6 +3882,36 @@ test_snow3g_encrypted_authentication(const struct snow3g_test_data *tdata)
}
static int
+test_kasumi_encryption_test_case_1(void)
+{
+ return test_kasumi_encryption(&kasumi_test_case_1);
+}
+
+static int
+test_kasumi_encryption_test_case_2(void)
+{
+ return test_kasumi_encryption(&kasumi_test_case_2);
+}
+
+static int
+test_kasumi_encryption_test_case_3(void)
+{
+ return test_kasumi_encryption(&kasumi_test_case_3);
+}
+
+static int
+test_kasumi_encryption_test_case_4(void)
+{
+ return test_kasumi_encryption(&kasumi_test_case_4);
+}
+
+static int
+test_kasumi_encryption_test_case_5(void)
+{
+ return test_kasumi_encryption(&kasumi_test_case_5);
+}
+
+static int
test_snow3g_encryption_test_case_1(void)
{
return test_snow3g_encryption(&snow3g_test_case_1);
@@ -4709,6 +5177,43 @@ static struct unit_test_suite cryptodev_aesni_gcm_testsuite = {
}
};
+static struct unit_test_suite cryptodev_sw_kasumi_testsuite = {
+ .suite_name = "Crypto Device SW KASUMI Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ /** KASUMI encrypt only (UEA1) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_5),
+ /** KASUMI hash only (UIA1) */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_4),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
static struct unit_test_suite cryptodev_sw_snow3g_testsuite = {
.suite_name = "Crypto Device SW Snow3G Unit Test Suite",
.setup = testsuite_setup,
@@ -4844,8 +5349,22 @@ static struct test_command cryptodev_sw_snow3g_cmd = {
.callback = test_cryptodev_sw_snow3g,
};
+static int
+test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+
+ return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
+}
+
+static struct test_command cryptodev_sw_kasumi_cmd = {
+ .command = "cryptodev_sw_kasumi_autotest",
+ .callback = test_cryptodev_sw_kasumi,
+};
+
REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_cmd);
REGISTER_TEST_COMMAND(cryptodev_null_cmd);
REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_cmd);
+REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 6059a01..7d0e7bb 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -61,6 +61,7 @@
#define DIGEST_BYTE_LENGTH_SHA512 (BYTE_LENGTH(512))
#define DIGEST_BYTE_LENGTH_AES_XCBC (BYTE_LENGTH(96))
#define DIGEST_BYTE_LENGTH_SNOW3G_UIA2 (BYTE_LENGTH(32))
+#define DIGEST_BYTE_LENGTH_KASUMI_F9 (BYTE_LENGTH(32))
#define AES_XCBC_MAC_KEY_SZ (16)
#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 (12)
diff --git a/app/test/test_cryptodev_kasumi_hash_test_vectors.h b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
new file mode 100644
index 0000000..7a0a305
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_hash_test_vectors.h
@@ -0,0 +1,222 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_
+
+struct kasumi_hash_test_data {
+ struct {
+ uint8_t data[16];
+ unsigned len;
+ } key;
+
+ /* Includes: COUNT (4 bytes) and FRESH (4 bytes) */
+ struct {
+ uint8_t data[8];
+ unsigned len;
+ } aad;
+
+ /* Includes the message and DIRECTION (1 bit), followed by a single
+ * '1' bit and enough '0' bits so the total length is a multiple
+ * of 64 bits */
+ struct {
+ uint8_t data[2056];
+ unsigned len; /* length must be in bits */
+ } plaintext;
+
+ /* Actual length of data to be hashed */
+ struct {
+ unsigned len;
+ } validAuthLenInBits;
+
+ struct {
+ unsigned len;
+ } validAuthOffsetLenInBits;
+
+ struct {
+ uint8_t data[64];
+ unsigned len;
+ } digest;
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_1 = {
+ .key = {
+ .data = {
+ 0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+ 0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+ },
+ .len = 16
+ },
+ .aad = {
+ .data = {
+ 0x38, 0xA6, 0xF0, 0x56, 0x05, 0xD2, 0xEC, 0x49,
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x6B, 0x22, 0x77, 0x37, 0x29, 0x6F, 0x39, 0x3C,
+ 0x80, 0x79, 0x35, 0x3E, 0xDC, 0x87, 0xE2, 0xE8,
+ 0x05, 0xD2, 0xEC, 0x49, 0xA4, 0xF2, 0xD8, 0xE2
+ },
+ .len = 192
+ },
+ .validAuthLenInBits = {
+ .len = 189
+ },
+ .validAuthOffsetLenInBits = {
+ .len = 64
+ },
+ .digest = {
+ .data = {0xF6, 0x3B, 0xD7, 0x2C},
+ .len = 4
+ }
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_2 = {
+ .key = {
+ .data = {
+ 0xD4, 0x2F, 0x68, 0x24, 0x28, 0x20, 0x1C, 0xAF,
+ 0xCD, 0x9F, 0x97, 0x94, 0x5E, 0x6D, 0xE7, 0xB7
+ },
+ .len = 16
+ },
+ .aad = {
+ .data = {
+ 0x3E, 0xDC, 0x87, 0xE2, 0xA4, 0xF2, 0xD8, 0xE2,
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0xB5, 0x92, 0x43, 0x84, 0x32, 0x8A, 0x4A, 0xE0,
+ 0x0B, 0x73, 0x71, 0x09, 0xF8, 0xB6, 0xC8, 0xDD,
+ 0x2B, 0x4D, 0xB6, 0x3D, 0xD5, 0x33, 0x98, 0x1C,
+ 0xEB, 0x19, 0xAA, 0xD5, 0x2A, 0x5B, 0x2B, 0xC3
+ },
+ .len = 256
+ },
+ .validAuthLenInBits = {
+ .len = 254
+ },
+ .validAuthOffsetLenInBits = {
+ .len = 64
+ },
+ .digest = {
+ .data = {0xA9, 0xDA, 0xF1, 0xFF},
+ .len = 4
+ }
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_3 = {
+ .key = {
+ .data = {
+ 0xFD, 0xB9, 0xCF, 0xDF, 0x28, 0x93, 0x6C, 0xC4,
+ 0x83, 0xA3, 0x18, 0x69, 0xD8, 0x1B, 0x8F, 0xAB
+ },
+ .len = 16
+ },
+ .aad = {
+ .data = {
+ 0x36, 0xAF, 0x61, 0x44, 0x98, 0x38, 0xF0, 0x3A,
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x59, 0x32, 0xBC, 0x0A, 0xCE, 0x2B, 0x0A, 0xBA,
+ 0x33, 0xD8, 0xAC, 0x18, 0x8A, 0xC5, 0x4F, 0x34,
+ 0x6F, 0xAD, 0x10, 0xBF, 0x9D, 0xEE, 0x29, 0x20,
+ 0xB4, 0x3B, 0xD0, 0xC5, 0x3A, 0x91, 0x5C, 0xB7,
+ 0xDF, 0x6C, 0xAA, 0x72, 0x05, 0x3A, 0xBF, 0xF3,
+ 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ },
+ .len = 384
+ },
+ .validAuthLenInBits = {
+ .len = 319
+ },
+ .validAuthOffsetLenInBits = {
+ .len = 64
+ },
+ .digest = {
+ .data = {0x15, 0x37, 0xD3, 0x16},
+ .len = 4
+ }
+};
+
+struct kasumi_hash_test_data kasumi_hash_test_case_4 = {
+ .key = {
+ .data = {
+ 0xF4, 0xEB, 0xEC, 0x69, 0xE7, 0x3E, 0xAF, 0x2E,
+ 0xB2, 0xCF, 0x6A, 0xF4, 0xB3, 0x12, 0x0F, 0xFD
+ },
+ .len = 16
+ },
+ .aad = {
+ .data = {
+ 0x29, 0x6F, 0x39, 0x3C, 0x6B, 0x22, 0x77, 0x37,
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x10, 0xBF, 0xFF, 0x83, 0x9E, 0x0C, 0x71, 0x65,
+ 0x8D, 0xBB, 0x2D, 0x17, 0x07, 0xE1, 0x45, 0x72,
+ 0x4F, 0x41, 0xC1, 0x6F, 0x48, 0xBF, 0x40, 0x3C,
+ 0x3B, 0x18, 0xE3, 0x8F, 0xD5, 0xD1, 0x66, 0x3B,
+ 0x6F, 0x6D, 0x90, 0x01, 0x93, 0xE3, 0xCE, 0xA8,
+ 0xBB, 0x4F, 0x1B, 0x4F, 0x5B, 0xE8, 0x22, 0x03,
+ 0x22, 0x32, 0xA7, 0x8D, 0x7D, 0x75, 0x23, 0x8D,
+ 0x5E, 0x6D, 0xAE, 0xCD, 0x3B, 0x43, 0x22, 0xCF,
+ 0x59, 0xBC, 0x7E, 0xA8, 0x4A, 0xB1, 0x88, 0x11,
+ 0xB5, 0xBF, 0xB7, 0xBC, 0x55, 0x3F, 0x4F, 0xE4,
+ 0x44, 0x78, 0xCE, 0x28, 0x7A, 0x14, 0x87, 0x99,
+ 0x90, 0xD1, 0x8D, 0x12, 0xCA, 0x79, 0xD2, 0xC8,
+ 0x55, 0x14, 0x90, 0x21, 0xCD, 0x5C, 0xE8, 0xCA,
+ 0x03, 0x71, 0xCA, 0x04, 0xFC, 0xCE, 0x14, 0x3E,
+ 0x3D, 0x7C, 0xFE, 0xE9, 0x45, 0x85, 0xB5, 0x88,
+ 0x5C, 0xAC, 0x46, 0x06, 0x8B, 0xC0, 0x00, 0x00
+ },
+ .len = 1024
+ },
+ .validAuthLenInBits = {
+ .len = 1000
+ },
+ .validAuthOffsetLenInBits = {
+ .len = 64
+ },
+ .digest = {
+ .data = {0xC3, 0x83, 0x83, 0x9D},
+ .len = 4
+ }
+};
+#endif /* TEST_CRYPTODEV_KASUMI_HASH_TEST_VECTORS_H_ */
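As a side note (not part of the patch): the padded plaintext lengths in the four hash vectors follow directly from the F9 construction described in the struct comment, i.e. the message of validAuthLenInBits bits, plus the DIRECTION bit, plus a single '1' bit, zero-padded to a multiple of 64 bits. A minimal Python sketch checking that relation against the vectors above:

```python
# KASUMI F9 input padding: message (validAuthLenInBits bits) + DIRECTION
# (1 bit) + a single '1' bit, then '0' bits up to a multiple of 64 bits.
def f9_padded_len_bits(valid_auth_len_in_bits):
    # +2 accounts for the DIRECTION bit and the mandatory '1' padding bit
    return ((valid_auth_len_in_bits + 2 + 63) // 64) * 64

# (validAuthLenInBits, plaintext.len) pairs from the four test cases above
vectors = [(189, 192), (254, 256), (319, 384), (1000, 1024)]
for auth_len, plaintext_len in vectors:
    assert f9_padded_len_bits(auth_len) == plaintext_len
```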
diff --git a/app/test/test_cryptodev_kasumi_test_vectors.h b/app/test/test_cryptodev_kasumi_test_vectors.h
new file mode 100644
index 0000000..9163d7c
--- /dev/null
+++ b/app/test/test_cryptodev_kasumi_test_vectors.h
@@ -0,0 +1,308 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+#define TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_
+
+struct kasumi_test_data {
+ struct {
+ uint8_t data[64];
+ unsigned len;
+ } key;
+
+ struct {
+ uint8_t data[64] __rte_aligned(16);
+ unsigned len;
+ } iv;
+
+ struct {
+ uint8_t data[1024];
+ unsigned len; /* length must be in bits */
+ } plaintext;
+
+ struct {
+ uint8_t data[1024];
+ unsigned len; /* length must be in bits */
+ } ciphertext;
+
+ struct {
+ unsigned len;
+ } validCipherLenInBits;
+
+ struct {
+ unsigned len;
+ } validCipherOffsetLenInBits;
+};
+
+struct kasumi_test_data kasumi_test_case_1 = {
+ .key = {
+ .data = {
+ 0x2B, 0xD6, 0x45, 0x9F, 0x82, 0xC5, 0xB3, 0x00,
+ 0x95, 0x2C, 0x49, 0x10, 0x48, 0x81, 0xFF, 0x48
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0x72, 0xA4, 0xF2, 0x0F, 0x64, 0x00, 0x00, 0x00
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x7E, 0xC6, 0x12, 0x72, 0x74, 0x3B, 0xF1, 0x61,
+ 0x47, 0x26, 0x44, 0x6A, 0x6C, 0x38, 0xCE, 0xD1,
+ 0x66, 0xF6, 0xCA, 0x76, 0xEB, 0x54, 0x30, 0x04,
+ 0x42, 0x86, 0x34, 0x6C, 0xEF, 0x13, 0x0F, 0x92,
+ 0x92, 0x2B, 0x03, 0x45, 0x0D, 0x3A, 0x99, 0x75,
+ 0xE5, 0xBD, 0x2E, 0xA0, 0xEB, 0x55, 0xAD, 0x8E,
+ 0x1B, 0x19, 0x9E, 0x3E, 0xC4, 0x31, 0x60, 0x20,
+ 0xE9, 0xA1, 0xB2, 0x85, 0xE7, 0x62, 0x79, 0x53,
+ 0x59, 0xB7, 0xBD, 0xFD, 0x39, 0xBE, 0xF4, 0xB2,
+ 0x48, 0x45, 0x83, 0xD5, 0xAF, 0xE0, 0x82, 0xAE,
+ 0xE6, 0x38, 0xBF, 0x5F, 0xD5, 0xA6, 0x06, 0x19,
+ 0x39, 0x01, 0xA0, 0x8F, 0x4A, 0xB4, 0x1A, 0xAB,
+ 0x9B, 0x13, 0x48, 0x80
+ },
+ .len = 800
+ },
+ .ciphertext = {
+ .data = {
+ 0xD1, 0xE2, 0xDE, 0x70, 0xEE, 0xF8, 0x6C, 0x69,
+ 0x64, 0xFB, 0x54, 0x2B, 0xC2, 0xD4, 0x60, 0xAA,
+ 0xBF, 0xAA, 0x10, 0xA4, 0xA0, 0x93, 0x26, 0x2B,
+ 0x7D, 0x19, 0x9E, 0x70, 0x6F, 0xC2, 0xD4, 0x89,
+ 0x15, 0x53, 0x29, 0x69, 0x10, 0xF3, 0xA9, 0x73,
+ 0x01, 0x26, 0x82, 0xE4, 0x1C, 0x4E, 0x2B, 0x02,
+ 0xBE, 0x20, 0x17, 0xB7, 0x25, 0x3B, 0xBF, 0x93,
+ 0x09, 0xDE, 0x58, 0x19, 0xCB, 0x42, 0xE8, 0x19,
+ 0x56, 0xF4, 0xC9, 0x9B, 0xC9, 0x76, 0x5C, 0xAF,
+ 0x53, 0xB1, 0xD0, 0xBB, 0x82, 0x79, 0x82, 0x6A,
+ 0xDB, 0xBC, 0x55, 0x22, 0xE9, 0x15, 0xC1, 0x20,
+ 0xA6, 0x18, 0xA5, 0xA7, 0xF5, 0xE8, 0x97, 0x08,
+ 0x93, 0x39, 0x65, 0x0F
+ },
+ .len = 800
+ },
+ .validCipherLenInBits = {
+ .len = 798
+ },
+ .validCipherOffsetLenInBits = {
+ .len = 64
+ },
+};
+
+struct kasumi_test_data kasumi_test_case_2 = {
+ .key = {
+ .data = {
+ 0xEF, 0xA8, 0xB2, 0x22, 0x9E, 0x72, 0x0C, 0x2A,
+ 0x7C, 0x36, 0xEA, 0x55, 0xE9, 0x60, 0x56, 0x95
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0xE2, 0x8B, 0xCF, 0x7B, 0xC0, 0x00, 0x00, 0x00
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x10, 0x11, 0x12, 0x31, 0xE0, 0x60, 0x25, 0x3A,
+ 0x43, 0xFD, 0x3F, 0x57, 0xE3, 0x76, 0x07, 0xAB,
+ 0x28, 0x27, 0xB5, 0x99, 0xB6, 0xB1, 0xBB, 0xDA,
+ 0x37, 0xA8, 0xAB, 0xCC, 0x5A, 0x8C, 0x55, 0x0D,
+ 0x1B, 0xFB, 0x2F, 0x49, 0x46, 0x24, 0xFB, 0x50,
+ 0x36, 0x7F, 0xA3, 0x6C, 0xE3, 0xBC, 0x68, 0xF1,
+ 0x1C, 0xF9, 0x3B, 0x15, 0x10, 0x37, 0x6B, 0x02,
+ 0x13, 0x0F, 0x81, 0x2A, 0x9F, 0xA1, 0x69, 0xD8
+ },
+ .len = 512
+ },
+ .ciphertext = {
+ .data = {
+ 0x3D, 0xEA, 0xCC, 0x7C, 0x15, 0x82, 0x1C, 0xAA,
+ 0x89, 0xEE, 0xCA, 0xDE, 0x9B, 0x5B, 0xD3, 0x61,
+ 0x4B, 0xD0, 0xC8, 0x41, 0x9D, 0x71, 0x03, 0x85,
+ 0xDD, 0xBE, 0x58, 0x49, 0xEF, 0x1B, 0xAC, 0x5A,
+ 0xE8, 0xB1, 0x4A, 0x5B, 0x0A, 0x67, 0x41, 0x52,
+ 0x1E, 0xB4, 0xE0, 0x0B, 0xB9, 0xEC, 0xF3, 0xE9,
+ 0xF7, 0xCC, 0xB9, 0xCA, 0xE7, 0x41, 0x52, 0xD7,
+ 0xF4, 0xE2, 0xA0, 0x34, 0xB6, 0xEA, 0x00, 0xEC
+ },
+ .len = 512
+ },
+ .validCipherLenInBits = {
+ .len = 510
+ },
+ .validCipherOffsetLenInBits = {
+ .len = 64
+ }
+};
+
+struct kasumi_test_data kasumi_test_case_3 = {
+ .key = {
+ .data = {
+ 0x5A, 0xCB, 0x1D, 0x64, 0x4C, 0x0D, 0x51, 0x20,
+ 0x4E, 0xA5, 0xF1, 0x45, 0x10, 0x10, 0xD8, 0x52
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0xFA, 0x55, 0x6B, 0x26, 0x1C, 0x00, 0x00, 0x00
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0xAD, 0x9C, 0x44, 0x1F, 0x89, 0x0B, 0x38, 0xC4,
+ 0x57, 0xA4, 0x9D, 0x42, 0x14, 0x07, 0xE8
+ },
+ .len = 120
+ },
+ .ciphertext = {
+ .data = {
+ 0x9B, 0xC9, 0x2C, 0xA8, 0x03, 0xC6, 0x7B, 0x28,
+ 0xA1, 0x1A, 0x4B, 0xEE, 0x5A, 0x0C, 0x25
+ },
+ .len = 120
+ },
+ .validCipherLenInBits = {
+ .len = 120
+ },
+ .validCipherOffsetLenInBits = {
+ .len = 64
+ }
+};
+
+struct kasumi_test_data kasumi_test_case_4 = {
+ .key = {
+ .data = {
+ 0xD3, 0xC5, 0xD5, 0x92, 0x32, 0x7F, 0xB1, 0x1C,
+ 0x40, 0x35, 0xC6, 0x68, 0x0A, 0xF8, 0xC6, 0xD1
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0x39, 0x8A, 0x59, 0xB4, 0x2C, 0x00, 0x00, 0x00,
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB, 0x1A,
+ 0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D, 0x80,
+ 0x8C, 0xE3, 0x3E, 0x2C, 0xC3, 0xC0, 0xB5, 0xFC,
+ 0x1F, 0x3D, 0xE8, 0xA6, 0xDC, 0x66, 0xB1, 0xF0
+ },
+ .len = 256
+ },
+ .ciphertext = {
+ .data = {
+ 0x5B, 0xB9, 0x43, 0x1B, 0xB1, 0xE9, 0x8B, 0xD1,
+ 0x1B, 0x93, 0xDB, 0x7C, 0x3D, 0x45, 0x13, 0x65,
+ 0x59, 0xBB, 0x86, 0xA2, 0x95, 0xAA, 0x20, 0x4E,
+ 0xCB, 0xEB, 0xF6, 0xF7, 0xA5, 0x10, 0x15, 0x10
+ },
+ .len = 256
+ },
+ .validCipherLenInBits = {
+ .len = 253
+ },
+ .validCipherOffsetLenInBits = {
+ .len = 64
+ }
+};
+
+struct kasumi_test_data kasumi_test_case_5 = {
+ .key = {
+ .data = {
+ 0x60, 0x90, 0xEA, 0xE0, 0x4C, 0x83, 0x70, 0x6E,
+ 0xEC, 0xBF, 0x65, 0x2B, 0xE8, 0xE3, 0x65, 0x66
+ },
+ .len = 16
+ },
+ .iv = {
+ .data = {
+ 0x72, 0xA4, 0xF2, 0x0F, 0x48, 0x00, 0x00, 0x00
+ },
+ .len = 8
+ },
+ .plaintext = {
+ .data = {
+ 0x40, 0x98, 0x1B, 0xA6, 0x82, 0x4C, 0x1B, 0xFB,
+ 0x42, 0x86, 0xB2, 0x99, 0x78, 0x3D, 0xAF, 0x44,
+ 0x2C, 0x09, 0x9F, 0x7A, 0xB0, 0xF5, 0x8D, 0x5C,
+ 0x8E, 0x46, 0xB1, 0x04, 0xF0, 0x8F, 0x01, 0xB4,
+ 0x1A, 0xB4, 0x85, 0x47, 0x20, 0x29, 0xB7, 0x1D,
+ 0x36, 0xBD, 0x1A, 0x3D, 0x90, 0xDC, 0x3A, 0x41,
+ 0xB4, 0x6D, 0x51, 0x67, 0x2A, 0xC4, 0xC9, 0x66,
+ 0x3A, 0x2B, 0xE0, 0x63, 0xDA, 0x4B, 0xC8, 0xD2,
+ 0x80, 0x8C, 0xE3, 0x3E, 0x2C, 0xCC, 0xBF, 0xC6,
+ 0x34, 0xE1, 0xB2, 0x59, 0x06, 0x08, 0x76, 0xA0,
+ 0xFB, 0xB5, 0xA4, 0x37, 0xEB, 0xCC, 0x8D, 0x31,
+ 0xC1, 0x9E, 0x44, 0x54, 0x31, 0x87, 0x45, 0xE3,
+ 0x98, 0x76, 0x45, 0x98, 0x7A, 0x98, 0x6F, 0x2C,
+ 0xB0
+ },
+ .len = 840
+ },
+ .ciphertext = {
+ .data = {
+ 0xDD, 0xB3, 0x64, 0xDD, 0x2A, 0xAE, 0xC2, 0x4D,
+ 0xFF, 0x29, 0x19, 0x57, 0xB7, 0x8B, 0xAD, 0x06,
+ 0x3A, 0xC5, 0x79, 0xCD, 0x90, 0x41, 0xBA, 0xBE,
+ 0x89, 0xFD, 0x19, 0x5C, 0x05, 0x78, 0xCB, 0x9F,
+ 0xDE, 0x42, 0x17, 0x56, 0x61, 0x78, 0xD2, 0x02,
+ 0x40, 0x20, 0x6D, 0x07, 0xCF, 0xA6, 0x19, 0xEC,
+ 0x05, 0x9F, 0x63, 0x51, 0x44, 0x59, 0xFC, 0x10,
+ 0xD4, 0x2D, 0xC9, 0x93, 0x4E, 0x56, 0xEB, 0xC0,
+ 0xCB, 0xC6, 0x0D, 0x4D, 0x2D, 0xF1, 0x74, 0x77,
+ 0x4C, 0xBD, 0xCD, 0x5D, 0xA4, 0xA3, 0x50, 0x31,
+ 0x7A, 0x7F, 0x12, 0xE1, 0x94, 0x94, 0x71, 0xF8,
+ 0xA2, 0x95, 0xF2, 0x72, 0xE6, 0x8F, 0xC0, 0x71,
+ 0x59, 0xB0, 0x7D, 0x8E, 0x2D, 0x26, 0xE4, 0x59,
+ 0x9E
+ },
+ .len = 840
+ },
+ .validCipherLenInBits = {
+ .len = 837
+ },
+ .validCipherOffsetLenInBits = {
+ .len = 64
+ },
+};
+
+#endif /* TEST_CRYPTODEV_KASUMI_TEST_VECTORS_H_ */
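Side note (not part of the patch): the 8-byte IVs in the cipher vectors appear to follow the standard f8 layout, COUNT (32 bits) || BEARER (5 bits) || DIRECTION (1 bit) || 26 zero bits. A small Python sketch, under that assumption, that rebuilds the IV of kasumi_test_case_1 from its components:

```python
import struct

def kasumi_f8_iv(count, bearer, direction):
    """Pack a KASUMI f8 IV: COUNT[32] || BEARER[5] || DIRECTION[1] || 0^26."""
    assert 0 <= bearer < 32 and direction in (0, 1)
    # BEARER occupies the top 5 bits of byte 4, DIRECTION the next bit
    tail = (bearer << 3) | (direction << 2)
    return struct.pack(">IB", count, tail) + b"\x00\x00\x00"

# Reproduces the IV of kasumi_test_case_1 above (COUNT=0x72A4F20F,
# BEARER=0x0C, DIRECTION=1 -> 72 A4 F2 0F 64 00 00 00)
assert kasumi_f8_iv(0x72A4F20F, 0x0C, 1) == bytes(
    [0x72, 0xA4, 0xF2, 0x0F, 0x64, 0x00, 0x00, 0x00])
```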
--
2.5.0