From: Slawomir Mrozowicz
To: dev@dpdk.org
Cc: Slawomir Mrozowicz, Michal Kobylinski, Tomasz Kulasek, Daniel Mrzyglod
Date: Thu, 29 Sep 2016 16:14:23 +0200
Message-Id: <1475158465-28970-2-git-send-email-slawomirx.mrozowicz@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1475158465-28970-1-git-send-email-slawomirx.mrozowicz@intel.com>
References: <1474275582-6108-1-git-send-email-michalx.k.jastrzebski@intel.com> <1475158465-28970-1-git-send-email-slawomirx.mrozowicz@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/3] libcrypto_pmd: initial implementation of SW crypto device

This code provides the initial implementation of the libcrypto poll mode driver. All cryptography operations use the OpenSSL library crypto API. Each algorithm uses the EVP_ interface of the OpenSSL API, which is recommended by the OpenSSL maintainers. This patch adds libcrypto poll mode driver support to the librte_cryptodev library.
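For illustration only, and not as part of the patch itself: the sketch below shows the plain EVP_ call sequence that the driver wraps for a block cipher (AES-128-CBC here). The function name and the key/iv/buffer parameters are hypothetical, and it assumes the OpenSSL 1.0.x EVP API used throughout this series.

    #include <openssl/evp.h>

    /* Encrypt one contiguous buffer with AES-128-CBC via the EVP_ interface.
     * dst must have room for srclen + 16 bytes because of PKCS padding. */
    static int
    evp_aes_cbc_encrypt_sketch(const unsigned char *key, const unsigned char *iv,
            const unsigned char *src, int srclen, unsigned char *dst)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len, total;

        if (ctx == NULL)
            return -1;

        /* Select the cipher and set key/IV, feed the data, then finalize. */
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) <= 0)
            goto err;
        if (EVP_EncryptUpdate(ctx, dst, &len, src, srclen) <= 0)
            goto err;
        total = len;
        if (EVP_EncryptFinal_ex(ctx, dst + total, &len) <= 0)
            goto err;
        total += len;

        EVP_CIPHER_CTX_free(ctx);
        return total;   /* number of ciphertext bytes written */

    err:
        EVP_CIPHER_CTX_free(ctx);
        return -1;
    }

The same Init/Update/Final pattern appears in the driver's process_libcrypto_cipher_encrypt() in the patch below.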
Signed-off-by: Slawomir Mrozowicz Signed-off-by: Michal Kobylinski Signed-off-by: Tomasz Kulasek Signed-off-by: Daniel Mrzyglod --- v2: - add gcm crypto cipher and authentication algorithm - rework gmac crypto authentication algorithm v3: - fix pmd according to negative verification tests - change gmac aad max size - update documentation --- MAINTAINERS | 4 + config/common_base | 6 + doc/guides/cryptodevs/index.rst | 1 + doc/guides/cryptodevs/libcrypto.rst | 116 +++ doc/guides/rel_notes/release_16_11.rst | 23 +- drivers/crypto/Makefile | 1 + drivers/crypto/libcrypto/Makefile | 60 ++ drivers/crypto/libcrypto/rte_libcrypto_pmd.c | 1051 ++++++++++++++++++++ drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c | 708 +++++++++++++ .../crypto/libcrypto/rte_libcrypto_pmd_private.h | 174 ++++ .../crypto/libcrypto/rte_pmd_libcrypto_version.map | 3 + lib/librte_cryptodev/rte_cryptodev.h | 3 + mk/rte.app.mk | 19 +- 13 files changed, 2159 insertions(+), 10 deletions(-) create mode 100644 doc/guides/cryptodevs/libcrypto.rst create mode 100644 drivers/crypto/libcrypto/Makefile create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd.c create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h create mode 100644 drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 7c33ad4..0982aef 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -434,6 +434,10 @@ M: Declan Doherty F: drivers/crypto/null/ F: doc/guides/cryptodevs/null.rst +LibCrypto Crypto PMD +M: Declan Doherty +F: drivers/crypto/libcrypto/ +F: doc/guides/cryptodevs/libcrypto.rst Packet processing ----------------- diff --git a/config/common_base b/config/common_base index 7830535..42d28dd 100644 --- a/config/common_base +++ b/config/common_base @@ -376,6 +376,12 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n # +# Compile PMD for Software backed device +# +CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO=n +CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO_DEBUG=n + +# # Compile PMD for AESNI GCM device # CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst index 9616de1..adb6e98c 100644 --- a/doc/guides/cryptodevs/index.rst +++ b/doc/guides/cryptodevs/index.rst @@ -39,6 +39,7 @@ Crypto Device Drivers aesni_mb aesni_gcm kasumi + libcrypto null snow3g qat diff --git a/doc/guides/cryptodevs/libcrypto.rst b/doc/guides/cryptodevs/libcrypto.rst new file mode 100644 index 0000000..77eff95 --- /dev/null +++ b/doc/guides/cryptodevs/libcrypto.rst @@ -0,0 +1,116 @@ +.. BSD LICENSE + Copyright(c) 2016 Intel Corporation. All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + * Neither the name of Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. 
+ + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +LibCrypto Crypto Poll Mode Driver +================================= + +This code provides the initial implementation of the libcrypto poll mode +driver. All cryptography operations use the OpenSSL library crypto API. +Each algorithm uses the EVP_ interface of the OpenSSL API, which is +recommended by the OpenSSL maintainers. + +For more details about the OpenSSL library please visit the OpenSSL webpage: +https://www.openssl.org/ + +Features +-------- + +LibCrypto PMD has support for: + +Supported cipher algorithms: +* ``RTE_CRYPTO_CIPHER_3DES_CBC`` +* ``RTE_CRYPTO_CIPHER_AES_CBC`` +* ``RTE_CRYPTO_CIPHER_AES_CTR`` +* ``RTE_CRYPTO_CIPHER_3DES_CTR`` +* ``RTE_CRYPTO_CIPHER_AES_GCM`` + +Supported authentication algorithms: +* ``RTE_CRYPTO_AUTH_AES_GMAC`` +* ``RTE_CRYPTO_AUTH_MD5`` +* ``RTE_CRYPTO_AUTH_SHA1`` +* ``RTE_CRYPTO_AUTH_SHA224`` +* ``RTE_CRYPTO_AUTH_SHA256`` +* ``RTE_CRYPTO_AUTH_SHA384`` +* ``RTE_CRYPTO_AUTH_SHA512`` +* ``RTE_CRYPTO_AUTH_MD5_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA1_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA224_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA256_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA384_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA512_HMAC`` + + +Installation +------------ + +To compile the libcrypto PMD, it has to be enabled in the config/common_base file +and the appropriate OpenSSL packages have to be installed in the build environment. + +The newest supported OpenSSL library version is: +* 1.0.2h-fips 3 May 2016. +Older versions that were also verified: +* 1.0.1f 6 Jan 2014 +* 1.0.1 14 Mar 2012 + +For Ubuntu 14.04 LTS, these packages have to be installed in the build system: +sudo apt-get install openssl +sudo apt-get install libc6-dev-i386 (for i686-native-linuxapp-gcc target) + +This code was also verified on Fedora 24. +This code was NOT yet verified on FreeBSD. + +Initialization +-------------- + +Users can use the app/test application to check how to use this PMD and to verify +crypto processing. + +The test name is cryptodev_libcrypto_autotest. +For performance testing, cryptodev_libcrypto_perftest can be used. + +To verify real traffic, the l2fwd-crypto example can be used with this command: + +.. code-block:: console + +sudo ./build/l2fwd-crypto -c 0x3 -n 4 --vdev "cryptodev_libcrypto_pmd" +--vdev "cryptodev_libcrypto_pmd"-- -p 0x3 --chain CIPHER_HASH +--cipher_op ENCRYPT --cipher_algo AES_CBC +--cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f +--iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:ff +--auth_op GENERATE --auth_algo SHA1_HMAC +--auth_key 11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11 +:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11 +:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11 + +Limitations +----------- + +* Maximum number of sessions is 2048. +* Chained mbufs are not supported. +* Hash only is not supported for GCM and GMAC.
+* Cipher only is not supported for GCM and GMAC. diff --git a/doc/guides/rel_notes/release_16_11.rst b/doc/guides/rel_notes/release_16_11.rst index cc507a9..6e92966 100644 --- a/doc/guides/rel_notes/release_16_11.rst +++ b/doc/guides/rel_notes/release_16_11.rst @@ -34,7 +34,28 @@ New Features Refer to the previous release notes for examples. - This section is a comment. Make sure to start the actual text at the margin. +* **Added libcrypto PMD.** + + A new crypto PMD has been added, which provides several ciphering and hashing algorithms. + All cryptography operations use the OpenSSL library crypto API. + +* ** Added support of C3xxx Device in QAT PMD.** + Support for Device c3xxx has been enabled in QAT PMD. + +* ** Added support of C62XX Device in QAT PMD.** + Support for Device c62xx has been enabled in QAT PMD. + + +* **Updated the QAT PMD.** + The QAT PMD was updated with changes including the following: + + * Added support for MD5_HMAC algorithm. + * Added support for SHA224-HMAC algorithm. + * Added support for SHA384-HMAC algorithm. + * Added support for NULL algorithm. + * Added support for KASUMI (F8 and F9) algorithm. + * Added support for GMAC algorithm. + * Added support for 3DES block cipher algorithm. * ** Added support of C3xxx Device in QAT PMD.** Support for Device c3xxx has been enabled in QAT PMD. diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile index dc4ef7f..11b0863 100644 --- a/drivers/crypto/Makefile +++ b/drivers/crypto/Makefile @@ -33,6 +33,7 @@ include $(RTE_SDK)/mk/rte.vars.mk DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb +DIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += libcrypto DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi diff --git a/drivers/crypto/libcrypto/Makefile b/drivers/crypto/libcrypto/Makefile new file mode 100644 index 0000000..c5f8cf2 --- /dev/null +++ b/drivers/crypto/libcrypto/Makefile @@ -0,0 +1,60 @@ +# BSD LICENSE +# +# Copyright(c) 2016 Intel Corporation. All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_pmd_libcrypto.a + +# build flags +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# library version +LIBABIVER := 1 + +# versioning export map +EXPORT_MAP := rte_pmd_libcrypto_version.map + +# external library dependencies +LDLIBS += -lcrypto + +# library source files +SRCS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += rte_libcrypto_pmd.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += rte_libcrypto_pmd_ops.c + +# library dependencies +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_eal +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_mempool +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_ring +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_cryptodev + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd.c b/drivers/crypto/libcrypto/rte_libcrypto_pmd.c new file mode 100644 index 0000000..d9a60d0 --- /dev/null +++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd.c @@ -0,0 +1,1051 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2016 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "rte_libcrypto_pmd_private.h" + +static int cryptodev_libcrypto_uninit(const char *name); + +/*----------------------------------------------------------------------------*/ + +/** + * Global static parameter used to create a unique name for each + * LIBCRYPTO crypto device. + */ +static unsigned int unique_name_id; + +static inline int +create_unique_device_name(char *name, size_t size) +{ + int ret; + + if (name == NULL) + return -EINVAL; + + ret = snprintf(name, size, "%s_%u", RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), + unique_name_id++); + if (ret < 0) + return ret; + return 0; +} + +/** + * Increment counter by 1 + * Counter is 64 bit array, big-endian + */ +static void +ctr_inc(uint8_t *ctr) +{ + uint64_t *ctr64 = (uint64_t *)ctr; + + *ctr64 = __builtin_bswap64(*ctr64); + (*ctr64)++; + *ctr64 = __builtin_bswap64(*ctr64); +} + +/* + *------------------------------------------------------------------------------ + * Session Prepare + *------------------------------------------------------------------------------ + */ + +/** Get xform chain order */ +static enum libcrypto_chain_order +libcrypto_get_chain_order(const struct rte_crypto_sym_xform *xform) +{ + enum libcrypto_chain_order res = LIBCRYPTO_CHAIN_NOT_SUPPORTED; + + if (xform != NULL) { + if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->next == NULL) + res = LIBCRYPTO_CHAIN_ONLY_AUTH; + else if (xform->next->type == + RTE_CRYPTO_SYM_XFORM_CIPHER) + res = LIBCRYPTO_CHAIN_AUTH_CIPHER; + } + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->next == NULL) + res = LIBCRYPTO_CHAIN_ONLY_CIPHER; + else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) + res = LIBCRYPTO_CHAIN_CIPHER_AUTH; + } + } + + return res; +} + +/** Get session cipher key from input cipher key */ +static void +get_cipher_key(uint8_t *input_key, int keylen, uint8_t *session_key) +{ + memcpy(session_key, input_key, keylen); +} + +/** Get key ede 24 bytes standard from input key */ +static int +get_cipher_key_ede(uint8_t *key, int keylen, uint8_t *key_ede) +{ + int res = 0; + + /* Initialize keys - 24 bytes: [key1-key2-key3] */ + switch (keylen) { + case 24: + memcpy(key_ede, key, 24); + break; + case 16: + /* K3 = K1 */ + memcpy(key_ede, key, 16); + memcpy(key_ede + 16, key, 8); + break; + case 8: + /* K1 = K2 = K3 (DES compatibility) */ + memcpy(key_ede, key, 8); + memcpy(key_ede + 8, key, 8); + memcpy(key_ede + 16, key, 8); + break; + default: + LIBCRYPTO_LOG_ERR("Unsupported key size"); + res = -EINVAL; + } + + return res; +} + +/** Get adequate libcrypto function for input cipher algorithm */ +static uint8_t +get_cipher_algo(enum rte_crypto_cipher_algorithm sess_algo, size_t keylen, + const EVP_CIPHER **algo) +{ + int res = 0; + + if (algo != NULL) { + switch (sess_algo) { + case RTE_CRYPTO_CIPHER_3DES_CBC: + switch (keylen) { + case 16: + *algo = EVP_des_ede_cbc(); + break; + case 24: + *algo = EVP_des_ede3_cbc(); + break; + default: + res = -EINVAL; + } + break; + case RTE_CRYPTO_CIPHER_3DES_CTR: + break; + case RTE_CRYPTO_CIPHER_AES_CBC: + switch (keylen) { + case 16: + *algo = EVP_aes_128_cbc(); + break; + case 24: + *algo = EVP_aes_192_cbc(); + break; + case 32: + *algo = EVP_aes_256_cbc(); + break; + default: + res = -EINVAL; + } + break; + case RTE_CRYPTO_CIPHER_AES_CTR: + switch (keylen) { + case 16: + *algo = EVP_aes_128_ctr(); + break; + case 24: + *algo = EVP_aes_192_ctr(); + break; + case 32: + *algo = EVP_aes_256_ctr(); + break; 
+ default: + res = -EINVAL; + } + break; + case RTE_CRYPTO_CIPHER_AES_GCM: + switch (keylen) { + case 16: + *algo = EVP_aes_128_gcm(); + break; + case 24: + *algo = EVP_aes_192_gcm(); + break; + case 32: + *algo = EVP_aes_256_gcm(); + break; + default: + res = -EINVAL; + } + break; + default: + res = -EINVAL; + break; + } + } else { + res = -EINVAL; + } + + return res; +} + +/** Get adequate libcrypto function for input auth algorithm */ +static uint8_t +get_auth_algo(enum rte_crypto_auth_algorithm sessalgo, + const EVP_MD **algo) +{ + int res = 0; + + if (algo != NULL) { + switch (sessalgo) { + case RTE_CRYPTO_AUTH_MD5: + case RTE_CRYPTO_AUTH_MD5_HMAC: + *algo = EVP_md5(); + break; + case RTE_CRYPTO_AUTH_SHA1: + case RTE_CRYPTO_AUTH_SHA1_HMAC: + *algo = EVP_sha1(); + break; + case RTE_CRYPTO_AUTH_SHA224: + case RTE_CRYPTO_AUTH_SHA224_HMAC: + *algo = EVP_sha224(); + break; + case RTE_CRYPTO_AUTH_SHA256: + case RTE_CRYPTO_AUTH_SHA256_HMAC: + *algo = EVP_sha256(); + break; + case RTE_CRYPTO_AUTH_SHA384: + case RTE_CRYPTO_AUTH_SHA384_HMAC: + *algo = EVP_sha384(); + break; + case RTE_CRYPTO_AUTH_SHA512: + case RTE_CRYPTO_AUTH_SHA512_HMAC: + *algo = EVP_sha512(); + break; + default: + res = -EINVAL; + break; + } + } else { + res = -EINVAL; + } + + return res; +} + +/** Set session cipher parameters */ +static int +libcrypto_set_session_cipher_parameters(struct libcrypto_session *sess, + const struct rte_crypto_sym_xform *xform) +{ + /* Select cipher direction */ + sess->cipher.direction = xform->cipher.op; + /* Select cipher key */ + sess->cipher.key.length = xform->cipher.key.length; + + /* Select cipher algo */ + switch (xform->cipher.algo) { + case RTE_CRYPTO_CIPHER_3DES_CBC: + case RTE_CRYPTO_CIPHER_AES_CBC: + case RTE_CRYPTO_CIPHER_AES_CTR: + case RTE_CRYPTO_CIPHER_AES_GCM: + sess->cipher.mode = LIBCRYPTO_CIPHER_LIB; + sess->cipher.algo = xform->cipher.algo; + sess->cipher.ctx = EVP_CIPHER_CTX_new(); + + if (get_cipher_algo(sess->cipher.algo, sess->cipher.key.length, + &sess->cipher.evp_algo) != 0) + return -EINVAL; + + get_cipher_key(xform->cipher.key.data, sess->cipher.key.length, + sess->cipher.key.data); + + break; + + case RTE_CRYPTO_CIPHER_3DES_CTR: + sess->cipher.mode = LIBCRYPTO_CIPHER_DES3CTR; + sess->cipher.ctx = EVP_CIPHER_CTX_new(); + + if (get_cipher_key_ede(xform->cipher.key.data, + sess->cipher.key.length, sess->cipher.key.data) != 0) + return -EINVAL; + break; + + default: + sess->cipher.algo = RTE_CRYPTO_CIPHER_NULL; + return -EINVAL; + } + + return 0; +} + +/* Set session auth parameters */ +static int +libcrypto_set_session_auth_parameters(struct libcrypto_session *sess, + const struct rte_crypto_sym_xform *xform) +{ + /* Select auth generate/verify */ + sess->auth.operation = xform->auth.op; + sess->auth.algo = xform->auth.algo; + + /* Select auth algo */ + switch (xform->auth.algo) { + case RTE_CRYPTO_AUTH_AES_GMAC: + case RTE_CRYPTO_AUTH_AES_GCM: + /* Check additional condition for AES_GMAC/GCM */ + if (sess->cipher.algo != RTE_CRYPTO_CIPHER_AES_GCM) + return -EINVAL; + sess->chain_order = LIBCRYPTO_CHAIN_COMBINED; + break; + + case RTE_CRYPTO_AUTH_MD5: + case RTE_CRYPTO_AUTH_SHA1: + case RTE_CRYPTO_AUTH_SHA224: + case RTE_CRYPTO_AUTH_SHA256: + case RTE_CRYPTO_AUTH_SHA384: + case RTE_CRYPTO_AUTH_SHA512: + sess->auth.mode = LIBCRYPTO_AUTH_AS_AUTH; + if (get_auth_algo(xform->auth.algo, &sess->auth.auth.evp_algo) != 0) + return -EINVAL; + sess->auth.auth.ctx = EVP_MD_CTX_create(); + break; + + case RTE_CRYPTO_AUTH_MD5_HMAC: + case RTE_CRYPTO_AUTH_SHA1_HMAC: + case 
RTE_CRYPTO_AUTH_SHA224_HMAC: + case RTE_CRYPTO_AUTH_SHA256_HMAC: + case RTE_CRYPTO_AUTH_SHA384_HMAC: + case RTE_CRYPTO_AUTH_SHA512_HMAC: + sess->auth.mode = LIBCRYPTO_AUTH_AS_HMAC; + sess->auth.hmac.ctx = EVP_MD_CTX_create(); + if (get_auth_algo(xform->auth.algo, &sess->auth.hmac.evp_algo) != 0) + return -EINVAL; + sess->auth.hmac.pkey = EVP_PKEY_new_mac_key(EVP_PKEY_HMAC, NULL, + xform->auth.key.data, xform->auth.key.length); + break; + + default: + return -EINVAL; + } + + return 0; +} + +/** Parse crypto xform chain and set private session parameters */ +int +libcrypto_set_session_parameters(struct libcrypto_session *sess, + const struct rte_crypto_sym_xform *xform) +{ + const struct rte_crypto_sym_xform *cipher_xform = NULL; + const struct rte_crypto_sym_xform *auth_xform = NULL; + + sess->chain_order = libcrypto_get_chain_order(xform); + switch (sess->chain_order) { + case LIBCRYPTO_CHAIN_ONLY_CIPHER: + cipher_xform = xform; + break; + case LIBCRYPTO_CHAIN_ONLY_AUTH: + auth_xform = xform; + break; + case LIBCRYPTO_CHAIN_CIPHER_AUTH: + cipher_xform = xform; + auth_xform = xform->next; + break; + case LIBCRYPTO_CHAIN_AUTH_CIPHER: + auth_xform = xform; + cipher_xform = xform->next; + break; + default: + return -EINVAL; + } + + /* cipher_xform must be check before auth_xform */ + if (cipher_xform) { + if (libcrypto_set_session_cipher_parameters(sess, cipher_xform)) { + LIBCRYPTO_LOG_ERR( + "Invalid/unsupported cipher parameters"); + return -EINVAL; + } + } + + if (auth_xform) { + if (libcrypto_set_session_auth_parameters(sess, auth_xform)) { + LIBCRYPTO_LOG_ERR( + "Invalid/unsupported auth parameters"); + return -EINVAL; + } + } + + return 0; +} + +/** Reset private session parameters */ +void +libcrypto_reset_session(struct libcrypto_session *sess) +{ + EVP_CIPHER_CTX_free(sess->cipher.ctx); + + switch (sess->auth.mode) { + case LIBCRYPTO_AUTH_AS_AUTH: + EVP_MD_CTX_destroy(sess->auth.auth.ctx); + break; + case LIBCRYPTO_AUTH_AS_HMAC: + EVP_PKEY_free(sess->auth.hmac.pkey); + EVP_MD_CTX_destroy(sess->auth.hmac.ctx); + break; + default: + break; + } +} + +/** Provide session for operation */ +static struct libcrypto_session * +get_session(struct libcrypto_qp *qp, struct rte_crypto_op *op) +{ + struct libcrypto_session *sess = NULL; + + if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) { + /* get existing session */ + if (likely(op->sym->session != NULL && + op->sym->session->dev_type == + RTE_CRYPTODEV_LIBCRYPTO_PMD)) + sess = (struct libcrypto_session *) + op->sym->session->_private; + } else { + /* provide internal session */ + void *_sess = NULL; + + if (!rte_mempool_get(qp->sess_mp, (void **)&_sess)) { + sess = (struct libcrypto_session *) + ((struct rte_cryptodev_sym_session *)_sess) + ->_private; + + if (unlikely(libcrypto_set_session_parameters( + sess, op->sym->xform) != 0)) { + rte_mempool_put(qp->sess_mp, _sess); + sess = NULL; + } else + op->sym->session = _sess; + } + } + + if (sess == NULL) + op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + + return sess; +} + +/* + *------------------------------------------------------------------------------ + * Process Operations + *------------------------------------------------------------------------------ + */ + +/** Process standard libcrypto cipher encryption */ +static int +process_libcrypto_cipher_encrypt(uint8_t *src, uint8_t *dst, + uint8_t *iv, uint8_t *key, int srclen, + EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo) +{ + int dstlen, totlen; + + if (EVP_EncryptInit_ex(ctx, algo, NULL, key, iv) <= 0) + goto 
process_cipher_encrypt_err; + + if (EVP_EncryptUpdate(ctx, dst, &dstlen, src, srclen) <= 0) + goto process_cipher_encrypt_err; + + if (EVP_EncryptFinal_ex(ctx, dst + dstlen, &totlen) <= 0) + goto process_cipher_encrypt_err; + + return 0; + +process_cipher_encrypt_err: + LIBCRYPTO_LOG_ERR("Process libcrypto cipher encrypt failed"); + return -EINVAL; +} + +/** Process standard libcrypto cipher decryption */ +static int +process_libcrypto_cipher_decrypt(uint8_t *src, uint8_t *dst, + uint8_t *iv, uint8_t *key, int srclen, + EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo) +{ + int dstlen, totlen; + + if (EVP_DecryptInit_ex(ctx, algo, NULL, key, iv) <= 0) + goto process_cipher_decrypt_err; + + if (EVP_CIPHER_CTX_set_padding(ctx, 0) <= 0) + goto process_cipher_decrypt_err; + + if (EVP_DecryptUpdate(ctx, dst, &dstlen, src, srclen) <= 0) + goto process_cipher_decrypt_err; + + if (EVP_DecryptFinal_ex(ctx, dst + dstlen, &totlen) <= 0) + goto process_cipher_decrypt_err; + + return 0; + +process_cipher_decrypt_err: + LIBCRYPTO_LOG_ERR("Process libcrypto cipher decrypt failed"); + return -EINVAL; +} + +/** Process cipher des 3 ctr encryption, decryption algorithm */ +static int +process_libcrypto_cipher_des3ctr(uint8_t *src, uint8_t *dst, + uint8_t *iv, uint8_t *key, int srclen, EVP_CIPHER_CTX *ctx) +{ + uint8_t ebuf[8], ctr[8]; + int unused, n; + + /* We use 3DES encryption also for decryption. + * IV is not important for 3DES ecb + */ + if (EVP_EncryptInit_ex(ctx, EVP_des_ede3_ecb(), NULL, key, NULL) <= 0) + goto process_cipher_des3ctr_err; + + memcpy(ctr, iv, 8); + n = 0; + + while (n < srclen) { + if (n % 8 == 0) { + if (EVP_EncryptUpdate(ctx, (unsigned char *)&ebuf, &unused, + (const unsigned char *)&ctr, 8) <= 0) + goto process_cipher_des3ctr_err; + ctr_inc(ctr); + } + dst[n] = src[n] ^ ebuf[n % 8]; + n++; + } + + return 0; + +process_cipher_des3ctr_err: + LIBCRYPTO_LOG_ERR("Process libcrypto cipher des 3 ede ctr failed"); + return -EINVAL; +} + +/** Process auth/encription aes-gcm algorithm */ +static int +process_libcrypto_auth_encryption_gcm(uint8_t *src, int srclen, + uint8_t *aad, int aadlen, uint8_t *iv, int ivlen, + uint8_t *key, uint8_t *dst, uint8_t *tag, + EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo) +{ + int len = 0, unused = 0; + uint8_t empty[] = {}; + + if (EVP_EncryptInit_ex(ctx, algo, NULL, NULL, NULL) <= 0) + goto process_auth_encryption_gcm_err; + + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL) <= 0) + goto process_auth_encryption_gcm_err; + + if (EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv) <= 0) + goto process_auth_encryption_gcm_err; + + if (aadlen > 0) { + if (EVP_EncryptUpdate(ctx, NULL, &len, aad, aadlen) <= 0) + goto process_auth_encryption_gcm_err; + + /* Workaround open ssl bug in version less then 1.0.1f */ + if (EVP_EncryptUpdate(ctx, empty, &unused, empty, 0) <= 0) + goto process_auth_encryption_gcm_err; + } + + if (srclen > 0) + if (EVP_EncryptUpdate(ctx, dst, &len, src, srclen) <= 0) + goto process_auth_encryption_gcm_err; + + if (EVP_EncryptFinal_ex(ctx, dst + len, &len) <= 0) + goto process_auth_encryption_gcm_err; + + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) <= 0) + goto process_auth_encryption_gcm_err; + + return 0; + +process_auth_encryption_gcm_err: + LIBCRYPTO_LOG_ERR("Process libcrypto auth encryption gcm failed"); + return -EINVAL; +} + +static int +process_libcrypto_auth_decryption_gcm(uint8_t *src, int srclen, + uint8_t *aad, int aadlen, uint8_t *iv, int ivlen, + uint8_t *key, uint8_t *dst, uint8_t *tag, + 
EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo) +{ + int len = 0, unused = 0; + uint8_t empty[] = {}; + + if (EVP_DecryptInit_ex(ctx, algo, NULL, NULL, NULL) <= 0) + goto process_auth_decryption_gcm_err; + + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL) <= 0) + goto process_auth_decryption_gcm_err; + + if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, 16, tag) <= 0) + goto process_auth_decryption_gcm_err; + + if (EVP_DecryptInit_ex(ctx, NULL, NULL, key, iv) <= 0) + goto process_auth_decryption_gcm_err; + + if (aadlen > 0) { + if (EVP_DecryptUpdate(ctx, NULL, &len, aad, aadlen) <= 0) + goto process_auth_decryption_gcm_err; + + /* Workaround open ssl bug in version less then 1.0.1f */ + if (EVP_DecryptUpdate(ctx, empty, &unused, empty, 0) <= 0) + goto process_auth_decryption_gcm_err; + } + + if (srclen > 0) + if (EVP_DecryptUpdate(ctx, dst, &len, src, srclen) <= 0) + goto process_auth_decryption_gcm_err; + + if (EVP_DecryptFinal_ex(ctx, dst + len, &len) <= 0) + goto process_auth_decryption_gcm_final_err; + + return 0; + +process_auth_decryption_gcm_err: + LIBCRYPTO_LOG_ERR("Process libcrypto auth decription gcm failed"); + return -EINVAL; + +process_auth_decryption_gcm_final_err: + return -EFAULT; +} + +/** Process standard libcrypto auth algorithms */ +static int +process_libcrypto_auth(uint8_t *src, uint8_t *dst, + __rte_unused uint8_t *iv, __rte_unused EVP_PKEY * pkey, + int srclen, EVP_MD_CTX *ctx, const EVP_MD *algo) +{ + size_t dstlen; + + if (EVP_DigestInit_ex(ctx, algo, NULL) <= 0) + goto process_auth_err; + + if (EVP_DigestUpdate(ctx, (char *)src, srclen) <= 0) + goto process_auth_err; + + if (EVP_DigestFinal_ex(ctx, dst, (unsigned int *)&dstlen) <= 0) + goto process_auth_err; + + return 0; + +process_auth_err: + LIBCRYPTO_LOG_ERR("Process libcrypto auth failed"); + return -EINVAL; +} + +/** Process standard libcrypto auth algorithms with hmac */ +static int +process_libcrypto_auth_hmac(uint8_t *src, uint8_t *dst, + __rte_unused uint8_t *iv, EVP_PKEY *pkey, + int srclen, EVP_MD_CTX *ctx, const EVP_MD *algo) +{ + size_t dstlen; + + if (EVP_DigestSignInit(ctx, NULL, algo, NULL, pkey) <= 0) + goto process_auth_err; + + if (EVP_DigestSignUpdate(ctx, (char *)src, srclen) <= 0) + goto process_auth_err; + + if (EVP_DigestSignFinal(ctx, dst, &dstlen) <= 0) + goto process_auth_err; + + return 0; + +process_auth_err: + LIBCRYPTO_LOG_ERR("Process libcrypto auth failed"); + return -EINVAL; +} + +/*----------------------------------------------------------------------------*/ + +/** Process auth/cipher combined operation */ +static void +process_libcrypto_combined_op + (struct rte_crypto_op *op, struct libcrypto_session *sess, + struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst) +{ + /* cipher */ + uint8_t *src = NULL, *dst = NULL, *iv, *tag, *aad; + int srclen, ivlen, aadlen, status = -1; + + iv = op->sym->cipher.iv.data; + ivlen = op->sym->cipher.iv.length; + aad = op->sym->auth.aad.data; + aadlen = op->sym->auth.aad.length; + + tag = op->sym->auth.digest.data; + if (tag == NULL) + tag = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *, + op->sym->cipher.data.offset + + op->sym->cipher.data.length); + + if (sess->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) + srclen = 0; + else { + srclen = op->sym->cipher.data.length; + src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *, + op->sym->cipher.data.offset); + dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *, + op->sym->cipher.data.offset); + } + + if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT) + status = 
process_libcrypto_auth_encryption_gcm( + src, srclen, aad, aadlen, iv, ivlen, + sess->cipher.key.data, dst, tag, + sess->cipher.ctx, sess->cipher.evp_algo); + else + status = process_libcrypto_auth_decryption_gcm( + src, srclen, aad, aadlen, iv, ivlen, + sess->cipher.key.data, dst, tag, + sess->cipher.ctx, sess->cipher.evp_algo); + + if (status != 0) { + if (status == (-EFAULT) && + sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) + op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; + else + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + } +} + +/** Process cipher operation */ +static void +process_libcrypto_cipher_op + (struct rte_crypto_op *op, struct libcrypto_session *sess, + struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst) +{ + uint8_t *src, *dst, *iv; + int srclen, status; + + srclen = op->sym->cipher.data.length; + src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *, + op->sym->cipher.data.offset); + dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *, + op->sym->cipher.data.offset); + + iv = op->sym->cipher.iv.data; + + if (sess->cipher.mode == LIBCRYPTO_CIPHER_LIB) + if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT) + status = process_libcrypto_cipher_encrypt(src, dst, iv, + sess->cipher.key.data, srclen, + sess->cipher.ctx, sess->cipher.evp_algo); + else + status = process_libcrypto_cipher_decrypt(src, dst, iv, + sess->cipher.key.data, srclen, + sess->cipher.ctx, sess->cipher.evp_algo); + else + status = process_libcrypto_cipher_des3ctr(src, dst, iv, + sess->cipher.key.data, srclen, sess->cipher.ctx); + + if (status != 0) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; +} + +/** Process auth operation */ +static void +process_libcrypto_auth_op + (struct rte_crypto_op *op, struct libcrypto_session *sess, + struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst) +{ + uint8_t *src, *dst; + int srclen, status; + + srclen = op->sym->auth.data.length; + src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *, + op->sym->auth.data.offset); + + if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) + dst = (uint8_t *)rte_pktmbuf_append(mbuf_src, + op->sym->auth.digest.length); + else { + dst = op->sym->auth.digest.data; + if (dst == NULL) + dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *, + op->sym->auth.data.offset + + op->sym->auth.data.length); + } + + switch (sess->auth.mode) { + case LIBCRYPTO_AUTH_AS_AUTH: + status = process_libcrypto_auth(src, dst, + NULL, NULL, srclen, + sess->auth.auth.ctx, sess->auth.auth.evp_algo); + break; + case LIBCRYPTO_AUTH_AS_HMAC: + status = process_libcrypto_auth_hmac(src, dst, + NULL, sess->auth.hmac.pkey, srclen, + sess->auth.hmac.ctx, sess->auth.hmac.evp_algo); + break; + default: + status = -1; + break; + } + + if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) { + if (memcmp(dst, op->sym->auth.digest.data, + op->sym->auth.digest.length) != 0) { + op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; + } + /* Trim area used for digest from mbuf. */ + rte_pktmbuf_trim(mbuf_src, + op->sym->auth.digest.length); + } + + if (status != 0) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; +} + +/** Process crypto operation for mbuf */ +static int +process_op(const struct libcrypto_qp *qp, struct rte_crypto_op *op, + struct libcrypto_session *sess) +{ + struct rte_mbuf *msrc, *mdst; + int retval; + + msrc = op->sym->m_src; + mdst = op->sym->m_dst ? 
op->sym->m_dst : op->sym->m_src; + + op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + + switch (sess->chain_order) { + case LIBCRYPTO_CHAIN_ONLY_CIPHER: + process_libcrypto_cipher_op(op, sess, msrc, mdst); + break; + case LIBCRYPTO_CHAIN_ONLY_AUTH: + process_libcrypto_auth_op(op, sess, msrc, mdst); + break; + case LIBCRYPTO_CHAIN_CIPHER_AUTH: + process_libcrypto_cipher_op(op, sess, msrc, mdst); + process_libcrypto_auth_op(op, sess, mdst, mdst); + break; + case LIBCRYPTO_CHAIN_AUTH_CIPHER: + process_libcrypto_auth_op(op, sess, msrc, mdst); + process_libcrypto_cipher_op(op, sess, msrc, mdst); + break; + case LIBCRYPTO_CHAIN_COMBINED: + process_libcrypto_combined_op(op, sess, msrc, mdst); + break; + default: + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + break; + } + + /* Free session if a session-less crypto op */ + if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) { + libcrypto_reset_session(sess); + memset(sess, 0, sizeof(struct libcrypto_session)); + rte_mempool_put(qp->sess_mp, op->sym->session); + op->sym->session = NULL; + } + + + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + + if (op->status != RTE_CRYPTO_OP_STATUS_ERROR) + retval = rte_ring_enqueue(qp->processed_ops, (void *)op); + else + retval = -1; + + return retval; +} + +/* + *------------------------------------------------------------------------------ + * PMD Framework + *------------------------------------------------------------------------------ + */ + +/** Enqueue burst */ +static uint16_t +libcrypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct libcrypto_session *sess; + struct libcrypto_qp *qp = queue_pair; + int i, retval; + + for (i = 0; i < nb_ops; i++) { + sess = get_session(qp, ops[i]); + if (unlikely(sess == NULL)) + goto enqueue_err; + + retval = process_op(qp, ops[i], sess); + if (unlikely(retval < 0)) + goto enqueue_err; + } + + qp->stats.enqueued_count += i; + return i; + +enqueue_err: + qp->stats.enqueue_err_count++; + return i; +} + +/** Dequeue burst */ +static uint16_t +libcrypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct libcrypto_qp *qp = queue_pair; + + unsigned int nb_dequeued = 0; + + nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops, + (void **)ops, nb_ops); + qp->stats.dequeued_count += nb_dequeued; + + return nb_dequeued; +} + +/** Create LIBCRYPTO crypto device */ +static int +cryptodev_libcrypto_create(const char *name, + struct rte_crypto_vdev_init_params *init_params) +{ + struct rte_cryptodev *dev; + char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN]; + struct libcrypto_private *internals; + + /* create a unique device name */ + if (create_unique_device_name(crypto_dev_name, + RTE_CRYPTODEV_NAME_MAX_LEN) != 0) { + LIBCRYPTO_LOG_ERR("failed to create unique cryptodev name"); + return -EINVAL; + } + + dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name, + sizeof(struct libcrypto_private), init_params->socket_id); + if (dev == NULL) { + LIBCRYPTO_LOG_ERR("failed to create cryptodev vdev"); + goto init_error; + } + + dev->dev_type = RTE_CRYPTODEV_LIBCRYPTO_PMD; + dev->dev_ops = rte_libcrypto_pmd_ops; + + /* register rx/tx burst functions for data path */ + dev->dequeue_burst = libcrypto_pmd_dequeue_burst; + dev->enqueue_burst = libcrypto_pmd_enqueue_burst; + + dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | + RTE_CRYPTODEV_FF_CPU_AESNI; + + /* Set vector instructions mode 
supported */ + internals = dev->data->dev_private; + + internals->max_nb_qpairs = init_params->max_nb_queue_pairs; + internals->max_nb_sessions = init_params->max_nb_sessions; + + return 0; + +init_error: + LIBCRYPTO_LOG_ERR("driver %s: cryptodev_libcrypto_create failed", name); + + cryptodev_libcrypto_uninit(crypto_dev_name); + return -EFAULT; +} + +/** Initialise LIBCRYPTO crypto device */ +static int +cryptodev_libcrypto_init(const char *name, + const char *input_args) +{ + struct rte_crypto_vdev_init_params init_params = { + RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS, + RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS, + rte_socket_id() + }; + + rte_cryptodev_parse_vdev_init_params(&init_params, input_args); + + RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name, + init_params.socket_id); + RTE_LOG(INFO, PMD, " Max number of queue pairs = %d\n", + init_params.max_nb_queue_pairs); + RTE_LOG(INFO, PMD, " Max number of sessions = %d\n", + init_params.max_nb_sessions); + + return cryptodev_libcrypto_create(name, &init_params); +} + +/** Uninitialise LIBCRYPTO crypto device */ +static int +cryptodev_libcrypto_uninit(const char *name) +{ + if (name == NULL) + return -EINVAL; + + RTE_LOG(INFO, PMD, + "Closing LIBCRYPTO crypto device %s on numa socket %u\n", + name, rte_socket_id()); + + return 0; +} + +static struct rte_driver cryptodev_libcrypto_pmd_drv = { + .type = PMD_VDEV, + .init = cryptodev_libcrypto_init, + .uninit = cryptodev_libcrypto_uninit +}; + +PMD_REGISTER_DRIVER(cryptodev_libcrypto_pmd_drv, CRYPTODEV_NAME_LIBCRYPTO_PMD); +DRIVER_REGISTER_PARAM_STRING(CRYPTODEV_NAME_LIBCRYPTO_PMD, + "max_nb_queue_pairs= " + "max_nb_sessions= " + "socket_id="); diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c b/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c new file mode 100644 index 0000000..b5d7bd5 --- /dev/null +++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c @@ -0,0 +1,708 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2016 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include + +#include +#include +#include + +#include "rte_libcrypto_pmd_private.h" + + +static const struct rte_cryptodev_capabilities libcrypto_pmd_capabilities[] = { + { /* MD5 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5_HMAC, + .block_size = 64, + .key_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* MD5 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA1 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, + .block_size = 64, + .key_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA1 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA224 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, + .block_size = 64, + .key_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA224 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA256 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, + .block_size = 64, + .key_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA256 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA384 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, + .block_size = 128, + .key_size = { + .min = 128, + .max = 128, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA384 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384, + .block_size = 128, + .key_size = { + .min = 0, + .max = 0, + .increment 
= 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA512 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, + .block_size = 128, + .key_size = { + .min = 128, + .max = 128, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* SHA512 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512, + .block_size = 128, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { /* AES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_CBC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* AES CTR */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_CTR, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* AES GCM (AUTH) */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_GCM, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { + .min = 8, + .max = 12, + .increment = 4 + } + }, } + }, } + }, + { /* AES GCM (CIPHER) */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_GCM, + .block_size = 16, + .key_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { + .min = 12, + .max = 16, + .increment = 4 + } + }, } + }, } + }, + { /* AES GMAC (AUTH) */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_GMAC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { + .min = 8, + .max = 65532, + .increment = 4 + } + }, } + }, } + }, + { /* 3DES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_3DES_CBC, + .block_size = 8, + .key_size = { + .min = 16, + .max = 24, + .increment = 8 + }, + .iv_size = { + .min = 8, + .max = 8, + .increment = 0 + } + }, } + }, } + }, + { /* 3DES CTR */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_3DES_CTR, + .block_size = 8, + .key_size = { + .min = 16, + .max = 24, + .increment = 8 + }, + .iv_size = { + .min = 8, + .max = 8, + .increment = 0 + } + }, } + }, } + }, + + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + + +/** Configure device */ +static int +libcrypto_pmd_config(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + +/** Start device */ +static int +libcrypto_pmd_start(__rte_unused struct rte_cryptodev *dev) +{ + 
return 0; +} + +/** Stop device */ +static void +libcrypto_pmd_stop(__rte_unused struct rte_cryptodev *dev) +{ +} + +/** Close device */ +static int +libcrypto_pmd_close(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + + +/** Get device statistics */ +static void +libcrypto_pmd_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct libcrypto_qp *qp = dev->data->queue_pairs[qp_id]; + + stats->enqueued_count += qp->stats.enqueued_count; + stats->dequeued_count += qp->stats.dequeued_count; + + stats->enqueue_err_count += qp->stats.enqueue_err_count; + stats->dequeue_err_count += qp->stats.dequeue_err_count; + } +} + +/** Reset device statistics */ +static void +libcrypto_pmd_stats_reset(struct rte_cryptodev *dev) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct libcrypto_qp *qp = dev->data->queue_pairs[qp_id]; + + memset(&qp->stats, 0, sizeof(qp->stats)); + } +} + + +/** Get device info */ +static void +libcrypto_pmd_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *dev_info) +{ + struct libcrypto_private *internals = dev->data->dev_private; + + if (dev_info != NULL) { + dev_info->dev_type = dev->dev_type; + dev_info->feature_flags = dev->feature_flags; + dev_info->capabilities = libcrypto_pmd_capabilities; + dev_info->max_nb_queue_pairs = internals->max_nb_qpairs; + dev_info->sym.max_nb_sessions = internals->max_nb_sessions; + } +} + +/** Release queue pair */ +static int +libcrypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id) +{ + if (dev->data->queue_pairs[qp_id] != NULL) { + rte_free(dev->data->queue_pairs[qp_id]); + dev->data->queue_pairs[qp_id] = NULL; + } + return 0; +} + +/** set a unique name for the queue pair based on it's name, dev_id and qp_id */ +static int +libcrypto_pmd_qp_set_unique_name(struct rte_cryptodev *dev, + struct libcrypto_qp *qp) +{ + unsigned int n = snprintf(qp->name, sizeof(qp->name), + "libcrypto_pmd_%u_qp_%u", + dev->data->dev_id, qp->id); + + if (n > sizeof(qp->name)) + return -1; + + return 0; +} + + +/** Create a ring to place processed operations on */ +static struct rte_ring * +libcrypto_pmd_qp_create_processed_ops_ring(struct libcrypto_qp *qp, + unsigned int ring_size, int socket_id) +{ + struct rte_ring *r; + + r = rte_ring_lookup(qp->name); + if (r) { + if (r->prod.size >= ring_size) { + LIBCRYPTO_LOG_INFO( + "Reusing existing ring %s for processed ops", + qp->name); + return r; + } + + LIBCRYPTO_LOG_ERR( + "Unable to reuse existing ring %s for processed ops", + qp->name); + return NULL; + } + + return rte_ring_create(qp->name, ring_size, socket_id, + RING_F_SP_ENQ | RING_F_SC_DEQ); +} + + +/** Setup a queue pair */ +static int +libcrypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct libcrypto_qp *qp = NULL; + + /* Free memory prior to re-allocation if needed. */ + if (dev->data->queue_pairs[qp_id] != NULL) + libcrypto_pmd_qp_release(dev, qp_id); + + /* Allocate the queue pair data structure. 
*/ + qp = rte_zmalloc_socket("LIBCRYPTO PMD Queue Pair", sizeof(*qp), + RTE_CACHE_LINE_SIZE, socket_id); + if (qp == NULL) + return -ENOMEM; + + qp->id = qp_id; + dev->data->queue_pairs[qp_id] = qp; + + if (libcrypto_pmd_qp_set_unique_name(dev, qp)) + goto qp_setup_cleanup; + + qp->processed_ops = libcrypto_pmd_qp_create_processed_ops_ring(qp, + qp_conf->nb_descriptors, socket_id); + if (qp->processed_ops == NULL) + goto qp_setup_cleanup; + + qp->sess_mp = dev->data->session_pool; + + memset(&qp->stats, 0, sizeof(qp->stats)); + + return 0; + +qp_setup_cleanup: + if (qp) + rte_free(qp); + + return -1; +} + +/** Start queue pair */ +static int +libcrypto_pmd_qp_start(__rte_unused struct rte_cryptodev *dev, + __rte_unused uint16_t queue_pair_id) +{ + return -ENOTSUP; +} + +/** Stop queue pair */ +static int +libcrypto_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev, + __rte_unused uint16_t queue_pair_id) +{ + return -ENOTSUP; +} + +/** Return the number of allocated queue pairs */ +static uint32_t +libcrypto_pmd_qp_count(struct rte_cryptodev *dev) +{ + return dev->data->nb_queue_pairs; +} + +/** Returns the size of the session structure */ +static unsigned +libcrypto_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct libcrypto_session); +} + +/** Configure the session from a crypto xform chain */ +static void * +libcrypto_pmd_session_configure(struct rte_cryptodev *dev __rte_unused, + struct rte_crypto_sym_xform *xform, void *sess) +{ + if (unlikely(sess == NULL)) { + LIBCRYPTO_LOG_ERR("invalid session struct"); + return NULL; + } + + if (libcrypto_set_session_parameters( + sess, xform) != 0) { + LIBCRYPTO_LOG_ERR("failed configure session parameters"); + return NULL; + } + + return sess; +} + + +/** Clear the memory of session so it doesn't leave key material behind */ +static void +libcrypto_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess) +{ + /* + * Current just resetting the whole data structure, need to investigate + * whether a more selective reset of key would be more performant + */ + if (sess) { + libcrypto_reset_session(sess); + memset(sess, 0, sizeof(struct libcrypto_session)); + } +} + +struct rte_cryptodev_ops libcrypto_pmd_ops = { + .dev_configure = libcrypto_pmd_config, + .dev_start = libcrypto_pmd_start, + .dev_stop = libcrypto_pmd_stop, + .dev_close = libcrypto_pmd_close, + + .stats_get = libcrypto_pmd_stats_get, + .stats_reset = libcrypto_pmd_stats_reset, + + .dev_infos_get = libcrypto_pmd_info_get, + + .queue_pair_setup = libcrypto_pmd_qp_setup, + .queue_pair_release = libcrypto_pmd_qp_release, + .queue_pair_start = libcrypto_pmd_qp_start, + .queue_pair_stop = libcrypto_pmd_qp_stop, + .queue_pair_count = libcrypto_pmd_qp_count, + + .session_get_size = libcrypto_pmd_session_get_size, + .session_configure = libcrypto_pmd_session_configure, + .session_clear = libcrypto_pmd_session_clear +}; + +struct rte_cryptodev_ops *rte_libcrypto_pmd_ops = &libcrypto_pmd_ops; diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h b/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h new file mode 100644 index 0000000..dbef57f --- /dev/null +++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h @@ -0,0 +1,174 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2016 Intel Corporation. All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _LIBCRYPTO_PMD_PRIVATE_H_ +#define _LIBCRYPTO_PMD_PRIVATE_H_ + +#include +#include + + +#define LIBCRYPTO_LOG_ERR(fmt, args...) \ + RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \ + __func__, __LINE__, ## args) + +#ifdef RTE_LIBRTE_LIBCRYPTO_DEBUG +#define LIBCRYPTO_LOG_INFO(fmt, args...) \ + RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \ + __func__, __LINE__, ## args) + +#define LIBCRYPTO_LOG_DBG(fmt, args...) \ + RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \ + __func__, __LINE__, ## args) +#else +#define LIBCRYPTO_LOG_INFO(fmt, args...) +#define LIBCRYPTO_LOG_DBG(fmt, args...) 
+#endif
+
+
+/** LIBCRYPTO operation order mode enumerator */
+enum libcrypto_chain_order {
+	LIBCRYPTO_CHAIN_ONLY_CIPHER,
+	LIBCRYPTO_CHAIN_ONLY_AUTH,
+	LIBCRYPTO_CHAIN_CIPHER_AUTH,
+	LIBCRYPTO_CHAIN_AUTH_CIPHER,
+	LIBCRYPTO_CHAIN_COMBINED,
+	LIBCRYPTO_CHAIN_NOT_SUPPORTED
+};
+
+/** LIBCRYPTO cipher mode enumerator */
+enum libcrypto_cipher_mode {
+	LIBCRYPTO_CIPHER_LIB,
+	LIBCRYPTO_CIPHER_DES3CTR,
+};
+
+/** LIBCRYPTO auth mode enumerator */
+enum libcrypto_auth_mode {
+	LIBCRYPTO_AUTH_AS_AUTH,
+	LIBCRYPTO_AUTH_AS_HMAC,
+};
+
+/** private data structure for each LIBCRYPTO crypto device */
+struct libcrypto_private {
+	unsigned int max_nb_qpairs;
+	/**< Max number of queue pairs */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions */
+};
+
+/** LIBCRYPTO crypto queue pair */
+struct libcrypto_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_ring *processed_ops;
+	/**< Ring for processed operations */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+/** LIBCRYPTO crypto private session structure */
+struct libcrypto_session {
+	enum libcrypto_chain_order chain_order;
+	/**< chain order mode */
+
+	/** Cipher Parameters */
+	struct {
+		enum rte_crypto_cipher_operation direction;
+		/**< cipher operation direction */
+		enum libcrypto_cipher_mode mode;
+		/**< cipher operation mode */
+		enum rte_crypto_cipher_algorithm algo;
+		/**< cipher algorithm */
+
+		struct {
+			uint8_t data[32];
+			/**< key data */
+			size_t length;
+			/**< key length in bytes */
+		} key;
+
+		const EVP_CIPHER *evp_algo;
+		/**< pointer to EVP algorithm function */
+		EVP_CIPHER_CTX *ctx;
+		/**< pointer to EVP context structure */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		enum rte_crypto_auth_operation operation;
+		/**< auth operation generate or verify */
+		enum libcrypto_auth_mode mode;
+		/**< auth operation mode */
+		enum rte_crypto_auth_algorithm algo;
+		/**< auth algorithm */
+
+		union {
+			struct {
+				const EVP_MD *evp_algo;
+				/**< pointer to EVP algorithm function */
+				EVP_MD_CTX *ctx;
+				/**< pointer to EVP context structure */
+			} auth;
+
+			struct {
+				EVP_PKEY *pkey;
+				/**< pointer to EVP key */
+				const EVP_MD *evp_algo;
+				/**< pointer to EVP algorithm function */
+				EVP_MD_CTX *ctx;
+				/**< pointer to EVP context structure */
+			} hmac;
+		};
+	} auth;
+
+} __rte_cache_aligned;
+
+/** Set and validate LIBCRYPTO crypto session parameters */
+extern int
+libcrypto_set_session_parameters(struct libcrypto_session *sess,
+		const struct rte_crypto_sym_xform *xform);
+
+/** Reset LIBCRYPTO crypto session parameters */
+extern void
+libcrypto_reset_session(struct libcrypto_session *sess);
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_libcrypto_pmd_ops;
+
+#endif /* _LIBCRYPTO_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map b/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map
new file mode 100644
index 0000000..cc5829e
--- /dev/null
+++ b/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map
@@ -0,0 +1,3 @@
+DPDK_16.11 {
+	local: *;
+};
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7fb5f6e..3f0fc2e 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -56,6 +56,8 @@ extern "C" {
 /**< AES-NI Multi buffer PMD device name */
 #define CRYPTODEV_NAME_AESNI_GCM_PMD	crypto_aesni_gcm
 /**< AES-NI GCM PMD device name */
+#define CRYPTODEV_NAME_LIBCRYPTO_PMD	cryptodev_libcrypto
+/**< OpenSSL Crypto PMD device name */
 #define CRYPTODEV_NAME_QAT_SYM_PMD	crypto_qat
 /**< Intel QAT Symmetric Crypto PMD device name */
 #define CRYPTODEV_NAME_SNOW3G_PMD	crypto_snow3g
@@ -71,6 +73,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_QAT_SYM_PMD,	/**< QAT PMD Symmetric Crypto */
 	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
+	RTE_CRYPTODEV_LIBCRYPTO_PMD,	/**< LibCrypto PMD */
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 1a0095b..e1b9e64 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -131,16 +131,17 @@ endif # $(CONFIG_RTE_LIBRTE_VHOST)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += -lrte_pmd_vmxnet3_uio
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -lrte_pmd_aesni_mb
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += -lrte_pmd_aesni_gcm -lcrypto
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -lrte_pmd_aesni_gcm -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO)   += -lrte_pmd_libcrypto -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += -lrte_pmd_qat -lcrypto
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lrte_pmd_snow3g
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lrte_pmd_kasumi
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)         += -lrte_pmd_qat -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)      += -lrte_pmd_snow3g
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)      += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.5.0
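
For readers new to the cryptodev API, the minimal sketch below shows how an application might exercise the queue-pair and session ops added above, using the 16.11-era API in which rte_cryptodev_sym_session_create() still takes a device id and an xform chain. Device id 0, the AES-CBC transform, the key and the ring/mempool sizes are illustrative values only, not taken from the patch, and the libcrypto vdev is assumed to have been created already (e.g. via --vdev on the EAL command line).

/* Illustrative application-side usage; not part of the patch. */
#include <rte_cryptodev.h>
#include <rte_crypto.h>

static uint8_t example_key[16]; /* hypothetical AES-128 key */

static int
example_dev_setup(uint8_t dev_id, int socket_id)
{
	struct rte_cryptodev_config conf = {
		.socket_id = socket_id,
		.nb_queue_pairs = 1,
		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
	struct rte_crypto_sym_xform cipher_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = example_key, .length = sizeof(example_key) },
		},
	};
	struct rte_cryptodev_sym_session *sess;

	if (rte_cryptodev_configure(dev_id, &conf) < 0)
		return -1;

	/* ends up in libcrypto_pmd_qp_setup() through the ops table */
	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, socket_id) < 0)
		return -1;

	/* ends up in libcrypto_pmd_session_configure() through the ops table */
	sess = rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
	if (sess == NULL)
		return -1;

	return rte_cryptodev_start(dev_id);
}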
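
The cipher half of struct libcrypto_session (evp_algo, ctx, key) maps onto the standard OpenSSL EVP_ call pattern that the commit message refers to. The fragment below only sketches that pattern against the session layout above; the PMD's real per-operation code is in rte_libcrypto_pmd.c and is not reproduced here. It assumes sess->cipher.ctx was allocated with EVP_CIPHER_CTX_new() and sess->cipher.evp_algo set (e.g. to EVP_aes_128_cbc()) when the session was configured.

#include <openssl/evp.h>

/* Encrypt srclen bytes from src into dst; returns bytes written or -1. */
static int
example_evp_encrypt(struct libcrypto_session *sess, const uint8_t *iv,
		const uint8_t *src, int srclen, uint8_t *dst)
{
	int outl = 0, tmplen = 0;

	if (EVP_EncryptInit_ex(sess->cipher.ctx, sess->cipher.evp_algo,
			NULL, sess->cipher.key.data, iv) != 1)
		return -1;
	if (EVP_EncryptUpdate(sess->cipher.ctx, dst, &outl, src, srclen) != 1)
		return -1;
	if (EVP_EncryptFinal_ex(sess->cipher.ctx, dst + outl, &tmplen) != 1)
		return -1;

	return outl + tmplen;
}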
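
Similarly, the hmac member of the auth union (EVP_PKEY plus EVP_MD_CTX) matches OpenSSL's EVP_DigestSign* interface: a key created with EVP_PKEY_new_mac_key(EVP_PKEY_HMAC, ...) is typically driven as sketched below. Again this is illustrative and not the patch's code.

#include <openssl/evp.h>

/* Compute an HMAC over data using the session's pkey/ctx/evp_algo fields. */
static int
example_evp_hmac(struct libcrypto_session *sess, const uint8_t *data,
		size_t len, uint8_t *digest, size_t *digest_len)
{
	if (EVP_DigestSignInit(sess->auth.hmac.ctx, NULL,
			sess->auth.hmac.evp_algo, NULL, sess->auth.hmac.pkey) != 1)
		return -1;
	if (EVP_DigestSignUpdate(sess->auth.hmac.ctx, data, len) != 1)
		return -1;
	if (EVP_DigestSignFinal(sess->auth.hmac.ctx, digest, digest_len) != 1)
		return -1;

	return 0;
}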
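
libcrypto_pmd_session_clear() above calls libcrypto_reset_session() before zeroing the session. That helper's body lives in rte_libcrypto_pmd.c and is not shown in this part of the patch; one plausible shape, assuming the EVP contexts and the HMAC key are heap-allocated with OpenSSL 1.0.x, is:

#include <openssl/evp.h>

/* Hypothetical cleanup; the PMD's actual libcrypto_reset_session() may differ. */
static void
example_reset_session(struct libcrypto_session *sess)
{
	EVP_CIPHER_CTX_free(sess->cipher.ctx);	/* NULL-safe */

	if (sess->auth.mode == LIBCRYPTO_AUTH_AS_HMAC) {
		EVP_PKEY_free(sess->auth.hmac.pkey);	/* NULL-safe */
		if (sess->auth.hmac.ctx)
			EVP_MD_CTX_destroy(sess->auth.hmac.ctx);
	} else if (sess->auth.auth.ctx) {
		EVP_MD_CTX_destroy(sess->auth.auth.ctx);
	}
}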