From mboxrd@z Thu Jan 1 00:00:00 1970
From: Declan Doherty <declan.doherty@intel.com>
To: dev@dpdk.org
Date: Tue, 10 Nov 2015 17:32:41 +0000
Message-Id: <1447176763-19303-9-git-send-email-declan.doherty@intel.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1447176763-19303-1-git-send-email-declan.doherty@intel.com>
References: <1447101259-18972-1-git-send-email-declan.doherty@intel.com> <1447176763-19303-1-git-send-email-declan.doherty@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v6 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
List-Id: patches and discussions about DPDK

This patch provides the initial implementation of the AES-NI multi-buffer based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD is dependent on Intel's multi-buffer library; see the whitepaper "Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors" (ref 1) for details on the library's design, and ref 2 to download the library itself.

This initial implementation is limited to supporting the chained operations of "hash then cipher" or "cipher then hash" for the following cipher and hash algorithms:

Cipher algorithms:
 - RTE_CRYPTO_CIPHER_AES128_CBC
 - RTE_CRYPTO_CIPHER_AES256_CBC
 - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
 - RTE_CRYPTO_AUTH_SHA1_HMAC
 - RTE_CRYPTO_AUTH_SHA256_HMAC
 - RTE_CRYPTO_AUTH_SHA512_HMAC
 - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Important note: because the multi-buffer library is designed for accelerating IPsec crypto operations, the digests generated by the HMAC functions are truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404 specifies that when HMAC-SHA-1 is used with IPsec the digest is truncated from 20 to 12 bytes.

Build instructions: to build DPDK with the AESNI_MB_PMD the user is required to download (ref 2) and compile the multi-buffer library on their system before building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where the multi-buffer library was extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must be set in config/common_linuxapp.

Current status: the PMD doesn't support crypto operations across chained mbufs, or cipher-only or hash-only operations.
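For illustration, a minimal sketch of the two-element xform chain the PMD expects for a "cipher then hash" operation (struct and enum names are as used in the patch below; the hmac_key/aes_key buffers and their lengths are illustrative placeholders, not part of the patch):

	/* Terminating auth xform: HMAC-SHA1 applied after ciphering */
	struct rte_crypto_xform auth_xform = {
		.type = RTE_CRYPTO_XFORM_AUTH,
		.next = NULL,	/* the PMD requires exactly two chained xforms */
		.auth = {
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = hmac_key, .length = 20 },
		}
	};

	/* Leading cipher xform: AES-CBC encryption, then hash */
	struct rte_crypto_xform cipher_xform = {
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.next = &auth_xform,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = aes_key, .length = 16 },	/* AES-128 */
		}
	};

aesni_mb_get_chain_order() in the patch below accepts exactly such two-element chains (CIPHER_HASH here, or the reverse HASH_CIPHER order) and rejects anything longer or shorter.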
ref 1: https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p
ref 2: https://downloadcenter.intel.com/download/22972

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
--- MAINTAINERS | 3 + config/common_bsdapp | 7 + config/common_linuxapp | 7 + doc/guides/cryptodevs/aesni_mb.rst | 76 +++ doc/guides/cryptodevs/index.rst | 1 + drivers/crypto/Makefile | 1 + drivers/crypto/aesni_mb/Makefile | 63 ++ drivers/crypto/aesni_mb/aesni_mb_ops.h | 210 +++++++ drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 669 +++++++++++++++++++++ drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 298 +++++++++ drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 229 +++++++ drivers/crypto/aesni_mb/rte_pmd_aesni_version.map | 3 + mk/rte.app.mk | 4 + 13 files changed, 1571 insertions(+) create mode 100644 doc/guides/cryptodevs/aesni_mb.rst create mode 100644 drivers/crypto/aesni_mb/Makefile create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
diff --git a/MAINTAINERS b/MAINTAINERS index 73d9578..2d5808c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -303,6 +303,9 @@ Null PMD M: Tetsuya Mukawa F: drivers/net/null/ +Crypto AES-NI Multi-Buffer PMD +M: Declan Doherty +F: drivers/crypto/aesni_mb/ Packet processing -----------------
diff --git a/config/common_bsdapp b/config/common_bsdapp index 0068b20..a18e817 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n # CONFIG_RTE_MAX_QAT_SESSIONS=200 + +# +# Compile PMD for AESNI backed device +# +CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n +CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n + # # Support NIC bypass logic # diff --git a/config/common_linuxapp b/config/common_linuxapp index b29d3dd..d9c8c5c 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -166,6 +166,13 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n # CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048 +# Compile PMD for AESNI backed device +# +CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n +CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n +CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8 +CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048 + # # Support NIC bypass logic # diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst new file mode 100644 index 0000000..826b632 --- /dev/null +++ b/doc/guides/cryptodevs/aesni_mb.rst @@ -0,0 +1,76 @@ +.. BSD LICENSE + Copyright(c) 2015 Intel Corporation. All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + * Neither the name of Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission.
+ + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +AES-NI Multi Buffer Crypto Poll Mode Driver +============================================ + + +The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver +support for utilising Intel's multi-buffer library; see the white paper +`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors +<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p>`_. + +The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc. + +Features +-------- + +AESNI MB PMD has support for: + +Cipher algorithms: + +* RTE_CRYPTO_SYM_CIPHER_AES128_CBC +* RTE_CRYPTO_SYM_CIPHER_AES256_CBC +* RTE_CRYPTO_SYM_CIPHER_AES512_CBC + +Hash algorithms: + +* RTE_CRYPTO_SYM_HASH_SHA1_HMAC +* RTE_CRYPTO_SYM_HASH_SHA256_HMAC +* RTE_CRYPTO_SYM_HASH_SHA512_HMAC + +Limitations +----------- + +* Chained mbufs are not supported. +* Hash only is not supported. +* Cipher only is not supported. +* Only in-place is currently supported (destination address is the same as source address). +* Only supports session-oriented API implementation (session-less APIs are not supported). +* Not performance tuned. + +Installation +------------ + +To build DPDK with the AESNI_MB_PMD the user is required to download the library +from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on +their system before building DPDK. The environment variable +AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted +and built the multi buffer library and finally set +CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst index 1c31697..8949fd0 100644 --- a/doc/guides/cryptodevs/index.rst +++ b/doc/guides/cryptodevs/index.rst @@ -39,4 +39,5 @@ Crypto Device Drivers :maxdepth: 2 :numbered: + aesni_mb qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile index f6aecea..d07ee96 100644 --- a/drivers/crypto/Makefile +++ b/drivers/crypto/Makefile @@ -31,6 +31,7 @@ include $(RTE_SDK)/mk/rte.vars.mk +DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile new file mode 100644 index 0000000..3bf83d1 --- /dev/null +++ b/drivers/crypto/aesni_mb/Makefile @@ -0,0 +1,63 @@ +# BSD LICENSE +# +# Copyright(c) 2015 Intel Corporation. All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),) +$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable") +endif + +# library name +LIB = librte_pmd_aesni_mb.a + +# build flags +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# library version +LIBABIVER := 1 + +# versioning export map +EXPORT_MAP := rte_pmd_aesni_version.map + +# external library include paths +CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH) +CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include + +# library source files +SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c + +# library dependencies +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h new file mode 100644 index 0000000..0c119bf --- /dev/null +++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h @@ -0,0 +1,210 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2015 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _AESNI_MB_OPS_H_ +#define _AESNI_MB_OPS_H_ + +#ifndef LINUX +#define LINUX +#endif + +#include +#include + +enum aesni_mb_vector_mode { + RTE_AESNI_MB_NOT_SUPPORTED = 0, + RTE_AESNI_MB_SSE, + RTE_AESNI_MB_AVX, + RTE_AESNI_MB_AVX2 +}; + +typedef void (*md5_one_block_t)(void *data, void *digest); + +typedef void (*sha1_one_block_t)(void *data, void *digest); +typedef void (*sha224_one_block_t)(void *data, void *digest); +typedef void (*sha256_one_block_t)(void *data, void *digest); +typedef void (*sha384_one_block_t)(void *data, void *digest); +typedef void (*sha512_one_block_t)(void *data, void *digest); + +typedef void (*aes_keyexp_128_t) + (void *key, void *enc_exp_keys, void *dec_exp_keys); +typedef void (*aes_keyexp_192_t) + (void *key, void *enc_exp_keys, void *dec_exp_keys); +typedef void (*aes_keyexp_256_t) + (void *key, void *enc_exp_keys, void *dec_exp_keys); + +typedef void (*aes_xcbc_expand_key_t) + (void *key, void *exp_k1, void *k2, void *k3); + +/** Multi-buffer library function pointer table */ +struct aesni_mb_ops { + struct { + init_mb_mgr_t init_mgr; + /**< Initialise scheduler */ + get_next_job_t get_next; + /**< Get next free job structure */ + submit_job_t submit; + /**< Submit job to scheduler */ + get_completed_job_t get_completed_job; + /**< Get completed job */ + flush_job_t flush_job; + /**< flush jobs from manager */ + } job; + /**< multi buffer manager functions */ + + struct { + struct { + md5_one_block_t md5; + /**< MD5 one block hash */ + sha1_one_block_t sha1; + /**< SHA1 one block hash */ + sha224_one_block_t sha224; + /**< SHA224 one block hash */ + sha256_one_block_t sha256; + /**< SHA256 one block hash */ + sha384_one_block_t sha384; + /**< SHA384 one block hash */ + sha512_one_block_t sha512; + /**< SHA512 one block hash */ + } one_block; + /**< one block hash functions */ + + struct { + aes_keyexp_128_t aes128; + /**< AES128 key expansions */ + aes_keyexp_192_t aes192; + /**< AES192 key expansions */ + aes_keyexp_256_t aes256; + /**< AES256 key expansions */ + + aes_xcbc_expand_key_t aes_xcbc; + /**< AES XCBC key expansions */ + } keyexp; + /**< Key expansion functions */ + } aux; + /**< Auxiliary functions */ +}; + + +static const struct aesni_mb_ops job_ops[] = { + [RTE_AESNI_MB_NOT_SUPPORTED] = { + .job = { + NULL + }, + .aux = { + .one_block = { + NULL + }, + .keyexp = { + NULL + } + } + }, + [RTE_AESNI_MB_SSE] = { + .job = { + init_mb_mgr_sse, + get_next_job_sse, + submit_job_sse, + get_completed_job_sse, + flush_job_sse + }, + .aux = { + .one_block = { + md5_one_block_sse, + sha1_one_block_sse, + sha224_one_block_sse, + sha256_one_block_sse, + sha384_one_block_sse, + sha512_one_block_sse + }, + .keyexp = { + aes_keyexp_128_sse, + aes_keyexp_192_sse, + aes_keyexp_256_sse, + aes_xcbc_expand_key_sse + } + } + }, + [RTE_AESNI_MB_AVX] = { + .job = { + init_mb_mgr_avx, + get_next_job_avx, + submit_job_avx, + get_completed_job_avx, + flush_job_avx + }, + .aux = { + .one_block = { + md5_one_block_avx, + 
sha1_one_block_avx, + sha224_one_block_avx, + sha256_one_block_avx, + sha384_one_block_avx, + sha512_one_block_avx + }, + .keyexp = { + aes_keyexp_128_avx, + aes_keyexp_192_avx, + aes_keyexp_256_avx, + aes_xcbc_expand_key_avx + } + } + }, + [RTE_AESNI_MB_AVX2] = { + .job = { + init_mb_mgr_avx2, + get_next_job_avx2, + submit_job_avx2, + get_completed_job_avx2, + flush_job_avx2 + }, + .aux = { + .one_block = { + md5_one_block_avx2, + sha1_one_block_avx2, + sha224_one_block_avx2, + sha256_one_block_avx2, + sha384_one_block_avx2, + sha512_one_block_avx2 + }, + .keyexp = { + aes_keyexp_128_avx2, + aes_keyexp_192_avx2, + aes_keyexp_256_avx2, + aes_xcbc_expand_key_avx2 + } + } + } +}; + + +#endif /* _AESNI_MB_OPS_H_ */ diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c new file mode 100644 index 0000000..d8ccf05 --- /dev/null +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c @@ -0,0 +1,669 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2015 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_aesni_mb_pmd_private.h" + +/** + * Global static parameter used to create a unique name for each AES-NI multi + * buffer crypto device. 
+ */ +static unsigned unique_name_id; + +static inline int +create_unique_device_name(char *name, size_t size) +{ + int ret; + + if (name == NULL) + return -EINVAL; + + ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD, + unique_name_id++); + if (ret < 0) + return ret; + return 0; +} + +typedef void (*hash_one_block_t)(void *data, void *digest); +typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys); + +/** + * Calculate the authentication pre-computes + * + * @param one_block_hash Function pointer to calculate digest on ipad/opad + * @param ipad Inner pad output byte array + * @param opad Outer pad output byte array + * @param hkey Authentication key + * @param hkey_len Authentication key length + * @param blocksize Block size of selected hash algo + */ +static void +calculate_auth_precomputes(hash_one_block_t one_block_hash, + uint8_t *ipad, uint8_t *opad, + uint8_t *hkey, uint16_t hkey_len, + uint16_t blocksize) +{ + unsigned i, length; + + uint8_t ipad_buf[blocksize] __rte_aligned(16); + uint8_t opad_buf[blocksize] __rte_aligned(16); + + /* Setup inner and outer pads */ + memset(ipad_buf, HMAC_IPAD_VALUE, blocksize); + memset(opad_buf, HMAC_OPAD_VALUE, blocksize); + + /* XOR hash key with inner and outer pads */ + length = hkey_len > blocksize ? blocksize : hkey_len; + + for (i = 0; i < length; i++) { + ipad_buf[i] ^= hkey[i]; + opad_buf[i] ^= hkey[i]; + } + + /* Compute partial hashes */ + (*one_block_hash)(ipad_buf, ipad); + (*one_block_hash)(opad_buf, opad); + + /* Clean up stack */ + memset(ipad_buf, 0, blocksize); + memset(opad_buf, 0, blocksize); +} + +/** Get xform chain order */ +static int +aesni_mb_get_chain_order(const struct rte_crypto_xform *xform) +{ + /* + * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained + * operations, all other options are invalid, so we must have exactly + * 2 xform structs chained together + */ + if (xform->next == NULL || xform->next->next != NULL) + return -1; + + if (xform->type == RTE_CRYPTO_XFORM_AUTH && + xform->next->type == RTE_CRYPTO_XFORM_CIPHER) + return HASH_CIPHER; + + if (xform->type == RTE_CRYPTO_XFORM_CIPHER && + xform->next->type == RTE_CRYPTO_XFORM_AUTH) + return CIPHER_HASH; + + return -1; +} + +/** Set session authentication parameters */ +static int +aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops, + struct aesni_mb_session *sess, + const struct rte_crypto_xform *xform) +{ + hash_one_block_t hash_oneblock_fn; + + if (xform->type != RTE_CRYPTO_XFORM_AUTH) { + MB_LOG_ERR("Crypto xform struct not of type auth"); + return -1; + } + + /* Set Authentication Parameters */ + if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) { + sess->auth.algo = AES_XCBC; + (*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data, + sess->auth.xcbc.k1_expanded, + sess->auth.xcbc.k2, sess->auth.xcbc.k3); + return 0; + } + + switch (xform->auth.algo) { + case RTE_CRYPTO_AUTH_MD5_HMAC: + sess->auth.algo = MD5; + hash_oneblock_fn = mb_ops->aux.one_block.md5; + break; + case RTE_CRYPTO_AUTH_SHA1_HMAC: + sess->auth.algo = SHA1; + hash_oneblock_fn = mb_ops->aux.one_block.sha1; + break; + case RTE_CRYPTO_AUTH_SHA224_HMAC: + sess->auth.algo = SHA_224; + hash_oneblock_fn = mb_ops->aux.one_block.sha224; + break; + case RTE_CRYPTO_AUTH_SHA256_HMAC: + sess->auth.algo = SHA_256; + hash_oneblock_fn = mb_ops->aux.one_block.sha256; + break; + case RTE_CRYPTO_AUTH_SHA384_HMAC: + sess->auth.algo = SHA_384; + hash_oneblock_fn = mb_ops->aux.one_block.sha384; + break; + case 
RTE_CRYPTO_AUTH_SHA512_HMAC: + sess->auth.algo = SHA_512; + hash_oneblock_fn = mb_ops->aux.one_block.sha512; + break; + default: + MB_LOG_ERR("Unsupported authentication algorithm selection"); + return -1; + } + + /* Calculate Authentication precomputes */ + calculate_auth_precomputes(hash_oneblock_fn, + sess->auth.pads.inner, sess->auth.pads.outer, + xform->auth.key.data, + xform->auth.key.length, + get_auth_algo_blocksize(sess->auth.algo)); + + return 0; +} + +/** Set session cipher parameters */ +static int +aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops, + struct aesni_mb_session *sess, + const struct rte_crypto_xform *xform) +{ + aes_keyexp_t aes_keyexp_fn; + + if (xform->type != RTE_CRYPTO_XFORM_CIPHER) { + MB_LOG_ERR("Crypto xform struct not of type cipher"); + return -1; + } + + /* Select cipher direction */ + switch (xform->cipher.op) { + case RTE_CRYPTO_CIPHER_OP_ENCRYPT: + sess->cipher.direction = ENCRYPT; + break; + case RTE_CRYPTO_CIPHER_OP_DECRYPT: + sess->cipher.direction = DECRYPT; + break; + default: + MB_LOG_ERR("Unsupported cipher operation parameter"); + return -1; + } + + /* Select cipher mode */ + switch (xform->cipher.algo) { + case RTE_CRYPTO_CIPHER_AES_CBC: + sess->cipher.mode = CBC; + break; + default: + MB_LOG_ERR("Unsupported cipher mode parameter"); + return -1; + } + + /* Check key length and choose key expansion function */ + switch (xform->cipher.key.length) { + case AES_128_BYTES: + sess->cipher.key_length_in_bytes = AES_128_BYTES; + aes_keyexp_fn = mb_ops->aux.keyexp.aes128; + break; + case AES_192_BYTES: + sess->cipher.key_length_in_bytes = AES_192_BYTES; + aes_keyexp_fn = mb_ops->aux.keyexp.aes192; + break; + case AES_256_BYTES: + sess->cipher.key_length_in_bytes = AES_256_BYTES; + aes_keyexp_fn = mb_ops->aux.keyexp.aes256; + break; + default: + MB_LOG_ERR("Unsupported cipher key length"); + return -1; + } + + /* Expanded cipher keys */ + (*aes_keyexp_fn)(xform->cipher.key.data, + sess->cipher.expanded_aes_keys.encode, + sess->cipher.expanded_aes_keys.decode); + + return 0; +} + +/** Parse crypto xform chain and set private session parameters */ +int +aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops, + struct aesni_mb_session *sess, + const struct rte_crypto_xform *xform) +{ + const struct rte_crypto_xform *auth_xform = NULL; + const struct rte_crypto_xform *cipher_xform = NULL; + + /* Select Crypto operation - hash then cipher / cipher then hash */ + switch (aesni_mb_get_chain_order(xform)) { + case HASH_CIPHER: + sess->chain_order = HASH_CIPHER; + auth_xform = xform; + cipher_xform = xform->next; + break; + case CIPHER_HASH: + sess->chain_order = CIPHER_HASH; + auth_xform = xform->next; + cipher_xform = xform; + break; + default: + MB_LOG_ERR("Unsupported operation chain order parameter"); + return -1; + } + + if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) { + MB_LOG_ERR("Invalid/unsupported authentication parameters"); + return -1; + } + + if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, + cipher_xform)) { + MB_LOG_ERR("Invalid/unsupported cipher parameters"); + return -1; + } + return 0; +} + +/** Get multi buffer session */ +static struct aesni_mb_session * +get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op) +{ + struct aesni_mb_session *sess; + + if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) { + if (unlikely(crypto_op->session->type != + RTE_CRYPTODEV_AESNI_MB_PMD)) + return NULL; + + sess = (struct aesni_mb_session *)crypto_op->session->_private; + } 
else { + struct rte_cryptodev_session *c_sess = NULL; + + if (rte_mempool_get(qp->sess_mp, (void **)&c_sess)) + return NULL; + + sess = (struct aesni_mb_session *)c_sess->_private; + + if (unlikely(aesni_mb_set_session_parameters(qp->ops, + sess, crypto_op->xform) != 0)) + return NULL; + } + + return sess; +} + +/** + * Process a crypto operation and complete a JOB_AES_HMAC job structure for + * submission to the multi buffer library for processing. + * + * @param qp queue pair + * @param job JOB_AES_HMAC structure to fill + * @param m mbuf to process + * + * @return + * - Completed JOB_AES_HMAC structure pointer on success + * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible + */ +static JOB_AES_HMAC * +process_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m, + struct rte_crypto_op *c_op, struct aesni_mb_session *session) +{ + JOB_AES_HMAC *job; + + job = (*qp->ops->job.get_next)(&qp->mb_mgr); + if (unlikely(job == NULL)) + return job; + + /* Set crypto operation */ + job->chain_order = session->chain_order; + + /* Set cipher parameters */ + job->cipher_direction = session->cipher.direction; + job->cipher_mode = session->cipher.mode; + + job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes; + job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode; + + + /* Set authentication parameters */ + job->hash_alg = session->auth.algo; + if (job->hash_alg == AES_XCBC) { + job->_k1_expanded = session->auth.xcbc.k1_expanded; + job->_k2 = session->auth.xcbc.k2; + job->_k3 = session->auth.xcbc.k3; + } else { + job->hashed_auth_key_xor_ipad = session->auth.pads.inner; + job->hashed_auth_key_xor_opad = session->auth.pads.outer; + } + + /* Mutable crypto operation parameters */ + + /* Set digest output location */ + if (job->cipher_direction == DECRYPT) { + job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m, + get_digest_byte_length(job->hash_alg)); + + if (job->auth_tag_output) + memset(job->auth_tag_output, 0, + get_digest_byte_length(job->hash_alg)); + else + return NULL; + } else { + job->auth_tag_output = c_op->digest.data; + } + + /* + * The multi-buffer library currently only supports returning a truncated + * digest length as specified in the relevant IPsec RFCs + */ + job->auth_tag_output_len_in_bytes = + get_truncated_digest_byte_length(job->hash_alg); + + /* Set IV parameters */ + job->iv = c_op->iv.data; + job->iv_len_in_bytes = c_op->iv.length; + + /* Data Parameter */ + job->src = rte_pktmbuf_mtod(m, uint8_t *); + job->dst = c_op->dst.m ?
+ rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) + + c_op->dst.offset : + rte_pktmbuf_mtod(m, uint8_t *) + + c_op->data.to_cipher.offset; + + job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset; + job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length; + + job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset; + job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length; + + /* Set user data to be crypto operation data struct */ + job->user_data = m; + job->user_data2 = c_op; + + return job; +} + +/** + * Process a completed job and return the rte_mbuf which the job processed + * + * @param job JOB_AES_HMAC job to process + * + * @return + * - Returns the processed mbuf, trimmed of the output digest used in + * verification of the supplied digest in the case of a HASH_CIPHER operation + * - Returns NULL on invalid job + */ +static struct rte_mbuf * +post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job) +{ + struct rte_mbuf *m; + struct rte_crypto_op *c_op; + + if (job->user_data == NULL) + return NULL; + + /* handle retrieved job */ + m = (struct rte_mbuf *)job->user_data; + c_op = (struct rte_crypto_op *)job->user_data2; + + /* set status as successful by default */ + c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + + /* check if job has been processed */ + if (unlikely(job->status != STS_COMPLETED)) { + c_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return m; + } else if (job->chain_order == HASH_CIPHER) { + /* Verify digest if required */ + if (memcmp(job->auth_tag_output, c_op->digest.data, + job->auth_tag_output_len_in_bytes) != 0) + c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; + + /* trim area used for digest from mbuf */ + rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg)); + } + + /* Free session if a session-less crypto op */ + if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) { + rte_mempool_put(qp->sess_mp, c_op->session); + c_op->session = NULL; + } + + return m; +} + +/** + * Process a completed JOB_AES_HMAC job and keep processing jobs until + * get_completed_job returns NULL + * + * @param qp Queue Pair to process + * @param job JOB_AES_HMAC job + * + * @return + * - Number of processed jobs + */ +static unsigned +handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job) +{ + struct rte_mbuf *m = NULL; + unsigned processed_jobs = 0; + + while (job) { + processed_jobs++; + m = post_process_mb_job(qp, job); + if (m) + rte_ring_enqueue(qp->processed_pkts, (void *)m); + else + qp->qp_stats.dequeue_err_count++; + + job = (*qp->ops->job.get_completed_job)(&qp->mb_mgr); + } + + return processed_jobs; +} + +static uint16_t +aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs, + uint16_t nb_bufs) +{ + struct rte_mbuf_offload *ol; + + struct aesni_mb_session *sess; + struct aesni_mb_qp *qp = queue_pair; + + JOB_AES_HMAC *job = NULL; + + int i, processed_jobs = 0; + + for (i = 0; i < nb_bufs; i++) { + ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO); + if (unlikely(ol == NULL)) { + qp->qp_stats.enqueue_err_count++; + goto flush_jobs; + } + + sess = get_session(qp, &ol->op.crypto); + if (unlikely(sess == NULL)) { + qp->qp_stats.enqueue_err_count++; + goto flush_jobs; + } + + job = process_crypto_op(qp, bufs[i], &ol->op.crypto, sess); + if (unlikely(job == NULL)) { + qp->qp_stats.enqueue_err_count++; + goto flush_jobs; + } + + /* Submit Job */ + job = (*qp->ops->job.submit)(&qp->mb_mgr); + + /* + * If submit returns a processed job then handle it, + * before submitting subsequent jobs + */ + if (job) + processed_jobs +=
handle_completed_jobs(qp, job); + } + + if (processed_jobs == 0) + goto flush_jobs; + else + qp->qp_stats.enqueued_count += processed_jobs; + return i; + +flush_jobs: + /* + * If we haven't processed any jobs in submit loop, then flush jobs + * queue to stop the output stalling + */ + job = (*qp->ops->job.flush_job)(&qp->mb_mgr); + if (job) + qp->qp_stats.enqueued_count += handle_completed_jobs(qp, job); + + return i; +} + +static uint16_t +aesni_mb_pmd_dequeue_burst(void *queue_pair, + struct rte_mbuf **bufs, uint16_t nb_bufs) +{ + struct aesni_mb_qp *qp = queue_pair; + + unsigned nb_dequeued; + + nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts, + (void **)bufs, nb_bufs); + qp->qp_stats.dequeued_count += nb_dequeued; + + return nb_dequeued; +} + + +static int cryptodev_aesni_mb_uninit(const char *name); + +static int +cryptodev_aesni_mb_create(const char *name, unsigned socket_id) +{ + struct rte_cryptodev *dev; + char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN]; + struct aesni_mb_private *internals; + enum aesni_mb_vector_mode vector_mode; + + /* Check CPU for support for AES instruction set */ + if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) { + MB_LOG_ERR("AES instructions not supported by CPU"); + return -EFAULT; + } + + /* Check CPU for supported vector instruction set */ + if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) + vector_mode = RTE_AESNI_MB_AVX2; + else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX)) + vector_mode = RTE_AESNI_MB_AVX; + else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1)) + vector_mode = RTE_AESNI_MB_SSE; + else { + MB_LOG_ERR("Vector instructions are not supported by CPU"); + return -EFAULT; + } + + /* create a unique device name */ + if (create_unique_device_name(crypto_dev_name, + RTE_CRYPTODEV_NAME_MAX_LEN) != 0) { + MB_LOG_ERR("failed to create unique cryptodev name"); + return -EINVAL; + } + + + dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name, + sizeof(struct aesni_mb_private), socket_id); + if (dev == NULL) { + MB_LOG_ERR("failed to create cryptodev vdev"); + goto init_error; + } + + dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD; + dev->dev_ops = rte_aesni_mb_pmd_ops; + + /* register rx/tx burst functions for data path */ + dev->dequeue_burst = aesni_mb_pmd_dequeue_burst; + dev->enqueue_burst = aesni_mb_pmd_enqueue_burst; + + /* Set vector instructions mode supported */ + internals = dev->data->dev_private; + + internals->vector_mode = vector_mode; + internals->max_nb_queue_pairs = RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS; + internals->max_nb_sessions = RTE_AESNI_MB_PMD_MAX_NB_SESSIONS; + + return dev->data->dev_id; +init_error: + MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name); + + cryptodev_aesni_mb_uninit(crypto_dev_name); + return -EFAULT; +} + + +static int +cryptodev_aesni_mb_init(const char *name, + const char *params __rte_unused) +{ + RTE_LOG(INFO, PMD, "Initialising %s\n", name); + + return cryptodev_aesni_mb_create(name, rte_socket_id()); +} + +static int +cryptodev_aesni_mb_uninit(const char *name) +{ + if (name == NULL) + return -EINVAL; + + RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n", + name, rte_socket_id()); + + return 0; +} + +static struct rte_driver cryptodev_aesni_mb_pmd_drv = { + .name = CRYPTODEV_NAME_AESNI_MB_PMD, + .type = PMD_VDEV, + .init = cryptodev_aesni_mb_init, + .uninit = cryptodev_aesni_mb_uninit +}; + +PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv); diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c new file 
mode 100644 index 0000000..96d22f6 --- /dev/null +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c @@ -0,0 +1,298 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2015 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include + +#include +#include +#include + +#include "rte_aesni_mb_pmd_private.h" + +/** Configure device */ +static int +aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + +/** Start device */ +static int +aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + +/** Stop device */ +static void +aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev) +{ +} + +/** Close device */ +static int +aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + + +/** Get device statistics */ +static void +aesni_mb_pmd_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id]; + + stats->enqueued_count += qp->qp_stats.enqueued_count; + stats->dequeued_count += qp->qp_stats.dequeued_count; + + stats->enqueue_err_count += qp->qp_stats.enqueue_err_count; + stats->dequeue_err_count += qp->qp_stats.dequeue_err_count; + } +} + +/** Reset device statistics */ +static void +aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id]; + + memset(&qp->qp_stats, 0, sizeof(qp->qp_stats)); + } +} + + +/** Get device info */ +static void +aesni_mb_pmd_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *dev_info) +{ + struct aesni_mb_private *internals = dev->data->dev_private; + + if (dev_info != NULL) { + dev_info->dev_type = dev->dev_type; + dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs; + dev_info->max_nb_sessions = internals->max_nb_sessions; + } +} + +/** Release queue pair */ +static int +aesni_mb_pmd_qp_release(struct 
rte_cryptodev *dev, uint16_t qp_id) +{ + if (dev->data->queue_pairs[qp_id] != NULL) { + rte_free(dev->data->queue_pairs[qp_id]); + dev->data->queue_pairs[qp_id] = NULL; + } + return 0; +} + +/** set a unique name for the queue pair based on its name, dev_id and qp_id */ +static int +aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev, + struct aesni_mb_qp *qp) +{ + unsigned n = snprintf(qp->name, sizeof(qp->name), + "aesni_mb_pmd_%u_qp_%u", + dev->data->dev_id, qp->id); + + if (n >= sizeof(qp->name)) + return -1; + + return 0; +} + +/** Create a ring to place processed packets on */ +static struct rte_ring * +aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp, + unsigned ring_size, int socket_id) +{ + struct rte_ring *r; + + r = rte_ring_lookup(qp->name); + if (r) { + if (r->prod.size >= ring_size) { + MB_LOG_INFO("Reusing existing ring %s for processed packets", + qp->name); + return r; + } + + MB_LOG_ERR("Unable to reuse existing ring %s for processed packets", + qp->name); + return NULL; + } + + return rte_ring_create(qp->name, ring_size, socket_id, + RING_F_SP_ENQ | RING_F_SC_DEQ); +} + +/** Setup a queue pair */ +static int +aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct aesni_mb_qp *qp = NULL; + struct aesni_mb_private *internals = dev->data->dev_private; + + /* Free memory prior to re-allocation if needed. */ + if (dev->data->queue_pairs[qp_id] != NULL) + aesni_mb_pmd_qp_release(dev, qp_id); + + /* Allocate the queue pair data structure. */ + qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp), + RTE_CACHE_LINE_SIZE, socket_id); + if (qp == NULL) + return (-ENOMEM); + + qp->id = qp_id; + dev->data->queue_pairs[qp_id] = qp; + + if (aesni_mb_pmd_qp_set_unique_name(dev, qp)) + goto qp_setup_cleanup; + + qp->ops = &job_ops[internals->vector_mode]; + + qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp, + qp_conf->nb_descriptors, socket_id); + if (qp->processed_pkts == NULL) + goto qp_setup_cleanup; + + qp->sess_mp = dev->data->session_pool; + + memset(&qp->qp_stats, 0, sizeof(qp->qp_stats)); + + /* Initialise multi-buffer manager */ + (*qp->ops->job.init_mgr)(&qp->mb_mgr); + + return 0; + +qp_setup_cleanup: + if (qp) + rte_free(qp); + + return -1; +} + +/** Start queue pair */ +static int +aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev, + __rte_unused uint16_t queue_pair_id) +{ + return -ENOTSUP; +} + +/** Stop queue pair */ +static int +aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev, + __rte_unused uint16_t queue_pair_id) +{ + return -ENOTSUP; +} + +/** Return the number of allocated queue pairs */ +static uint32_t +aesni_mb_pmd_qp_count(struct rte_cryptodev *dev) +{ + return dev->data->nb_queue_pairs; +} + +/** Returns the size of the aesni multi-buffer session structure */ +static unsigned +aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct aesni_mb_session); +} + +/** Configure an aesni multi-buffer session from a crypto xform chain */ +static void * +aesni_mb_pmd_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_xform *xform, void *sess) +{ + struct aesni_mb_private *internals = dev->data->dev_private; + + if (unlikely(sess == NULL)) { + MB_LOG_ERR("invalid session struct"); + return NULL; + } + + if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode], + sess, xform) != 0) { + MB_LOG_ERR("failed to configure session parameters"); + return NULL; + } +
+ + return sess; +} + +/** Clear the memory of a session so it doesn't leave key material behind */ +static void +aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess) +{ + /* + * Currently just resetting the whole data structure; need to investigate + * whether a more selective reset of the key material would be more performant + */ + if (sess) + memset(sess, 0, sizeof(struct aesni_mb_session)); +} + +struct rte_cryptodev_ops aesni_mb_pmd_ops = { + .dev_configure = aesni_mb_pmd_config, + .dev_start = aesni_mb_pmd_start, + .dev_stop = aesni_mb_pmd_stop, + .dev_close = aesni_mb_pmd_close, + + .stats_get = aesni_mb_pmd_stats_get, + .stats_reset = aesni_mb_pmd_stats_reset, + + .dev_infos_get = aesni_mb_pmd_info_get, + + .queue_pair_setup = aesni_mb_pmd_qp_setup, + .queue_pair_release = aesni_mb_pmd_qp_release, + .queue_pair_start = aesni_mb_pmd_qp_start, + .queue_pair_stop = aesni_mb_pmd_qp_stop, + .queue_pair_count = aesni_mb_pmd_qp_count, + + .session_get_size = aesni_mb_pmd_session_get_size, + .session_configure = aesni_mb_pmd_session_configure, + .session_clear = aesni_mb_pmd_session_clear +}; + +struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h new file mode 100644 index 0000000..2f98609 --- /dev/null +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h @@ -0,0 +1,229 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2015 Intel Corporation. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_ +#define _RTE_AESNI_MB_PMD_PRIVATE_H_ + +#include "aesni_mb_ops.h" + +#define MB_LOG_ERR(fmt, args...) \ + RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + CRYPTODEV_NAME_AESNI_MB_PMD, \ + __func__, __LINE__, ## args) + +#ifdef RTE_LIBRTE_AESNI_MB_DEBUG +#define MB_LOG_INFO(fmt, args...) \ + RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + CRYPTODEV_NAME_AESNI_MB_PMD, \ + __func__, __LINE__, ## args) + +#define MB_LOG_DBG(fmt, args...)
\ + RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ + CRYPTODEV_NAME_AESNI_MB_PMD, \ + __func__, __LINE__, ## args) +#else +#define MB_LOG_INFO(fmt, args...) +#define MB_LOG_DBG(fmt, args...) +#endif + +#define HMAC_IPAD_VALUE (0x36) +#define HMAC_OPAD_VALUE (0x5C) + +static const unsigned auth_blocksize[] = { + [MD5] = 64, + [SHA1] = 64, + [SHA_224] = 64, + [SHA_256] = 64, + [SHA_384] = 128, + [SHA_512] = 128, + [AES_XCBC] = 16, +}; + +/** + * Get the blocksize in bytes for a specified authentication algorithm + * + * @Note: this function will not return a valid value for an invalid + * authentication algorithm + */ +static inline unsigned +get_auth_algo_blocksize(JOB_HASH_ALG algo) +{ + return auth_blocksize[algo]; +} + +static const unsigned auth_truncated_digest_byte_lengths[] = { + [MD5] = 12, + [SHA1] = 12, + [SHA_224] = 14, + [SHA_256] = 16, + [SHA_384] = 24, + [SHA_512] = 32, + [AES_XCBC] = 12, +}; + +/** + * Get the IPsec specified truncated length in bytes of the HMAC digest for a + * specified authentication algorithm + * + * @Note: this function will not return a valid value for an invalid + * authentication algorithm + */ +static inline unsigned +get_truncated_digest_byte_length(JOB_HASH_ALG algo) +{ + return auth_truncated_digest_byte_lengths[algo]; +} + +static const unsigned auth_digest_byte_lengths[] = { + [MD5] = 16, + [SHA1] = 20, + [SHA_224] = 28, + [SHA_256] = 32, + [SHA_384] = 48, + [SHA_512] = 64, + [AES_XCBC] = 16, +}; + +/** + * Get the output digest size in bytes for a specified authentication algorithm + * + * @Note: this function will not return a valid value for an invalid + * authentication algorithm + */ +static inline unsigned +get_digest_byte_length(JOB_HASH_ALG algo) +{ + return auth_digest_byte_lengths[algo]; +} + + +/** private data structure for each virtual AESNI device */ +struct aesni_mb_private { + enum aesni_mb_vector_mode vector_mode; + /**< CPU vector instruction set mode */ + unsigned max_nb_queue_pairs; + /**< Max number of queue pairs supported by device */ + unsigned max_nb_sessions; + /**< Max number of sessions supported by device */ +}; + +/** AESNI Multi buffer queue pair */ +struct aesni_mb_qp { + uint16_t id; + /**< Queue Pair Identifier */ + char name[RTE_CRYPTODEV_NAME_LEN]; + /**< Unique Queue Pair Name */ + const struct aesni_mb_ops *ops; + /**< Vector mode dependent pointer table of the multi-buffer APIs */ + MB_MGR mb_mgr; + /**< Multi-buffer instance */ + struct rte_ring *processed_pkts; + /**< Ring for placing process packets */ + struct rte_mempool *sess_mp; + /**< Session Mempool */ + struct rte_cryptodev_stats qp_stats; + /**< Queue pair statistics */ +} __rte_cache_aligned; + + +/** AES-NI multi-buffer private session structure */ +struct aesni_mb_session { + JOB_CHAIN_ORDER chain_order; + + /** Cipher Parameters */ + struct { + /** Cipher direction - encrypt / decrypt */ + JOB_CIPHER_DIRECTION direction; + /** Cipher mode - CBC / Counter */ + JOB_CIPHER_MODE mode; + + uint64_t key_length_in_bytes; + + struct { + uint32_t encode[60] __rte_aligned(16); + /**< encode key */ + uint32_t decode[60] __rte_aligned(16); + /**< decode key */ + } expanded_aes_keys; + /**< Expanded AES keys - Allocating space to + * contain the maximum expanded key size which + * is 240 bytes for 256 bit AES, calculated by: + * ((block size (bytes)) * + * ((number of rounds) + 1)) + */ + } cipher; + + /** Authentication Parameters */ + struct { + JOB_HASH_ALG algo; /**< Authentication Algorithm */ + union { + struct { + uint8_t inner[128]
__rte_aligned(16); + /**< inner pad */ + uint8_t outer[128] __rte_aligned(16); + /**< outer pad */ + } pads; + /**< HMAC Authentication pads - + * allocating space for the maximum pad + * size supported which is 128 bytes for + * SHA512 + */ + + struct { + uint32_t k1_expanded[44] __rte_aligned(16); + /**< k1 (expanded key). */ + uint8_t k2[16] __rte_aligned(16); + /**< k2. */ + uint8_t k3[16] __rte_aligned(16); + /**< k3. */ + } xcbc; + /**< Expanded XCBC authentication keys */ + }; + } auth; +} __rte_cache_aligned; + + +/** + * Parse crypto xform chain and set private session parameters + */ +extern int +aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops, + struct aesni_mb_session *sess, + const struct rte_crypto_xform *xform); + + +/** device specific operations function pointer structure */ +extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops; + + + +#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map new file mode 100644 index 0000000..ad607bb --- /dev/null +++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map @@ -0,0 +1,3 @@ +DPDK_2.2 { + local: *; +};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk index cfcb064..4a660e6 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += -lrte_pmd_qat -lcrypto +# AESNI MULTI BUFFER is dependent on the IPSec_MB library +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -lrte_pmd_aesni_mb +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB + endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB) endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS -- 2.4.3
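As a closing usage note, here is a minimal sketch (not part of the patch) of the session lifecycle implied by the rte_cryptodev_ops table registered above; only names defined in this patch are used, the caller is assumed to supply an initialised aesni_mb device plus a session buffer of at least session_get_size() bytes, and a real application would go through the public rte_cryptodev session API rather than the ops table directly:

	static void
	session_lifecycle_sketch(struct rte_cryptodev *dev,
			struct rte_crypto_xform *xform_chain, void *sess_mem)
	{
		/* The framework sizes private session memory via session_get_size */
		unsigned sess_size = (*dev->dev_ops->session_get_size)(dev);

		/* session_configure parses the two-element xform chain into sess_mem */
		void *sess = (*dev->dev_ops->session_configure)(dev, xform_chain,
				sess_mem);

		/* ... attach the session to crypto ops and run enqueue/dequeue
		 * bursts on a configured queue pair ... */

		/* session_clear zeroes the session so no key material is left behind */
		if (sess != NULL)
			(*dev->dev_ops->session_clear)(dev, sess);
		(void)sess_size;
	}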