From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jianbo Liu
Date: Fri, 6 Jan 2017 10:45:37 +0800
To: zbigniew.bodek@caviumnetworks.com
Cc: dev@dpdk.org, pablo.de.lara.guarch@intel.com, Declan Doherty, Jerin Jacob
In-Reply-To: <1483551207-18236-4-git-send-email-zbigniew.bodek@caviumnetworks.com>
References: <1481077985-4224-2-git-send-email-zbigniew.bodek@caviumnetworks.com> <1483551207-18236-1-git-send-email-zbigniew.bodek@caviumnetworks.com> <1483551207-18236-4-git-send-email-zbigniew.bodek@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH v3 3/8] crypto/armv8: add PMD optimized for ARMv8 processors
List-Id: DPDK patches and discussions

On 5 January 2017 at 01:33, <zbigniew.bodek@caviumnetworks.com> wrote:
> From: Zbigniew Bodek
>
> This patch introduces a crypto poll mode driver
> using ARMv8 cryptographic extensions.
> CPU compatibility with this driver is detected at
> run-time and the virtual crypto device will not be
> created if the CPU doesn't provide:
> AES, SHA1, SHA2 and NEON.
>
> This PMD is optimized to provide a performance boost
> for chained crypto operation processing,
> such as encryption + HMAC generation or
> decryption + HMAC validation. In particular,
> cipher-only or hash-only operations are
> not provided.
>
> The driver currently supports AES-128-CBC
> in combination with SHA256 HMAC and SHA1 HMAC,
> and relies on the external armv8_crypto library:
> https://github.com/caviumnetworks/armv8_crypto

It's a standalone lib. I think you should change the following line in
its Makefile so that it does not depend on DPDK:
"include $(RTE_SDK)/mk/rte.lib.mk"

> This patch adds the driver's code only and does
> not include it in the build system.
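For reference, the run-time capability check described in the commit message
can be expressed with DPDK's rte_cpu_get_flag_enabled(). The sketch below is
only an illustration of that idea; the helper name and where it would be
called from in the probe path are assumptions, not necessarily how this patch
implements it:

#include <errno.h>
#include <rte_cpuflags.h>

/*
 * Hypothetical helper (illustration only): refuse to create the virtual
 * crypto device unless the CPU exposes all required ARMv8 features.
 * Flag names are the arm64 ones from rte_cpuflags.h.
 */
static int
armv8_crypto_cpu_supported(void)
{
        if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES) &&
            rte_cpu_get_flag_enabled(RTE_CPUFLAG_SHA1) &&
            rte_cpu_get_flag_enabled(RTE_CPUFLAG_SHA2) &&
            rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON))
                return 0;

        return -ENOTSUP; /* caller skips creation of the crypto vdev */
}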
> > Signed-off-by: Zbigniew Bodek > --- > drivers/crypto/armv8/Makefile | 73 ++ > drivers/crypto/armv8/rte_armv8_pmd.c | 926 +++++++++++++++++++++++++ > drivers/crypto/armv8/rte_armv8_pmd_ops.c | 369 ++++++++++ > drivers/crypto/armv8/rte_armv8_pmd_private.h | 211 ++++++ > drivers/crypto/armv8/rte_armv8_pmd_version.map | 3 + > 5 files changed, 1582 insertions(+) > create mode 100644 drivers/crypto/armv8/Makefile > create mode 100644 drivers/crypto/armv8/rte_armv8_pmd.c > create mode 100644 drivers/crypto/armv8/rte_armv8_pmd_ops.c > create mode 100644 drivers/crypto/armv8/rte_armv8_pmd_private.h > create mode 100644 drivers/crypto/armv8/rte_armv8_pmd_version.map > > diff --git a/drivers/crypto/armv8/Makefile b/drivers/crypto/armv8/Makefile > new file mode 100644 > index 0000000..dc5ea02 > --- /dev/null > +++ b/drivers/crypto/armv8/Makefile > @@ -0,0 +1,73 @@ > +# > +# BSD LICENSE > +# > +# Copyright (C) Cavium networks Ltd. 2017. > +# > +# Redistribution and use in source and binary forms, with or without > +# modification, are permitted provided that the following conditions > +# are met: > +# > +# * Redistributions of source code must retain the above copyright > +# notice, this list of conditions and the following disclaimer. > +# * Redistributions in binary form must reproduce the above copyright > +# notice, this list of conditions and the following disclaimer in > +# the documentation and/or other materials provided with the > +# distribution. > +# * Neither the name of Cavium networks nor the names of its > +# contributors may be used to endorse or promote products derived > +# from this software without specific prior written permission. > +# > +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > +# > + > +include $(RTE_SDK)/mk/rte.vars.mk > + > +ifneq ($(MAKECMDGOALS),clean) > +ifneq ($(MAKECMDGOALS),config) > +ifeq ($(ARMV8_CRYPTO_LIB_PATH),) > +$(error "Please define ARMV8_CRYPTO_LIB_PATH environment variable") > +endif > +endif > +endif > + > +# library name > +LIB = librte_pmd_armv8.a > + > +# build flags > +CFLAGS += -O3 > +CFLAGS += $(WERROR_FLAGS) > +CFLAGS += -L$(RTE_SDK)/../openssl -I$(RTE_SDK)/../openssl/include Is it really needed? 
> + > +# library version > +LIBABIVER := 1 > + > +# versioning export map > +EXPORT_MAP := rte_armv8_pmd_version.map > + > +# external library dependencies > +CFLAGS += -I$(ARMV8_CRYPTO_LIB_PATH) > +CFLAGS += -I$(ARMV8_CRYPTO_LIB_PATH)/asm/include > +LDLIBS += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto > + > +# library source files > +SRCS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += rte_armv8_pmd.c > +SRCS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += rte_armv8_pmd_ops.c > + > +# library dependencies > +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += lib/librte_eal > +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += lib/librte_mbuf > +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += lib/librte_mempool > +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += lib/librte_ring > +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += lib/librte_cryptodev > + > +include $(RTE_SDK)/mk/rte.lib.mk > diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c > new file mode 100644 > index 0000000..39433bb > --- /dev/null > +++ b/drivers/crypto/armv8/rte_armv8_pmd.c > @@ -0,0 +1,926 @@ > +/* > + * BSD LICENSE > + * > + * Copyright (C) Cavium networks Ltd. 2017. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Cavium networks nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#include > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "armv8_crypto_defs.h" > + > +#include "rte_armv8_pmd_private.h" > + > +static int cryptodev_armv8_crypto_uninit(const char *name); > + > +/** > + * Pointers to the supported combined mode crypto functions are stored > + * in the static tables. Each combined (chained) cryptographic operation > + * can be decribed by a set of numbers: > + * - order: order of operations (cipher, auth) or (auth, cipher) > + * - direction: encryption or decryption > + * - calg: cipher algorithm such as AES_CBC, AES_CTR, etc. > + * - aalg: authentication algorithm such as SHA1, SHA256, etc. 
> + * - keyl: cipher key length, for example 128, 192, 256 bits > + * > + * In order to quickly acquire each function pointer based on those numbers, > + * a hierarchy of arrays is maintained. The final level, 3D array is indexed > + * by the combined mode function parameters only (cipher algorithm, > + * authentication algorithm and key length). > + * > + * This gives 3 memory accesses to obtain a function pointer instead of > + * traversing the array manually and comparing function parameters on each loop. > + * > + * +--+CRYPTO_FUNC > + * +--+ENC| > + * +--+CA| > + * | +--+DEC > + * ORDER| > + * | +--+ENC > + * +--+AC| > + * +--+DEC > + * > + */ > + > +/** > + * 3D array type for ARM Combined Mode crypto functions pointers. > + * CRYPTO_CIPHER_MAX: max cipher ID number > + * CRYPTO_AUTH_MAX: max auth ID number > + * CRYPTO_CIPHER_KEYLEN_MAX: max key length ID number > + */ > +typedef const crypto_func_t > +crypto_func_tbl_t[CRYPTO_CIPHER_MAX][CRYPTO_AUTH_MAX][CRYPTO_CIPHER_KEYLEN_MAX]; > + > +/* Evaluate to key length definition */ > +#define KEYL(keyl) (ARMV8_CRYPTO_CIPHER_KEYLEN_ ## keyl) > + > +/* Local aliases for supported ciphers */ > +#define CIPH_AES_CBC RTE_CRYPTO_CIPHER_AES_CBC > +/* Local aliases for supported hashes */ > +#define AUTH_SHA1_HMAC RTE_CRYPTO_AUTH_SHA1_HMAC > +#define AUTH_SHA256 RTE_CRYPTO_AUTH_SHA256 > +#define AUTH_SHA256_HMAC RTE_CRYPTO_AUTH_SHA256_HMAC > + > +/** > + * Arrays containing pointers to particular cryptographic, > + * combined mode functions. > + * crypto_op_ca_encrypt: cipher (encrypt), authenticate > + * crypto_op_ca_decrypt: cipher (decrypt), authenticate > + * crypto_op_ac_encrypt: authenticate, cipher (encrypt) > + * crypto_op_ac_decrypt: authenticate, cipher (decrypt) > + */ > +static const crypto_func_tbl_t > +crypto_op_ca_encrypt = { > + /* [cipher alg][auth alg][key length] = crypto_function, */ > + [CIPH_AES_CBC][AUTH_SHA1_HMAC][KEYL(128)] = aes128cbc_sha1_hmac, > + [CIPH_AES_CBC][AUTH_SHA256_HMAC][KEYL(128)] = aes128cbc_sha256_hmac, > +}; > + > +static const crypto_func_tbl_t > +crypto_op_ca_decrypt = { > + NULL > +}; > + > +static const crypto_func_tbl_t > +crypto_op_ac_encrypt = { > + NULL > +}; > + > +static const crypto_func_tbl_t > +crypto_op_ac_decrypt = { > + /* [cipher alg][auth alg][key length] = crypto_function, */ > + [CIPH_AES_CBC][AUTH_SHA1_HMAC][KEYL(128)] = sha1_hmac_aes128cbc_dec, > + [CIPH_AES_CBC][AUTH_SHA256_HMAC][KEYL(128)] = sha256_hmac_aes128cbc_dec, > +}; > + > +/** > + * Arrays containing pointers to particular cryptographic function sets, > + * covering given cipher operation directions (encrypt, decrypt) > + * for each order of cipher and authentication pairs. > + */ > +static const crypto_func_tbl_t * > +crypto_cipher_auth[] = { > + &crypto_op_ca_encrypt, > + &crypto_op_ca_decrypt, > + NULL > +}; > + > +static const crypto_func_tbl_t * > +crypto_auth_cipher[] = { > + &crypto_op_ac_encrypt, > + &crypto_op_ac_decrypt, > + NULL > +}; > + > +/** > + * Top level array containing pointers to particular cryptographic > + * function sets, covering given order of chained operations. > + * crypto_cipher_auth: cipher first, authenticate after > + * crypto_auth_cipher: authenticate first, cipher after > + */ > +static const crypto_func_tbl_t ** > +crypto_chain_order[] = { > + crypto_cipher_auth, > + crypto_auth_cipher, > + NULL > +}; > + > +/** > + * Extract particular combined mode crypto function from the 3D array. 
> + */ > +#define CRYPTO_GET_ALGO(order, cop, calg, aalg, keyl) \ > +({ \ > + crypto_func_tbl_t *func_tbl = \ > + (crypto_chain_order[(order)])[(cop)]; \ > + \ > + ((*func_tbl)[(calg)][(aalg)][KEYL(keyl)]); \ > +}) > + > +/*----------------------------------------------------------------------------*/ > + > +/** > + * 2D array type for ARM key schedule functions pointers. > + * CRYPTO_CIPHER_MAX: max cipher ID number > + * CRYPTO_CIPHER_KEYLEN_MAX: max key length ID number > + */ > +typedef const crypto_key_sched_t > +crypto_key_sched_tbl_t[CRYPTO_CIPHER_MAX][CRYPTO_CIPHER_KEYLEN_MAX]; > + > +static const crypto_key_sched_tbl_t > +crypto_key_sched_encrypt = { > + /* [cipher alg][key length] = key_expand_func, */ > + [CIPH_AES_CBC][KEYL(128)] = aes128_key_sched_enc, > +}; > + > +static const crypto_key_sched_tbl_t > +crypto_key_sched_decrypt = { > + /* [cipher alg][key length] = key_expand_func, */ > + [CIPH_AES_CBC][KEYL(128)] = aes128_key_sched_dec, > +}; > + > +/** > + * Top level array containing pointers to particular key generation > + * function sets, covering given operation direction. > + * crypto_key_sched_encrypt: keys for encryption > + * crypto_key_sched_decrypt: keys for decryption > + */ > +static const crypto_key_sched_tbl_t * > +crypto_key_sched_dir[] = { > + &crypto_key_sched_encrypt, > + &crypto_key_sched_decrypt, > + NULL > +}; > + > +/** > + * Extract particular combined mode crypto function from the 3D array. > + */ > +#define CRYPTO_GET_KEY_SCHED(cop, calg, keyl) \ > +({ \ > + crypto_key_sched_tbl_t *ks_tbl = crypto_key_sched_dir[(cop)]; \ > + \ > + ((*ks_tbl)[(calg)][KEYL(keyl)]); \ > +}) > + > +/*----------------------------------------------------------------------------*/ > + > +/** > + * Global static parameter used to create a unique name for each > + * ARMV8 crypto device. > + */ > +static unsigned int unique_name_id; > + > +static inline int > +create_unique_device_name(char *name, size_t size) > +{ > + int ret; > + > + if (name == NULL) > + return -EINVAL; > + > + ret = snprintf(name, size, "%s_%u", RTE_STR(CRYPTODEV_NAME_ARMV8_PMD), > + unique_name_id++); > + if (ret < 0) > + return ret; > + return 0; > +} > + > +/* > + *------------------------------------------------------------------------------ > + * Session Prepare > + *------------------------------------------------------------------------------ > + */ > + > +/** Get xform chain order */ > +static enum armv8_crypto_chain_order > +armv8_crypto_get_chain_order(const struct rte_crypto_sym_xform *xform) > +{ > + > + /* > + * This driver currently covers only chained operations. > + * Ignore only cipher or only authentication operations > + * or chains longer than 2 xform structures. 
> + */ > + if (xform->next == NULL || xform->next->next != NULL) > + return ARMV8_CRYPTO_CHAIN_NOT_SUPPORTED; > + > + if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { > + if (xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) > + return ARMV8_CRYPTO_CHAIN_AUTH_CIPHER; > + } > + > + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { > + if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) > + return ARMV8_CRYPTO_CHAIN_CIPHER_AUTH; > + } > + > + return ARMV8_CRYPTO_CHAIN_NOT_SUPPORTED; > +} > + > +static inline void > +auth_hmac_pad_prepare(struct armv8_crypto_session *sess, > + const struct rte_crypto_sym_xform *xform) > +{ > + size_t i; > + > + /* Generate i_key_pad and o_key_pad */ > + memset(sess->auth.hmac.i_key_pad, 0, sizeof(sess->auth.hmac.i_key_pad)); > + rte_memcpy(sess->auth.hmac.i_key_pad, sess->auth.hmac.key, > + xform->auth.key.length); > + memset(sess->auth.hmac.o_key_pad, 0, sizeof(sess->auth.hmac.o_key_pad)); > + rte_memcpy(sess->auth.hmac.o_key_pad, sess->auth.hmac.key, > + xform->auth.key.length); > + /* > + * XOR key with IPAD/OPAD values to obtain i_key_pad > + * and o_key_pad. > + * Byte-by-byte operation may seem to be the less efficient > + * here but in fact it's the opposite. > + * The result ASM code is likely operate on NEON registers > + * (load auth key to Qx, load IPAD/OPAD to multiple > + * elements of Qy, eor 128 bits at once). > + */ > + for (i = 0; i < SHA_BLOCK_MAX; i++) { > + sess->auth.hmac.i_key_pad[i] ^= HMAC_IPAD_VALUE; > + sess->auth.hmac.o_key_pad[i] ^= HMAC_OPAD_VALUE; > + } > +} > + > +static inline int > +auth_set_prerequisites(struct armv8_crypto_session *sess, > + const struct rte_crypto_sym_xform *xform) > +{ > + uint8_t partial[64] = { 0 }; > + int error; > + > + switch (xform->auth.algo) { > + case RTE_CRYPTO_AUTH_SHA1_HMAC: > + /* > + * Generate authentication key, i_key_pad and o_key_pad. > + */ > + /* Zero memory under key */ > + memset(sess->auth.hmac.key, 0, SHA1_AUTH_KEY_LENGTH); > + > + if (xform->auth.key.length > SHA1_AUTH_KEY_LENGTH) { > + /* > + * In case the key is longer than 160 bits > + * the algorithm will use SHA1(key) instead. > + */ > + error = sha1_block(NULL, xform->auth.key.data, > + sess->auth.hmac.key, xform->auth.key.length); > + if (error != 0) > + return -1; > + } else { > + /* > + * Now copy the given authentication key to the session > + * key assuming that the session key is zeroed there is > + * no need for additional zero padding if the key is > + * shorter than SHA1_AUTH_KEY_LENGTH. > + */ > + rte_memcpy(sess->auth.hmac.key, xform->auth.key.data, > + xform->auth.key.length); > + } > + > + /* Prepare HMAC padding: key|pattern */ > + auth_hmac_pad_prepare(sess, xform); > + /* > + * Calculate partial hash values for i_key_pad and o_key_pad. > + * Will be used as initialization state for final HMAC. > + */ > + error = sha1_block_partial(NULL, sess->auth.hmac.i_key_pad, > + partial, SHA1_BLOCK_SIZE); > + if (error != 0) > + return -1; > + memcpy(sess->auth.hmac.i_key_pad, partial, SHA1_BLOCK_SIZE); > + > + error = sha1_block_partial(NULL, sess->auth.hmac.o_key_pad, > + partial, SHA1_BLOCK_SIZE); > + if (error != 0) > + return -1; > + memcpy(sess->auth.hmac.o_key_pad, partial, SHA1_BLOCK_SIZE); > + > + break; > + case RTE_CRYPTO_AUTH_SHA256_HMAC: > + /* > + * Generate authentication key, i_key_pad and o_key_pad. 
> + */ > + /* Zero memory under key */ > + memset(sess->auth.hmac.key, 0, SHA256_AUTH_KEY_LENGTH); > + > + if (xform->auth.key.length > SHA256_AUTH_KEY_LENGTH) { > + /* > + * In case the key is longer than 256 bits > + * the algorithm will use SHA256(key) instead. > + */ > + error = sha256_block(NULL, xform->auth.key.data, > + sess->auth.hmac.key, xform->auth.key.length); > + if (error != 0) > + return -1; > + } else { > + /* > + * Now copy the given authentication key to the session > + * key assuming that the session key is zeroed there is > + * no need for additional zero padding if the key is > + * shorter than SHA256_AUTH_KEY_LENGTH. > + */ > + rte_memcpy(sess->auth.hmac.key, xform->auth.key.data, > + xform->auth.key.length); > + } > + > + /* Prepare HMAC padding: key|pattern */ > + auth_hmac_pad_prepare(sess, xform); > + /* > + * Calculate partial hash values for i_key_pad and o_key_pad. > + * Will be used as initialization state for final HMAC. > + */ > + error = sha256_block_partial(NULL, sess->auth.hmac.i_key_pad, > + partial, SHA256_BLOCK_SIZE); > + if (error != 0) > + return -1; > + memcpy(sess->auth.hmac.i_key_pad, partial, SHA256_BLOCK_SIZE); > + > + error = sha256_block_partial(NULL, sess->auth.hmac.o_key_pad, > + partial, SHA256_BLOCK_SIZE); > + if (error != 0) > + return -1; > + memcpy(sess->auth.hmac.o_key_pad, partial, SHA256_BLOCK_SIZE); > + > + break; > + default: > + break; > + } > + > + return 0; > +} > + > +static inline int > +cipher_set_prerequisites(struct armv8_crypto_session *sess, > + const struct rte_crypto_sym_xform *xform) > +{ > + crypto_key_sched_t cipher_key_sched; > + > + cipher_key_sched = sess->cipher.key_sched; > + if (likely(cipher_key_sched != NULL)) { > + /* Set up cipher session key */ > + cipher_key_sched(sess->cipher.key.data, xform->cipher.key.data); > + } > + > + return 0; > +} > + > +static int > +armv8_crypto_set_session_chained_parameters(struct armv8_crypto_session *sess, > + const struct rte_crypto_sym_xform *cipher_xform, > + const struct rte_crypto_sym_xform *auth_xform) > +{ > + enum armv8_crypto_chain_order order; > + enum armv8_crypto_cipher_operation cop; > + enum rte_crypto_cipher_algorithm calg; > + enum rte_crypto_auth_algorithm aalg; > + > + /* Validate and prepare scratch order of combined operations */ > + switch (sess->chain_order) { > + case ARMV8_CRYPTO_CHAIN_CIPHER_AUTH: > + case ARMV8_CRYPTO_CHAIN_AUTH_CIPHER: > + order = sess->chain_order; > + break; > + default: > + return -EINVAL; > + } > + /* Select cipher direction */ > + sess->cipher.direction = cipher_xform->cipher.op; > + /* Select cipher key */ > + sess->cipher.key.length = cipher_xform->cipher.key.length; > + /* Set cipher direction */ > + cop = sess->cipher.direction; > + /* Set cipher algorithm */ > + calg = cipher_xform->cipher.algo; > + > + /* Select cipher algo */ > + switch (calg) { > + /* Cover supported cipher algorithms */ > + case RTE_CRYPTO_CIPHER_AES_CBC: > + sess->cipher.algo = calg; > + /* IV len is always 16 bytes (block size) for AES CBC */ > + sess->cipher.iv_len = 16; > + break; > + default: > + return -EINVAL; > + } > + /* Select auth generate/verify */ > + sess->auth.operation = auth_xform->auth.op; > + > + /* Select auth algo */ > + switch (auth_xform->auth.algo) { > + /* Cover supported hash algorithms */ > + case RTE_CRYPTO_AUTH_SHA256: > + aalg = auth_xform->auth.algo; > + sess->auth.mode = ARMV8_CRYPTO_AUTH_AS_AUTH; > + break; > + case RTE_CRYPTO_AUTH_SHA1_HMAC: > + case RTE_CRYPTO_AUTH_SHA256_HMAC: /* Fall through */ > + aalg = 
> +                        auth_xform->auth.algo;
> +                sess->auth.mode = ARMV8_CRYPTO_AUTH_AS_HMAC;
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        /* Verify supported key lengths and extract proper algorithm */
> +        switch (cipher_xform->cipher.key.length << 3) {
> +        case 128:
> +                sess->crypto_func =
> +                                CRYPTO_GET_ALGO(order, cop, calg, aalg, 128);
> +                sess->cipher.key_sched =
> +                                CRYPTO_GET_KEY_SCHED(cop, calg, 128);
> +                break;
> +        case 192:
> +                sess->crypto_func =
> +                                CRYPTO_GET_ALGO(order, cop, calg, aalg, 192);
> +                sess->cipher.key_sched =
> +                                CRYPTO_GET_KEY_SCHED(cop, calg, 192);
> +                break;
> +        case 256:
> +                sess->crypto_func =
> +                                CRYPTO_GET_ALGO(order, cop, calg, aalg, 256);
> +                sess->cipher.key_sched =
> +                                CRYPTO_GET_KEY_SCHED(cop, calg, 256);
> +                break;
> +        default:
> +                sess->crypto_func = NULL;
> +                sess->cipher.key_sched = NULL;
> +                return -EINVAL;
> +        }
> +
> +        if (unlikely(sess->crypto_func == NULL)) {
> +                /*
> +                 * If we got here that means that there must be a bug

Since only AES-128-CBC is supported in your patch, doesn't that mean
crypto_func can still be NULL after the switch above whenever
cipher.key.length is greater than 128 bits (192/256-bit keys pass the
switch but have no entry in the function tables)?

> +                 * in the algorithms selection above. Nevertheless keep
> +                 * it here to catch bug immediately and avoid NULL pointer
> +                 * dereference in OPs processing.
> +                 */
> +                ARMV8_CRYPTO_LOG_ERR(
> +                        "No appropriate crypto function for given parameters");
> +                return -EINVAL;
> +        }
> +
> +        /* Set up cipher session prerequisites */
> +        if (cipher_set_prerequisites(sess, cipher_xform) != 0)
> +                return -EINVAL;
> +
> +        /* Set up authentication session prerequisites */
> +        if (auth_set_prerequisites(sess, auth_xform) != 0)
> +                return -EINVAL;
> +
> +        return 0;
> +}
> +

....

> diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> new file mode 100644
> index 0000000..2bf6475
> --- /dev/null
> +++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> @@ -0,0 +1,369 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2017.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED.
> + *   IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include
> +
> +#include
> +#include
> +#include
> +
> +#include "armv8_crypto_defs.h"
> +
> +#include "rte_armv8_pmd_private.h"
> +
> +static const struct rte_cryptodev_capabilities
> +        armv8_crypto_pmd_capabilities[] = {
> +        {       /* SHA1 HMAC */
> +                .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +                {.sym = {
> +                        .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +                        {.auth = {
> +                                .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> +                                .block_size = 64,
> +                                .key_size = {
> +                                        .min = 16,
> +                                        .max = 128,
> +                                        .increment = 0
> +                                },
> +                                .digest_size = {
> +                                        .min = 20,
> +                                        .max = 20,
> +                                        .increment = 0
> +                                },
> +                                .aad_size = { 0 }
> +                        }, }
> +                }, }
> +        },
> +        {       /* SHA256 HMAC */
> +                .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +                {.sym = {
> +                        .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +                        {.auth = {
> +                                .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
> +                                .block_size = 64,
> +                                .key_size = {
> +                                        .min = 16,
> +                                        .max = 128,
> +                                        .increment = 0
> +                                },
> +                                .digest_size = {
> +                                        .min = 32,
> +                                        .max = 32,
> +                                        .increment = 0
> +                                },
> +                                .aad_size = { 0 }
> +                        }, }
> +                }, }
> +        },
> +        {       /* AES CBC */
> +                .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +                {.sym = {
> +                        .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
> +                        {.cipher = {
> +                                .algo = RTE_CRYPTO_CIPHER_AES_CBC,
> +                                .block_size = 16,
> +                                .key_size = {
> +                                        .min = 16,
> +                                        .max = 16,
> +                                        .increment = 0
> +                                },
> +                                .iv_size = {
> +                                        .min = 16,
> +                                        .max = 16,
> +                                        .increment = 0
> +                                }
> +                        }, }
> +                }, }
> +        },
> +

It's strange that you define AES and HMAC capabilities here but don't
implement them standalone, even though their combinations are
implemented. Will you add them later?

> +        RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
> +};
> +
> +
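As a usage-level illustration of the chained-operations-only design discussed
above, the sketch below builds the kind of two-element xform chain
(AES-128-CBC encrypt followed by SHA1-HMAC generate) that an application
would pass to rte_cryptodev_sym_session_create() for this PMD. The helper
itself is hypothetical and the field values are examples; treat it as a
sketch against the current cryptodev API, not as part of the patch:

#include <stdint.h>
#include <string.h>
#include <rte_crypto_sym.h>

/*
 * Illustration only: a two-element cipher->auth chain, which is the only
 * kind of chain this PMD accepts (cipher-only or auth-only xforms are
 * rejected as ARMV8_CRYPTO_CHAIN_NOT_SUPPORTED).
 */
static void
build_aes128cbc_sha1hmac_chain(struct rte_crypto_sym_xform *cipher_xf,
                               struct rte_crypto_sym_xform *auth_xf,
                               uint8_t *aes_key, uint8_t *hmac_key)
{
        memset(cipher_xf, 0, sizeof(*cipher_xf));
        memset(auth_xf, 0, sizeof(*auth_xf));

        cipher_xf->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        cipher_xf->next = auth_xf;              /* chain of exactly two xforms */
        cipher_xf->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
        cipher_xf->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
        cipher_xf->cipher.key.data = aes_key;
        cipher_xf->cipher.key.length = 16;      /* only AES-128 has table entries */

        auth_xf->type = RTE_CRYPTO_SYM_XFORM_AUTH;
        auth_xf->next = NULL;
        auth_xf->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
        auth_xf->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
        auth_xf->auth.key.data = hmac_key;
        auth_xf->auth.key.length = 20;          /* 160-bit key; longer keys get hashed first */
        auth_xf->auth.digest_length = 20;
}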