* [dpdk-dev] [RFC PATCH 1/5] crypto/zuc: use IPSec MB library v0.53
From: Pablo de Lara @ 2020-03-05 15:34 UTC
To: dev; +Cc: Pablo de Lara
Link against the Intel IPSec Multi-buffer library, which
added support for ZUC-EEA3 and ZUC-EIA3 from version v0.53,
replacing the previous dependency on the libSSO ZUC library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
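Reviewer note (not part of the patch): a minimal sketch of the v0.53
multi-buffer manager lifecycle the PMD moves to. alloc_mb_mgr(),
init_mb_mgr_sse() and IMB_ZUC_EEA3_N_BUFFER() are the library calls used
in the diff below; the demo wrapper and zeroed test buffers are
hypothetical.

#include <stdint.h>
#include <intel-ipsec-mb.h>

/* Hypothetical demo: encrypt one 16-byte buffer with ZUC-EEA3 through
 * the v0.53 N-buffer API that replaces sso_zuc_eea3_n_buffer(). */
static int zuc_eea3_demo(void)
{
	uint8_t key[16] = {0}, iv[16] = {0}, in[16] = {0}, out[16];
	const void *keys[1] = { key };
	const void *ivs[1] = { iv };
	const void *src[1] = { in };
	void *dst[1] = { out };
	uint32_t lens[1] = { sizeof(in) };

	MB_MGR *mgr = alloc_mb_mgr(0);
	if (mgr == NULL)
		return -1;
	init_mb_mgr_sse(mgr);	/* PMD picks SSE/AVX/AVX2/AVX512 via CPU flags */

	IMB_ZUC_EEA3_N_BUFFER(mgr, keys, ivs, src, dst, lens, 1);

	free_mb_mgr(mgr);
	return 0;
}

The PMD itself allocates one MB_MGR per device and shares it with each
queue pair, as the private-struct changes below show.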
devtools/test-build.sh | 6 +--
doc/guides/cryptodevs/zuc.rst | 52 ++++++++++++---------
drivers/crypto/zuc/Makefile | 28 +++++++-----
drivers/crypto/zuc/meson.build | 13 +++++-
drivers/crypto/zuc/rte_zuc_pmd.c | 58 +++++++++++++++++-------
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +
drivers/crypto/zuc/rte_zuc_pmd_private.h | 6 ++-
mk/rte.app.mk | 2 +-
8 files changed, 110 insertions(+), 57 deletions(-)
diff --git a/devtools/test-build.sh b/devtools/test-build.sh
index 42f4ad003..911f77e01 100755
--- a/devtools/test-build.sh
+++ b/devtools/test-build.sh
@@ -25,7 +25,6 @@ default_path=$PATH
# - LIBMUSDK_PATH
# - LIBSSO_SNOW3G_PATH
# - LIBSSO_KASUMI_PATH
-# - LIBSSO_ZUC_PATH
. $(dirname $(readlink -e $0))/load-devel-config
print_usage () {
@@ -111,7 +110,6 @@ reset_env ()
unset LIBMUSDK_PATH
unset LIBSSO_SNOW3G_PATH
unset LIBSSO_KASUMI_PATH
- unset LIBSSO_ZUC_PATH
unset PQOS_INSTALL_PATH
}
@@ -165,12 +163,12 @@ config () # <directory> <target> <options>
sed -ri 's,(PMD_AESNI_MB=)n,\1y,' $1/.config
test "$DPDK_DEP_IPSEC_MB" != y || \
sed -ri 's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
+ test "$DPDK_DEP_IPSEC_MB" != y || \
+ sed -ri 's,(PMD_ZUC=)n,\1y,' $1/.config
test -z "$LIBSSO_SNOW3G_PATH" || \
sed -ri 's,(PMD_SNOW3G=)n,\1y,' $1/.config
test -z "$LIBSSO_KASUMI_PATH" || \
sed -ri 's,(PMD_KASUMI=)n,\1y,' $1/.config
- test -z "$LIBSSO_ZUC_PATH" || \
- sed -ri 's,(PMD_ZUC=)n,\1y,' $1/.config
test "$DPDK_DEP_SSL" != y || \
sed -ri 's,(PMD_CCP=)n,\1y,' $1/.config
test "$DPDK_DEP_SSL" != y || \
diff --git a/doc/guides/cryptodevs/zuc.rst b/doc/guides/cryptodevs/zuc.rst
index e38989968..d4001b1f4 100644
--- a/doc/guides/cryptodevs/zuc.rst
+++ b/doc/guides/cryptodevs/zuc.rst
@@ -1,12 +1,12 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2016 Intel Corporation.
+ Copyright(c) 2016-2019 Intel Corporation.
ZUC Crypto Poll Mode Driver
===========================
-The ZUC PMD (**librte_pmd_zuc**) provides poll mode crypto driver
-support for utilizing Intel Libsso library, which implements F8 and F9 functions
-for ZUC EEA3 cipher and EIA3 hash algorithms.
+The ZUC PMD (**librte_pmd_zuc**) provides poll mode crypto driver support for
+utilizing `Intel IPSec Multi-buffer library <https://github.com/01org/intel-ipsec-mb>`_
+which implements F8 and F9 functions for ZUC EEA3 cipher and EIA3 hash algorithms.
Features
--------
@@ -27,36 +27,46 @@ Limitations
* Chained mbufs are not supported.
* ZUC (EIA3) supported only if hash offset field is byte-aligned.
* ZUC (EEA3) supported only if cipher length, cipher offset fields are byte-aligned.
-* ZUC PMD cannot be built as a shared library, due to limitations in
- in the underlying library.
Installation
------------
-To build DPDK with the ZUC_PMD the user is required to download
-the export controlled ``libsso_zuc`` library, by registering in
-`Intel Resource & Design Center <https://www.intel.com/content/www/us/en/design/resource-design-center.html>`_.
-Once approval has been granted, the user needs to search for
-*ZUC 128-EAA3 and 128-EIA3 3GPP cryptographic algorithms Software Library* to download the
-library or directly through this `link <https://cdrdv2.intel.com/v1/dl/getContent/575868>`_.
+To build DPDK with the ZUC_PMD the user is required to download the multi-buffer
+library from `here <https://github.com/01org/intel-ipsec-mb>`_
+and compile it on their system before building DPDK.
+The latest version of the library supported by this PMD is v0.53, which
+can be downloaded from `<https://github.com/01org/intel-ipsec-mb/archive/v0.53.zip>`_.
+
After downloading the library, the user needs to unpack and compile it
-on their system before building DPDK::
+on their system before building DPDK:
+
+.. code-block:: console
+
+ make
+ make install
+
+As a reference, the following table shows a mapping between the past DPDK versions
+and the external crypto libraries supported by them:
+
+.. _table_zuc_versions:
+
+.. table:: DPDK and external crypto library version compatibility
+
+ ============= ================================
+ DPDK version Crypto library version
+ ============= ================================
+ 16.11 - 19.11 LibSSO ZUC
+ 20.02+ Multi-buffer library 0.53
+ ============= ================================
- make
Initialization
--------------
In order to enable this virtual crypto PMD, user must:
-* Export the environmental variable LIBSSO_ZUC_PATH with the path where
- the library was extracted (zuc folder).
-
-* Export the environmental variable LD_LIBRARY_PATH with the path
- where the built libsso library is (LIBSSO_ZUC_PATH/build).
-
-* Build the LIBSSO_ZUC library (explained in Installation section).
+* Build the multi-buffer library (explained in the Installation section).
* Build DPDK as follows:
diff --git a/drivers/crypto/zuc/Makefile b/drivers/crypto/zuc/Makefile
index 68d84eebc..cc0d7943b 100644
--- a/drivers/crypto/zuc/Makefile
+++ b/drivers/crypto/zuc/Makefile
@@ -1,14 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2016 Intel Corporation
+# Copyright(c) 2016-2019 Intel Corporation
include $(RTE_SDK)/mk/rte.vars.mk
-ifneq ($(MAKECMDGOALS),clean)
-ifeq ($(LIBSSO_ZUC_PATH),)
-$(error "Please define LIBSSO_ZUC_PATH environment variable")
-endif
-endif
-
# library name
LIB = librte_pmd_zuc.a
@@ -23,14 +17,26 @@ LIBABIVER := 1
EXPORT_MAP := rte_pmd_zuc_version.map
# external library dependencies
-CFLAGS += -I$(LIBSSO_ZUC_PATH)
-CFLAGS += -I$(LIBSSO_ZUC_PATH)/include
-CFLAGS += -I$(LIBSSO_ZUC_PATH)/build
-LDLIBS += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+LDLIBS += -lIPSec_MB
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_cryptodev
LDLIBS += -lrte_bus_vdev
+IMB_HDR = $(shell echo '\#include <intel-ipsec-mb.h>' | \
+ $(CC) -E $(EXTRA_CFLAGS) - | grep 'intel-ipsec-mb.h' | \
+ head -n1 | cut -d'"' -f2)
+
+# Detect library version
+IMB_VERSION = $(shell grep -e "IMB_VERSION_STR" $(IMB_HDR) | cut -d'"' -f2)
+IMB_VERSION_NUM = $(shell grep -e "IMB_VERSION_NUM" $(IMB_HDR) | cut -d' ' -f3)
+
+ifeq ($(IMB_VERSION),)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
+
+ifeq ($(shell expr $(IMB_VERSION_NUM) \< 0x3400), 1)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
# library source files
SRCS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += rte_zuc_pmd.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += rte_zuc_pmd_ops.c
diff --git a/drivers/crypto/zuc/meson.build b/drivers/crypto/zuc/meson.build
index b8ca7107e..f0fcd8246 100644
--- a/drivers/crypto/zuc/meson.build
+++ b/drivers/crypto/zuc/meson.build
@@ -1,11 +1,20 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018-2020 Intel Corporation
-lib = cc.find_library('libsso_zuc', required: false)
+IMB_required_ver = '0.53.0'
+lib = cc.find_library('IPSec_MB', required: false)
if not lib.found()
build = false
else
ext_deps += lib
+ # version comes with quotes, so we split based on " and take the middle
+ imb_ver = cc.get_define('IMB_VERSION_STR',
+ prefix : '#include<intel-ipsec-mb.h>').split('"')[1]
+
+ if (imb_ver == '') or (imb_ver.version_compare('<' + IMB_required_ver))
+ build = false
+ endif
+
endif
sources = files('rte_zuc_pmd.c', 'rte_zuc_pmd_ops.c')
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 313f4590b..c880eea7c 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -11,7 +11,7 @@
#include <rte_cpuflags.h>
#include "rte_zuc_pmd_private.h"
-#define ZUC_MAX_BURST 4
+#define ZUC_MAX_BURST 16
#define BYTE_LEN 8
static uint8_t cryptodev_driver_id;
@@ -169,16 +169,17 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
/** Encrypt/decrypt mbufs. */
static uint8_t
-process_zuc_cipher_op(struct rte_crypto_op **ops,
+process_zuc_cipher_op(struct zuc_qp *qp, struct rte_crypto_op **ops,
struct zuc_session **sessions,
uint8_t num_ops)
{
unsigned i;
uint8_t processed_ops = 0;
- uint8_t *src[ZUC_MAX_BURST], *dst[ZUC_MAX_BURST];
- uint8_t *iv[ZUC_MAX_BURST];
+ const void *src[ZUC_MAX_BURST];
+ void *dst[ZUC_MAX_BURST];
+ const void *iv[ZUC_MAX_BURST];
uint32_t num_bytes[ZUC_MAX_BURST];
- uint8_t *cipher_keys[ZUC_MAX_BURST];
+ const void *cipher_keys[ZUC_MAX_BURST];
struct zuc_session *sess;
for (i = 0; i < num_ops; i++) {
@@ -221,7 +222,8 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
processed_ops++;
}
- sso_zuc_eea3_n_buffer(cipher_keys, iv, src, dst,
+ IMB_ZUC_EEA3_N_BUFFER(qp->mb_mgr, (const void **)cipher_keys,
+ (const void **)iv, (const void **)src, (void **)dst,
num_bytes, processed_ops);
return processed_ops;
@@ -261,7 +263,7 @@ process_zuc_hash_op(struct zuc_qp *qp, struct rte_crypto_op **ops,
if (sess->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint32_t *)qp->temp_digest;
- sso_zuc_eia3_1_buffer(sess->pKey_hash,
+ IMB_ZUC_EIA3_1_BUFFER(qp->mb_mgr, sess->pKey_hash,
iv, src,
length_in_bits, dst);
/* Verify digest. */
@@ -271,7 +273,7 @@ process_zuc_hash_op(struct zuc_qp *qp, struct rte_crypto_op **ops,
} else {
dst = (uint32_t *)ops[i]->sym->auth.digest.data;
- sso_zuc_eia3_1_buffer(sess->pKey_hash,
+ IMB_ZUC_EIA3_1_BUFFER(qp->mb_mgr, sess->pKey_hash,
iv, src,
length_in_bits, dst);
}
@@ -293,7 +295,7 @@ process_ops(struct rte_crypto_op **ops, enum zuc_operation op_type,
switch (op_type) {
case ZUC_OP_ONLY_CIPHER:
- processed_ops = process_zuc_cipher_op(ops,
+ processed_ops = process_zuc_cipher_op(qp, ops,
sessions, num_ops);
break;
case ZUC_OP_ONLY_AUTH:
@@ -301,14 +303,14 @@ process_ops(struct rte_crypto_op **ops, enum zuc_operation op_type,
num_ops);
break;
case ZUC_OP_CIPHER_AUTH:
- processed_ops = process_zuc_cipher_op(ops, sessions,
+ processed_ops = process_zuc_cipher_op(qp, ops, sessions,
num_ops);
process_zuc_hash_op(qp, ops, sessions, processed_ops);
break;
case ZUC_OP_AUTH_CIPHER:
processed_ops = process_zuc_hash_op(qp, ops, sessions,
num_ops);
- process_zuc_cipher_op(ops, sessions, processed_ops);
+ process_zuc_cipher_op(qp, ops, sessions, processed_ops);
break;
default:
/* Operation not supported. */
@@ -455,8 +457,7 @@ cryptodev_zuc_create(const char *name,
{
struct rte_cryptodev *dev;
struct zuc_private *internals;
- uint64_t cpu_flags = RTE_CRYPTODEV_FF_CPU_SSE;
-
+ MB_MGR *mb_mgr;
dev = rte_cryptodev_pmd_create(name, &vdev->device, init_params);
if (dev == NULL) {
@@ -464,6 +465,27 @@ cryptodev_zuc_create(const char *name,
goto init_error;
}
+ dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+ mb_mgr = alloc_mb_mgr(0);
+ if (mb_mgr == NULL)
+ return -ENOMEM;
+
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX512;
+ init_mb_mgr_avx512(mb_mgr);
+ } else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx2(mb_mgr);
+ } else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(mb_mgr);
+ } else {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(mb_mgr);
+ }
+
dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_zuc_pmd_ops;
@@ -471,11 +493,8 @@ cryptodev_zuc_create(const char *name,
dev->dequeue_burst = zuc_pmd_dequeue_burst;
dev->enqueue_burst = zuc_pmd_enqueue_burst;
- dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- cpu_flags;
-
internals = dev->data->dev_private;
+ internals->mb_mgr = mb_mgr;
internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
@@ -516,6 +535,7 @@ cryptodev_zuc_remove(struct rte_vdev_device *vdev)
struct rte_cryptodev *cryptodev;
const char *name;
+ struct zuc_private *internals;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -525,6 +545,10 @@ cryptodev_zuc_remove(struct rte_vdev_device *vdev)
if (cryptodev == NULL)
return -ENODEV;
+ internals = cryptodev->data->dev_private;
+
+ free_mb_mgr(internals->mb_mgr);
+
return rte_cryptodev_pmd_destroy(cryptodev);
}
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index 6da396542..14a831867 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -196,6 +196,7 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
int socket_id, struct rte_mempool *session_pool)
{
struct zuc_qp *qp = NULL;
+ struct zuc_private *internals = dev->data->dev_private;
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -218,6 +219,7 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
+ qp->mb_mgr = internals->mb_mgr;
qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_private.h b/drivers/crypto/zuc/rte_zuc_pmd_private.h
index 5e5906ddb..e049df377 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_private.h
+++ b/drivers/crypto/zuc/rte_zuc_pmd_private.h
@@ -5,7 +5,7 @@
#ifndef _RTE_ZUC_PMD_PRIVATE_H_
#define _RTE_ZUC_PMD_PRIVATE_H_
-#include <sso_zuc.h>
+#include <intel-ipsec-mb.h>
#define CRYPTODEV_NAME_ZUC_PMD crypto_zuc
/**< KASUMI PMD device name */
@@ -24,6 +24,8 @@ int zuc_logtype_driver;
struct zuc_private {
unsigned max_nb_queue_pairs;
/**< Max number of queue pairs supported by device */
+ MB_MGR *mb_mgr;
+ /**< Multi-buffer instance */
};
/** ZUC buffer queue pair */
@@ -43,6 +45,8 @@ struct zuc_qp {
* by the driver when verifying a digest provided
* by the user (using authentication verify operation)
*/
+ MB_MGR *mb_mgr;
+ /**< Multi-buffer instance */
} __rte_cache_aligned;
enum zuc_operation {
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d979d..7c04ee490 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -233,7 +233,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -L$(LIBSSO_SNOW3G_PATH)/build -l
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lrte_pmd_kasumi
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -lrte_pmd_zuc
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -lIPSec_MB
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += -lrte_pmd_armv8
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += -L$(ARMV8_CRYPTO_LIB_PATH) -larmv8_crypto
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO) += -L$(LIBMUSDK_PATH)/lib -lrte_pmd_mvsam_crypto -lmusdk
--
2.24.1
* [dpdk-dev] [RFC PATCH 2/5] crypto/snow3g: use IPSec MB library v0.53
From: Pablo de Lara @ 2020-03-05 15:34 UTC
To: dev; +Cc: Pablo de Lara
Link against the Intel IPSec Multi-buffer library, which
added support for SNOW3G-UEA2 and SNOW3G-UIA2 from version v0.53,
replacing the previous dependency on the libSSO SNOW3G library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
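Reviewer note (not part of the patch): a minimal sketch of the session
setup and F9 path after this change. IMB_SNOW3G_INIT_KEY_SCHED() and
IMB_SNOW3G_F9_1_BUFFER() are the library calls used in the diff below;
the demo wrapper and zero key are hypothetical.

#include <stdint.h>
#include <intel-ipsec-mb.h>

/* Hypothetical demo: one SNOW 3G UIA2 (F9) digest via the manager-based
 * v0.53 calls that replace sso_snow3g_init_key_sched()/f9_1_buffer(). */
static int snow3g_f9_demo(void)
{
	uint8_t key[16] = {0}, iv[16] = {0}, msg[32] = {0};
	uint8_t digest[4];		/* SNOW3G_DIGEST_LENGTH */
	snow3g_key_schedule_t ks;

	MB_MGR *mgr = alloc_mb_mgr(0);
	if (mgr == NULL)
		return -1;
	init_mb_mgr_sse(mgr);

	IMB_SNOW3G_INIT_KEY_SCHED(mgr, key, &ks);
	IMB_SNOW3G_F9_1_BUFFER(mgr, &ks, iv, msg,
			sizeof(msg) * 8 /* length in bits */, digest);

	free_mb_mgr(mgr);
	return 0;
}

In the PMD the key schedule is computed once at session setup
(snow3g_set_session_parameters() now takes the manager) and reused for
every operation on that session.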
devtools/test-build.sh | 4 +-
doc/guides/cryptodevs/snow3g.rst | 58 ++++++++------
drivers/crypto/snow3g/Makefile | 29 ++++---
drivers/crypto/snow3g/meson.build | 21 +++++
drivers/crypto/snow3g/rte_snow3g_pmd.c | 79 ++++++++++++-------
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 8 +-
.../crypto/snow3g/rte_snow3g_pmd_private.h | 12 ++-
mk/rte.app.mk | 2 +-
8 files changed, 139 insertions(+), 74 deletions(-)
create mode 100644 drivers/crypto/snow3g/meson.build
diff --git a/devtools/test-build.sh b/devtools/test-build.sh
index 911f77e01..0c62c3950 100755
--- a/devtools/test-build.sh
+++ b/devtools/test-build.sh
@@ -23,7 +23,6 @@ default_path=$PATH
# - DPDK_NOTIFY (notify-send)
# - FLEXRAN_SDK
# - LIBMUSDK_PATH
-# - LIBSSO_SNOW3G_PATH
# - LIBSSO_KASUMI_PATH
. $(dirname $(readlink -e $0))/load-devel-config
@@ -108,7 +107,6 @@ reset_env ()
unset ARMV8_CRYPTO_LIB_PATH
unset FLEXRAN_SDK
unset LIBMUSDK_PATH
- unset LIBSSO_SNOW3G_PATH
unset LIBSSO_KASUMI_PATH
unset PQOS_INSTALL_PATH
}
@@ -165,7 +163,7 @@ config () # <directory> <target> <options>
sed -ri 's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
test "$DPDK_DEP_IPSEC_MB" != y || \
sed -ri 's,(PMD_ZUC=)n,\1y,' $1/.config
- test -z "$LIBSSO_SNOW3G_PATH" || \
+ test "$DPDK_DEP_IPSEC_MB" != y || \
sed -ri 's,(PMD_SNOW3G=)n,\1y,' $1/.config
test -z "$LIBSSO_KASUMI_PATH" || \
sed -ri 's,(PMD_KASUMI=)n,\1y,' $1/.config
diff --git a/doc/guides/cryptodevs/snow3g.rst b/doc/guides/cryptodevs/snow3g.rst
index 7cba712c1..d45bcad67 100644
--- a/doc/guides/cryptodevs/snow3g.rst
+++ b/doc/guides/cryptodevs/snow3g.rst
@@ -1,12 +1,12 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2016 Intel Corporation.
+ Copyright(c) 2016-2019 Intel Corporation.
SNOW 3G Crypto Poll Mode Driver
===============================
-The SNOW 3G PMD (**librte_pmd_snow3g**) provides poll mode crypto driver
-support for utilizing Intel Libsso library, which implements F8 and F9 functions
-for SNOW 3G UEA2 cipher and UIA2 hash algorithms.
+The SNOW 3G PMD (**librte_pmd_snow3g**) provides poll mode crypto driver support for
+utilizing `Intel IPSec Multi-buffer library <https://github.com/01org/intel-ipsec-mb>`_
+which implements F8 and F9 functions for SNOW 3G UEA2 cipher and UIA2 hash algorithms.
Features
--------
@@ -32,26 +32,33 @@ Limitations
Installation
------------
-To build DPDK with the SNOW3G_PMD the user is required to download
-the export controlled ``libsso_snow3g`` library, by registering in
-`Intel Resource & Design Center <https://www.intel.com/content/www/us/en/design/resource-design-center.html>`_.
-Once approval has been granted, the user needs to search for
-*Snow3G F8 F9 3GPP cryptographic algorithms Software Library* to download the
-library or directly through this `link <https://cdrdv2.intel.com/v1/dl/getContent/575867>`_.
+To build DPDK with the SNOW3G_PMD the user is required to download the multi-buffer
+library from `here <https://github.com/01org/intel-ipsec-mb>`_
+and compile it on their system before building DPDK.
+The latest version of the library supported by this PMD is v0.53, which
+can be downloaded from `<https://github.com/01org/intel-ipsec-mb/archive/v0.53.zip>`_.
+
After downloading the library, the user needs to unpack and compile it
-on their system before building DPDK::
+on their system before building DPDK:
+
+.. code-block:: console
+
+ make
+ make install
- make snow3G
+As a reference, the following table shows a mapping between the past DPDK versions
+and the external crypto libraries supported by them:
-**Note**: When encrypting with SNOW3G UEA2, by default the library
-encrypts blocks of 4 bytes, regardless the number of bytes to
-be encrypted provided (which leads to a possible buffer overflow).
-To avoid this situation, it is necessary not to pass
-3GPP_SAFE_BUFFERS as a compilation flag.
-For this, in the Makefile of the library, make sure that this flag
-is commented out.::
+.. _table_snow3g_versions:
- #EXTRA_CFLAGS += -D_3GPP_SAFE_BUFFERS
+.. table:: DPDK and external crypto library version compatibility
+
+ ============= ================================
+ DPDK version Crypto library version
+ ============= ================================
+ 16.04 - 19.11 LibSSO SNOW3G
+ 20.02+ Multi-buffer library 0.53
+ ============= ================================
Initialization
@@ -59,12 +66,15 @@ Initialization
In order to enable this virtual crypto PMD, user must:
-* Export the environmental variable LIBSSO_SNOW3G_PATH with the path where
- the library was extracted (snow3g folder).
+* Build the multi-buffer library (explained in the Installation section).
+
+* Build DPDK as follows:
-* Build the LIBSSO_SNOW3G library (explained in Installation section).
+.. code-block:: console
-* Set CONFIG_RTE_LIBRTE_PMD_SNOW3G=y in config/common_base.
+ make config T=x86_64-native-linux-gcc
+ sed -i 's,\(CONFIG_RTE_LIBRTE_PMD_SNOW3G\)=n,\1=y,' build/.config
+ make
To use the PMD in an application, user must:
diff --git a/drivers/crypto/snow3g/Makefile b/drivers/crypto/snow3g/Makefile
index ee5027d0c..9c2f59d36 100644
--- a/drivers/crypto/snow3g/Makefile
+++ b/drivers/crypto/snow3g/Makefile
@@ -1,14 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2016 Intel Corporation
+# Copyright(c) 2016-2019 Intel Corporation
include $(RTE_SDK)/mk/rte.vars.mk
-ifneq ($(MAKECMDGOALS),clean)
-ifeq ($(LIBSSO_SNOW3G_PATH),)
-$(error "Please define LIBSSO_SNOW3G_PATH environment variable")
-endif
-endif
-
# library name
LIB = librte_pmd_snow3g.a
@@ -23,14 +17,27 @@ LIBABIVER := 1
EXPORT_MAP := rte_pmd_snow3g_version.map
# external library dependencies
-CFLAGS += -I$(LIBSSO_SNOW3G_PATH)
-CFLAGS += -I$(LIBSSO_SNOW3G_PATH)/include
-CFLAGS += -I$(LIBSSO_SNOW3G_PATH)/build
-LDLIBS += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
+LDLIBS += -lIPSec_MB
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_cryptodev
LDLIBS += -lrte_bus_vdev
+IMB_HDR = $(shell echo '\#include <intel-ipsec-mb.h>' | \
+ $(CC) -E $(EXTRA_CFLAGS) - | grep 'intel-ipsec-mb.h' | \
+ head -n1 | cut -d'"' -f2)
+
+# Detect library version
+IMB_VERSION = $(shell grep -e "IMB_VERSION_STR" $(IMB_HDR) | cut -d'"' -f2)
+IMB_VERSION_NUM = $(shell grep -e "IMB_VERSION_NUM" $(IMB_HDR) | cut -d' ' -f3)
+
+ifeq ($(IMB_VERSION),)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
+
+ifeq ($(shell expr $(IMB_VERSION_NUM) \< 0x3400), 1)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
+
# library source files
SRCS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += rte_snow3g_pmd.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += rte_snow3g_pmd_ops.c
diff --git a/drivers/crypto/snow3g/meson.build b/drivers/crypto/snow3g/meson.build
new file mode 100644
index 000000000..e04138e92
--- /dev/null
+++ b/drivers/crypto/snow3g/meson.build
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2020 Intel Corporation
+
+IMB_required_ver = '0.53.0'
+lib = cc.find_library('IPSec_MB', required: false)
+if not lib.found()
+ build = false
+else
+ ext_deps += lib
+ # version comes with quotes, so we split based on " and take the middle
+ imb_ver = cc.get_define('IMB_VERSION_STR',
+ prefix : '#include<intel-ipsec-mb.h>').split('"')[1]
+
+ if (imb_ver == '') or (imb_ver.version_compare('<' + IMB_required_ver))
+ build = false
+ endif
+
+endif
+
+sources = files('rte_snow3g_pmd.c', 'rte_snow3g_pmd_ops.c')
+deps += ['bus_vdev']
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index a17536b77..5abf7e090 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -53,7 +53,7 @@ snow3g_get_mode(const struct rte_crypto_sym_xform *xform)
/** Parse crypto xform chain and set private session parameters. */
int
-snow3g_set_session_parameters(struct snow3g_session *sess,
+snow3g_set_session_parameters(MB_MGR *mgr, struct snow3g_session *sess,
const struct rte_crypto_sym_xform *xform)
{
const struct rte_crypto_sym_xform *auth_xform = NULL;
@@ -95,8 +95,8 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
sess->cipher_iv_offset = cipher_xform->cipher.iv.offset;
/* Initialize key */
- sso_snow3g_init_key_sched(cipher_xform->cipher.key.data,
- &sess->pKeySched_cipher);
+ IMB_SNOW3G_INIT_KEY_SCHED(mgr, cipher_xform->cipher.key.data,
+ &sess->pKeySched_cipher);
}
if (auth_xform) {
@@ -118,11 +118,10 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
sess->auth_iv_offset = auth_xform->auth.iv.offset;
/* Initialize key */
- sso_snow3g_init_key_sched(auth_xform->auth.key.data,
- &sess->pKeySched_hash);
+ IMB_SNOW3G_INIT_KEY_SCHED(mgr, auth_xform->auth.key.data,
+ &sess->pKeySched_hash);
}
-
sess->op = mode;
return 0;
@@ -152,7 +151,7 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
sess = (struct snow3g_session *)_sess_private_data;
- if (unlikely(snow3g_set_session_parameters(sess,
+ if (unlikely(snow3g_set_session_parameters(qp->mgr, sess,
op->sym->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
rte_mempool_put(qp->sess_mp, _sess_private_data);
@@ -172,14 +171,15 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
/** Encrypt/decrypt mbufs with same cipher key. */
static uint8_t
-process_snow3g_cipher_op(struct rte_crypto_op **ops,
+process_snow3g_cipher_op(struct snow3g_qp *qp, struct rte_crypto_op **ops,
struct snow3g_session *session,
uint8_t num_ops)
{
unsigned i;
uint8_t processed_ops = 0;
- uint8_t *src[SNOW3G_MAX_BURST], *dst[SNOW3G_MAX_BURST];
- uint8_t *iv[SNOW3G_MAX_BURST];
+ const void *src[SNOW3G_MAX_BURST];
+ void *dst[SNOW3G_MAX_BURST];
+ const void *iv[SNOW3G_MAX_BURST];
uint32_t num_bytes[SNOW3G_MAX_BURST];
for (i = 0; i < num_ops; i++) {
@@ -197,15 +197,16 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
processed_ops++;
}
- sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, iv, src, dst,
- num_bytes, processed_ops);
+ IMB_SNOW3G_F8_N_BUFFER(qp->mgr, &session->pKeySched_cipher, iv,
+ src, dst, num_bytes, processed_ops);
return processed_ops;
}
/** Encrypt/decrypt mbuf (bit level function). */
static uint8_t
-process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
+process_snow3g_cipher_op_bit(struct snow3g_qp *qp,
+ struct rte_crypto_op *op,
struct snow3g_session *session)
{
uint8_t *src, *dst;
@@ -224,7 +225,7 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
session->cipher_iv_offset);
length_in_bits = op->sym->cipher.data.length;
- sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
+ IMB_SNOW3G_F8_1_BUFFER_BIT(qp->mgr, &session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
@@ -260,9 +261,9 @@ process_snow3g_hash_op(struct snow3g_qp *qp, struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = qp->temp_digest;
- sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
- iv, src,
- length_in_bits, dst);
+ IMB_SNOW3G_F9_1_BUFFER(qp->mgr,
+ &session->pKeySched_hash,
+ iv, src, length_in_bits, dst);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
SNOW3G_DIGEST_LENGTH) != 0)
@@ -270,9 +271,9 @@ process_snow3g_hash_op(struct snow3g_qp *qp, struct rte_crypto_op **ops,
} else {
dst = ops[i]->sym->auth.digest.data;
- sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
- iv, src,
- length_in_bits, dst);
+ IMB_SNOW3G_F9_1_BUFFER(qp->mgr,
+ &session->pKeySched_hash,
+ iv, src, length_in_bits, dst);
}
processed_ops++;
}
@@ -306,7 +307,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
switch (session->op) {
case SNOW3G_OP_ONLY_CIPHER:
- processed_ops = process_snow3g_cipher_op(ops,
+ processed_ops = process_snow3g_cipher_op(qp, ops,
session, num_ops);
break;
case SNOW3G_OP_ONLY_AUTH:
@@ -314,14 +315,14 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
num_ops);
break;
case SNOW3G_OP_CIPHER_AUTH:
- processed_ops = process_snow3g_cipher_op(ops, session,
+ processed_ops = process_snow3g_cipher_op(qp, ops, session,
num_ops);
process_snow3g_hash_op(qp, ops, session, processed_ops);
break;
case SNOW3G_OP_AUTH_CIPHER:
processed_ops = process_snow3g_hash_op(qp, ops, session,
num_ops);
- process_snow3g_cipher_op(ops, session, processed_ops);
+ process_snow3g_cipher_op(qp, ops, session, processed_ops);
break;
default:
/* Operation not supported. */
@@ -363,21 +364,21 @@ process_op_bit(struct rte_crypto_op *op, struct snow3g_session *session,
switch (session->op) {
case SNOW3G_OP_ONLY_CIPHER:
- processed_op = process_snow3g_cipher_op_bit(op,
+ processed_op = process_snow3g_cipher_op_bit(qp, op,
session);
break;
case SNOW3G_OP_ONLY_AUTH:
processed_op = process_snow3g_hash_op(qp, &op, session, 1);
break;
case SNOW3G_OP_CIPHER_AUTH:
- processed_op = process_snow3g_cipher_op_bit(op, session);
+ processed_op = process_snow3g_cipher_op_bit(qp, op, session);
if (processed_op == 1)
process_snow3g_hash_op(qp, &op, session, 1);
break;
case SNOW3G_OP_AUTH_CIPHER:
processed_op = process_snow3g_hash_op(qp, &op, session, 1);
if (processed_op == 1)
- process_snow3g_cipher_op_bit(op, session);
+ process_snow3g_cipher_op_bit(qp, op, session);
break;
default:
/* Operation not supported. */
@@ -533,7 +534,7 @@ cryptodev_snow3g_create(const char *name,
{
struct rte_cryptodev *dev;
struct snow3g_private *internals;
- uint64_t cpu_flags = RTE_CRYPTODEV_FF_CPU_SSE;
+ MB_MGR *mgr;
dev = rte_cryptodev_pmd_create(name, &vdev->device, init_params);
if (dev == NULL) {
@@ -549,10 +550,25 @@ cryptodev_snow3g_create(const char *name,
dev->enqueue_burst = snow3g_pmd_enqueue_burst;
dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- cpu_flags;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+ mgr = alloc_mb_mgr(0);
+ if (mgr == NULL)
+ return -ENOMEM;
+
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx2(mgr);
+ } else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(mgr);
+ } else {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(mgr);
+ }
internals = dev->data->dev_private;
+ internals->mgr = mgr;
internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
@@ -592,6 +608,7 @@ cryptodev_snow3g_remove(struct rte_vdev_device *vdev)
{
struct rte_cryptodev *cryptodev;
const char *name;
+ struct snow3g_private *internals;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -601,6 +618,10 @@ cryptodev_snow3g_remove(struct rte_vdev_device *vdev)
if (cryptodev == NULL)
return -ENODEV;
+ internals = cryptodev->data->dev_private;
+
+ free_mb_mgr(internals->mgr);
+
return rte_cryptodev_pmd_destroy(cryptodev);
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index cfbc9522a..414029b5a 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -196,6 +196,7 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
int socket_id, struct rte_mempool *session_pool)
{
struct snow3g_qp *qp = NULL;
+ struct snow3g_private *internals = dev->data->dev_private;
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -218,6 +219,7 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
+ qp->mgr = internals->mgr;
qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
@@ -247,13 +249,14 @@ snow3g_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
/** Configure a SNOW 3G session from a crypto xform chain */
static int
-snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
+snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess,
struct rte_mempool *mempool)
{
void *sess_private_data;
int ret;
+ struct snow3g_private *internals = dev->data->dev_private;
if (unlikely(sess == NULL)) {
SNOW3G_LOG(ERR, "invalid session struct");
@@ -266,7 +269,8 @@ snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -ENOMEM;
}
- ret = snow3g_set_session_parameters(sess_private_data, xform);
+ ret = snow3g_set_session_parameters(internals->mgr,
+ sess_private_data, xform);
if (ret != 0) {
SNOW3G_LOG(ERR, "failed configure session parameters");
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
index b7807b621..670140144 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
@@ -5,7 +5,7 @@
#ifndef _RTE_SNOW3G_PMD_PRIVATE_H_
#define _RTE_SNOW3G_PMD_PRIVATE_H_
-#include <sso_snow3g.h>
+#include <intel-ipsec-mb.h>
#define CRYPTODEV_NAME_SNOW3G_PMD crypto_snow3g
/**< SNOW 3G PMD device name */
@@ -24,6 +24,8 @@ int snow3g_logtype_driver;
struct snow3g_private {
unsigned max_nb_queue_pairs;
/**< Max number of queue pairs supported by device */
+ MB_MGR *mgr;
+ /**< Multi-buffer instance */
};
/** SNOW 3G buffer queue pair */
@@ -43,6 +45,8 @@ struct snow3g_qp {
* by the driver when verifying a digest provided
* by the user (using authentication verify operation)
*/
+ MB_MGR *mgr;
+ /**< Multi-buffer instance */
} __rte_cache_aligned;
enum snow3g_operation {
@@ -57,15 +61,15 @@ enum snow3g_operation {
struct snow3g_session {
enum snow3g_operation op;
enum rte_crypto_auth_operation auth_op;
- sso_snow3g_key_schedule_t pKeySched_cipher;
- sso_snow3g_key_schedule_t pKeySched_hash;
+ snow3g_key_schedule_t pKeySched_cipher;
+ snow3g_key_schedule_t pKeySched_hash;
uint16_t cipher_iv_offset;
uint16_t auth_iv_offset;
} __rte_cache_aligned;
extern int
-snow3g_set_session_parameters(struct snow3g_session *sess,
+snow3g_set_session_parameters(MB_MGR *mgr, struct snow3g_session *sess,
const struct rte_crypto_sym_xform *xform);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7c04ee490..b33603ef5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -229,7 +229,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT_SYM) += -lrte_pmd_qat -lcrypto
endif # CONFIG_RTE_LIBRTE_PMD_QAT
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lrte_pmd_snow3g
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lIPSec_MB
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lrte_pmd_kasumi
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -lrte_pmd_zuc
--
2.24.1
* [dpdk-dev] [RFC PATCH 3/5] crypto/kasumi: use IPSec MB library v0.53
From: Pablo de Lara @ 2020-03-05 15:34 UTC
To: dev; +Cc: Pablo de Lara
Link against the Intel IPSec Multi-buffer library, which
added support for KASUMI-F8 and KASUMI-F9 from version v0.53,
replacing the previous dependency on the libSSO KASUMI library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
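Reviewer note (not part of the patch): a minimal sketch of the bit-level
F8 path, including the new in-place behaviour when no destination mbuf is
given. IMB_KASUMI_INIT_F8_KEY_SCHED() and IMB_KASUMI_F8_1_BUFFER_BIT()
are the library calls used in the diff below; the demo wrapper is
hypothetical.

#include <stdint.h>
#include <intel-ipsec-mb.h>

/* Hypothetical demo: bit-level KASUMI F8 in place (dst == src), which
 * this patch now allows instead of rejecting m_dst == NULL. */
static int kasumi_f8_demo(void)
{
	uint8_t key[16] = {0};
	uint8_t buf[8] = {0};	/* ciphered in place */
	uint64_t iv = 0;
	kasumi_key_sched_t ks;	/* PMD keeps key schedules 16-byte aligned */

	MB_MGR *mgr = alloc_mb_mgr(0);
	if (mgr == NULL)
		return -1;
	init_mb_mgr_sse(mgr);

	IMB_KASUMI_INIT_F8_KEY_SCHED(mgr, key, &ks);
	IMB_KASUMI_F8_1_BUFFER_BIT(mgr, &ks, iv, buf, buf,
			sizeof(buf) * 8 /* bits */, 0 /* bit offset */);

	free_mb_mgr(mgr);
	return 0;
}

The F9 hash side is initialized the same way through
IMB_KASUMI_INIT_F9_KEY_SCHED() with the device's manager, as the session
setup changes below show.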
devtools/test-build.sh | 4 +-
doc/guides/cryptodevs/kasumi.rst | 62 ++++++++-------
drivers/crypto/kasumi/Makefile | 26 +++---
drivers/crypto/kasumi/meson.build | 11 ++-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 79 +++++++++++--------
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 8 +-
.../crypto/kasumi/rte_kasumi_pmd_private.h | 12 ++-
mk/rte.app.mk | 2 +-
8 files changed, 120 insertions(+), 84 deletions(-)
diff --git a/devtools/test-build.sh b/devtools/test-build.sh
index 0c62c3950..73b4bdd1a 100755
--- a/devtools/test-build.sh
+++ b/devtools/test-build.sh
@@ -23,7 +23,6 @@ default_path=$PATH
# - DPDK_NOTIFY (notify-send)
# - FLEXRAN_SDK
# - LIBMUSDK_PATH
-# - LIBSSO_KASUMI_PATH
. $(dirname $(readlink -e $0))/load-devel-config
print_usage () {
@@ -107,7 +106,6 @@ reset_env ()
unset ARMV8_CRYPTO_LIB_PATH
unset FLEXRAN_SDK
unset LIBMUSDK_PATH
- unset LIBSSO_KASUMI_PATH
unset PQOS_INSTALL_PATH
}
@@ -165,7 +163,7 @@ config () # <directory> <target> <options>
sed -ri 's,(PMD_ZUC=)n,\1y,' $1/.config
test "$DPDK_DEP_IPSEC_MB" != y || \
sed -ri 's,(PMD_SNOW3G=)n,\1y,' $1/.config
- test -z "$LIBSSO_KASUMI_PATH" || \
+ test "$DPDK_DEP_IPSEC_MB" != y || \
sed -ri 's,(PMD_KASUMI=)n,\1y,' $1/.config
test "$DPDK_DEP_SSL" != y || \
sed -ri 's,(PMD_CCP=)n,\1y,' $1/.config
diff --git a/doc/guides/cryptodevs/kasumi.rst b/doc/guides/cryptodevs/kasumi.rst
index 2265eee4e..6c86fe264 100644
--- a/doc/guides/cryptodevs/kasumi.rst
+++ b/doc/guides/cryptodevs/kasumi.rst
@@ -1,12 +1,12 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2016 Intel Corporation.
+ Copyright(c) 2016-2019 Intel Corporation.
KASUMI Crypto Poll Mode Driver
===============================
-The KASUMI PMD (**librte_pmd_kasumi**) provides poll mode crypto driver
-support for utilizing Intel Libsso library, which implements F8 and F9 functions
-for KASUMI UEA1 cipher and UIA1 hash algorithms.
+The KASUMI PMD (**librte_pmd_kasumi**) provides poll mode crypto driver support for
+utilizing `Intel IPSec Multi-buffer library <https://github.com/01org/intel-ipsec-mb>`_
+which implements F8 and F9 functions for KASUMI UEA1 cipher and UIA1 hash algorithms.
Features
--------
@@ -33,33 +33,33 @@ Limitations
Installation
------------
-To build DPDK with the KASUMI_PMD the user is required to download
-the export controlled ``libsso_kasumi`` library, by registering in
-`Intel Resource & Design Center <https://www.intel.com/content/www/us/en/design/resource-design-center.html>`_.
-Once approval has been granted, the user needs to search for
-*Kasumi F8 F9 3GPP cryptographic algorithms Software Library* to download the
-library or directly through this `link <https://cdrdv2.intel.com/v1/dl/getContent/575866>`_.
+To build DPDK with the KASUMI_PMD the user is required to download the multi-buffer
+library from `here <https://github.com/01org/intel-ipsec-mb>`_
+and compile it on their system before building DPDK.
+The latest version of the library supported by this PMD is v0.53, which
+can be downloaded from `<https://github.com/01org/intel-ipsec-mb/archive/v0.53.zip>`_.
+
After downloading the library, the user needs to unpack and compile it
-on their system before building DPDK::
+on their system before building DPDK:
+
+.. code-block:: console
- make
+ make
+ make install
-**Note**: When encrypting with KASUMI F8, by default the library
-encrypts full blocks of 8 bytes, regardless the number of bytes to
-be encrypted provided (which leads to a possible buffer overflow).
-To avoid this situation, it is necessary not to pass
-3GPP_SAFE_BUFFERS as a compilation flag.
-Also, this is required when using chained operations
-(cipher-then-auth/auth-then-cipher).
-For this, in the Makefile of the library, make sure that this flag
-is commented out::
+As a reference, the following table shows a mapping between the past DPDK versions
+and the external crypto libraries supported by them:
- #EXTRA_CFLAGS += -D_3GPP_SAFE_BUFFERS
+.. _table_kasumi_versions:
-**Note**: To build the PMD as a shared library, the libsso_kasumi
-library must be built as follows::
+.. table:: DPDK and external crypto library version compatibility
- make KASUMI_CFLAGS=-DKASUMI_C
+ ============= ================================
+ DPDK version Crypto library version
+ ============= ================================
+ 16.11 - 19.11 LibSSO KASUMI
+ 20.02+ Multi-buffer library 0.53
+ ============= ================================
Initialization
@@ -67,12 +67,16 @@ Initialization
In order to enable this virtual crypto PMD, user must:
-* Export the environmental variable LIBSSO_KASUMI_PATH with the path where
- the library was extracted (kasumi folder).
+* Build the multi-buffer library (explained in the Installation section).
+
+* Build DPDK as follows:
+
+.. code-block:: console
-* Build the LIBSSO library (explained in Installation section).
+ make config T=x86_64-native-linux-gcc
+ sed -i 's,\(CONFIG_RTE_LIBRTE_PMD_KASUMI\)=n,\1=y,' build/.config
+ make
-* Set CONFIG_RTE_LIBRTE_PMD_KASUMI=y in config/common_base.
To use the PMD in an application, user must:
diff --git a/drivers/crypto/kasumi/Makefile b/drivers/crypto/kasumi/Makefile
index cafe94986..51f31d0aa 100644
--- a/drivers/crypto/kasumi/Makefile
+++ b/drivers/crypto/kasumi/Makefile
@@ -3,12 +3,6 @@
include $(RTE_SDK)/mk/rte.vars.mk
-ifneq ($(MAKECMDGOALS),clean)
-ifeq ($(LIBSSO_KASUMI_PATH),)
-$(error "Please define LIBSSO_KASUMI_PATH environment variable")
-endif
-endif
-
# library name
LIB = librte_pmd_kasumi.a
@@ -23,14 +17,26 @@ LIBABIVER := 1
EXPORT_MAP := rte_pmd_kasumi_version.map
# external library dependencies
-CFLAGS += -I$(LIBSSO_KASUMI_PATH)
-CFLAGS += -I$(LIBSSO_KASUMI_PATH)/include
-CFLAGS += -I$(LIBSSO_KASUMI_PATH)/build
-LDLIBS += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
+LDLIBS += -lIPSec_MB
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_cryptodev
LDLIBS += -lrte_bus_vdev
+IMB_HDR = $(shell echo '\#include <intel-ipsec-mb.h>' | \
+ $(CC) -E $(EXTRA_CFLAGS) - | grep 'intel-ipsec-mb.h' | \
+ head -n1 | cut -d'"' -f2)
+
+# Detect library version
+IMB_VERSION = $(shell grep -e "IMB_VERSION_STR" $(IMB_HDR) | cut -d'"' -f2)
+IMB_VERSION_NUM = $(shell grep -e "IMB_VERSION_NUM" $(IMB_HDR) | cut -d' ' -f3)
+
+ifeq ($(IMB_VERSION),)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
+
+ifeq ($(shell expr $(IMB_VERSION_NUM) \< 0x3400), 1)
+$(error "IPSec_MB version >= 0.53 is required")
+endif
# library source files
SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += rte_kasumi_pmd_ops.c
diff --git a/drivers/crypto/kasumi/meson.build b/drivers/crypto/kasumi/meson.build
index a09b0e251..bda9a4052 100644
--- a/drivers/crypto/kasumi/meson.build
+++ b/drivers/crypto/kasumi/meson.build
@@ -1,11 +1,20 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-lib = cc.find_library('libsso_kasumi', required: false)
+IMB_required_ver = '0.53.0'
+lib = cc.find_library('IPSec_MB', required: false)
if not lib.found()
build = false
else
ext_deps += lib
+ # version comes with quotes, so we split based on " and take the middle
+ imb_ver = cc.get_define('IMB_VERSION_STR',
+ prefix : '#include<intel-ipsec-mb.h>').split('"')[1]
+
+ if (imb_ver == '') or (imb_ver.version_compare('<' + IMB_required_ver))
+ build = false
+ endif
+
endif
sources = files('rte_kasumi_pmd.c', 'rte_kasumi_pmd_ops.c')
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 239a1cf44..037abf710 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -54,7 +54,7 @@ kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
/** Parse crypto xform chain and set private session parameters. */
int
-kasumi_set_session_parameters(struct kasumi_session *sess,
+kasumi_set_session_parameters(MB_MGR *mgr, struct kasumi_session *sess,
const struct rte_crypto_sym_xform *xform)
{
const struct rte_crypto_sym_xform *auth_xform = NULL;
@@ -97,7 +97,7 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
}
/* Initialize key */
- sso_kasumi_init_f8_key_sched(cipher_xform->cipher.key.data,
+ IMB_KASUMI_INIT_F8_KEY_SCHED(mgr, cipher_xform->cipher.key.data,
&sess->pKeySched_cipher);
}
@@ -116,7 +116,7 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
sess->auth_op = auth_xform->auth.op;
/* Initialize key */
- sso_kasumi_init_f9_key_sched(auth_xform->auth.key.data,
+ IMB_KASUMI_INIT_F9_KEY_SCHED(mgr, auth_xform->auth.key.data,
&sess->pKeySched_hash);
}
@@ -150,7 +150,7 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
sess = (struct kasumi_session *)_sess_private_data;
- if (unlikely(kasumi_set_session_parameters(sess,
+ if (unlikely(kasumi_set_session_parameters(qp->mgr, sess,
op->sym->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
rte_mempool_put(qp->sess_mp, _sess_private_data);
@@ -169,13 +169,13 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
/** Encrypt/decrypt mbufs with same cipher key. */
static uint8_t
-process_kasumi_cipher_op(struct rte_crypto_op **ops,
- struct kasumi_session *session,
- uint8_t num_ops)
+process_kasumi_cipher_op(struct kasumi_qp *qp, struct rte_crypto_op **ops,
+ struct kasumi_session *session, uint8_t num_ops)
{
unsigned i;
uint8_t processed_ops = 0;
- uint8_t *src[num_ops], *dst[num_ops];
+ const void *src[num_ops];
+ void *dst[num_ops];
uint8_t *iv_ptr;
uint64_t iv[num_ops];
uint32_t num_bytes[num_ops];
@@ -197,7 +197,7 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
}
if (processed_ops != 0)
- sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, iv,
+ IMB_KASUMI_F8_N_BUFFER(qp->mgr, &session->pKeySched_cipher, iv,
src, dst, num_bytes, processed_ops);
return processed_ops;
@@ -205,7 +205,7 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
/** Encrypt/decrypt mbuf (bit level function). */
static uint8_t
-process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
+process_kasumi_cipher_op_bit(struct kasumi_qp *qp, struct rte_crypto_op *op,
struct kasumi_session *session)
{
uint8_t *src, *dst;
@@ -215,18 +215,16 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
offset_in_bits = op->sym->cipher.data.offset;
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
- if (op->sym->m_dst == NULL) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG(ERR, "bit-level in-place not supported");
- return 0;
- }
- dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+ if (op->sym->m_dst == NULL)
+ dst = src;
+ else
+ dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
session->cipher_iv_offset);
iv = *((uint64_t *)(iv_ptr));
length_in_bits = op->sym->cipher.data.length;
- sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
+ IMB_KASUMI_F8_1_BUFFER_BIT(qp->mgr, &session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
@@ -261,7 +259,8 @@ process_kasumi_hash_op(struct kasumi_qp *qp, struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = qp->temp_digest;
- sso_kasumi_f9_1_buffer(&session->pKeySched_hash, src,
+ IMB_KASUMI_F9_1_BUFFER(qp->mgr,
+ &session->pKeySched_hash, src,
num_bytes, dst);
/* Verify digest. */
@@ -271,7 +270,8 @@ process_kasumi_hash_op(struct kasumi_qp *qp, struct rte_crypto_op **ops,
} else {
dst = ops[i]->sym->auth.digest.data;
- sso_kasumi_f9_1_buffer(&session->pKeySched_hash, src,
+ IMB_KASUMI_F9_1_BUFFER(qp->mgr,
+ &session->pKeySched_hash, src,
num_bytes, dst);
}
processed_ops++;
@@ -291,7 +291,7 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
switch (session->op) {
case KASUMI_OP_ONLY_CIPHER:
- processed_ops = process_kasumi_cipher_op(ops,
+ processed_ops = process_kasumi_cipher_op(qp, ops,
session, num_ops);
break;
case KASUMI_OP_ONLY_AUTH:
@@ -299,14 +299,14 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
num_ops);
break;
case KASUMI_OP_CIPHER_AUTH:
- processed_ops = process_kasumi_cipher_op(ops, session,
+ processed_ops = process_kasumi_cipher_op(qp, ops, session,
num_ops);
process_kasumi_hash_op(qp, ops, session, processed_ops);
break;
case KASUMI_OP_AUTH_CIPHER:
processed_ops = process_kasumi_hash_op(qp, ops, session,
num_ops);
- process_kasumi_cipher_op(ops, session, processed_ops);
+ process_kasumi_cipher_op(qp, ops, session, processed_ops);
break;
default:
/* Operation not supported. */
@@ -348,21 +348,21 @@ process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
switch (session->op) {
case KASUMI_OP_ONLY_CIPHER:
- processed_op = process_kasumi_cipher_op_bit(op,
+ processed_op = process_kasumi_cipher_op_bit(qp, op,
session);
break;
case KASUMI_OP_ONLY_AUTH:
processed_op = process_kasumi_hash_op(qp, &op, session, 1);
break;
case KASUMI_OP_CIPHER_AUTH:
- processed_op = process_kasumi_cipher_op_bit(op, session);
+ processed_op = process_kasumi_cipher_op_bit(qp, op, session);
if (processed_op == 1)
process_kasumi_hash_op(qp, &op, session, 1);
break;
case KASUMI_OP_AUTH_CIPHER:
processed_op = process_kasumi_hash_op(qp, &op, session, 1);
if (processed_op == 1)
- process_kasumi_cipher_op_bit(op, session);
+ process_kasumi_cipher_op_bit(qp, op, session);
break;
default:
/* Operation not supported. */
@@ -531,7 +531,7 @@ cryptodev_kasumi_create(const char *name,
{
struct rte_cryptodev *dev;
struct kasumi_private *internals;
- uint64_t cpu_flags = 0;
+ MB_MGR *mgr;
dev = rte_cryptodev_pmd_create(name, &vdev->device, init_params);
if (dev == NULL) {
@@ -539,12 +539,6 @@ cryptodev_kasumi_create(const char *name,
goto init_error;
}
- /* Check CPU for supported vector instruction set */
- if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
- cpu_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
- else
- cpu_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
-
dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_kasumi_pmd_ops;
@@ -553,12 +547,24 @@ cryptodev_kasumi_create(const char *name,
dev->enqueue_burst = kasumi_pmd_enqueue_burst;
dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- cpu_flags;
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+ mgr = alloc_mb_mgr(0);
+ if (mgr == NULL)
+ return -ENOMEM;
+
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX)) {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(mgr);
+ } else {
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(mgr);
+ }
internals = dev->data->dev_private;
internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+ internals->mgr = mgr;
return 0;
init_error:
@@ -596,6 +602,7 @@ cryptodev_kasumi_remove(struct rte_vdev_device *vdev)
{
struct rte_cryptodev *cryptodev;
const char *name;
+ struct kasumi_private *internals;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -605,6 +612,10 @@ cryptodev_kasumi_remove(struct rte_vdev_device *vdev)
if (cryptodev == NULL)
return -ENODEV;
+ internals = cryptodev->data->dev_private;
+
+ free_mb_mgr(internals->mgr);
+
return rte_cryptodev_pmd_destroy(cryptodev);
}
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 9e4bf1b52..2f30115b9 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -195,6 +195,7 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
int socket_id, struct rte_mempool *session_pool)
{
struct kasumi_qp *qp = NULL;
+ struct kasumi_private *internals = dev->data->dev_private;
/* Free memory prior to re-allocation if needed. */
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -217,6 +218,7 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
+ qp->mgr = internals->mgr;
qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
@@ -245,13 +247,14 @@ kasumi_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
/** Configure a KASUMI session from a crypto xform chain */
static int
-kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
+kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess,
struct rte_mempool *mempool)
{
void *sess_private_data;
int ret;
+ struct kasumi_private *internals = dev->data->dev_private;
if (unlikely(sess == NULL)) {
KASUMI_LOG(ERR, "invalid session struct");
@@ -264,7 +267,8 @@ kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -ENOMEM;
}
- ret = kasumi_set_session_parameters(sess_private_data, xform);
+ ret = kasumi_set_session_parameters(internals->mgr,
+ sess_private_data, xform);
if (ret != 0) {
KASUMI_LOG(ERR, "failed configure session parameters");
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
index 488777ca8..3db52c03a 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -5,7 +5,7 @@
#ifndef _RTE_KASUMI_PMD_PRIVATE_H_
#define _RTE_KASUMI_PMD_PRIVATE_H_
-#include <sso_kasumi.h>
+#include <intel-ipsec-mb.h>
#define CRYPTODEV_NAME_KASUMI_PMD crypto_kasumi
/**< KASUMI PMD device name */
@@ -24,6 +24,8 @@ int kasumi_logtype_driver;
struct kasumi_private {
unsigned max_nb_queue_pairs;
/**< Max number of queue pairs supported by device */
+ MB_MGR *mgr;
+ /**< Multi-buffer instance */
};
/** KASUMI buffer queue pair */
@@ -43,6 +45,8 @@ struct kasumi_qp {
* by the driver when verifying a digest provided
* by the user (using authentication verify operation)
*/
+ MB_MGR *mgr;
+ /**< Multi-buffer instance */
} __rte_cache_aligned;
enum kasumi_operation {
@@ -56,8 +60,8 @@ enum kasumi_operation {
/** KASUMI private session structure */
struct kasumi_session {
/* Keys have to be 16-byte aligned */
- sso_kasumi_key_sched_t pKeySched_cipher;
- sso_kasumi_key_sched_t pKeySched_hash;
+ kasumi_key_sched_t pKeySched_cipher;
+ kasumi_key_sched_t pKeySched_hash;
enum kasumi_operation op;
enum rte_crypto_auth_operation auth_op;
uint16_t cipher_iv_offset;
@@ -65,7 +69,7 @@ struct kasumi_session {
int
-kasumi_set_session_parameters(struct kasumi_session *sess,
+kasumi_set_session_parameters(MB_MGR *mgr, struct kasumi_session *sess,
const struct rte_crypto_sym_xform *xform);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index b33603ef5..0ec48482c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -231,7 +231,7 @@ endif # CONFIG_RTE_LIBRTE_PMD_QAT
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lrte_pmd_snow3g
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += -lIPSec_MB
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lrte_pmd_kasumi
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += -lIPSec_MB
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -lrte_pmd_zuc
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += -lIPSec_MB
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += -lrte_pmd_armv8
--
2.24.1
* [dpdk-dev] [RFC PATCH 4/5] crypto/aesni_mb: support IPSec MB library v0.53
From: Pablo de Lara @ 2020-03-05 15:34 UTC
To: dev; +Cc: Pablo de Lara
Add support for the underlying IPSec Multi-buffer library v0.53.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
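Reviewer note (not part of the patch): a minimal sketch of the AES-GMAC
key pre-processing added in the session setup below. IMB_AES128_GCM_PRE()
and struct gcm_key_data are the v0.53 API used in the diff; the demo
wrapper and zero key are hypothetical.

#include <stdint.h>
#include <intel-ipsec-mb.h>

/* Hypothetical demo: expand an AES-128 key for GMAC/GCM use, as the
 * session setup below does per key length. */
static int gmac_key_prep_demo(void)
{
	const uint8_t key[16] = {0};
	struct gcm_key_data gkey;

	MB_MGR *mgr = alloc_mb_mgr(0);
	if (mgr == NULL)
		return -1;
	init_mb_mgr_sse(mgr);

	IMB_AES128_GCM_PRE(mgr, key, &gkey);	/* expands the hash key */

	free_mb_mgr(mgr);
	return 0;
}

The 192- and 256-bit cases follow the same pattern through
IMB_AES192_GCM_PRE()/IMB_AES256_GCM_PRE(), as in the key-length switch
below.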
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 476 ++++++++++++------
.../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 205 +++++---
.../aesni_mb/rte_aesni_mb_pmd_private.h | 30 +-
3 files changed, 489 insertions(+), 222 deletions(-)
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 83250e32c..9dfa89f71 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -35,7 +35,7 @@ typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_
static void
calculate_auth_precomputes(hash_one_block_t one_block_hash,
uint8_t *ipad, uint8_t *opad,
- uint8_t *hkey, uint16_t hkey_len,
+ const uint8_t *hkey, uint16_t hkey_len,
uint16_t blocksize)
{
unsigned i, length;
@@ -85,6 +85,25 @@ aesni_mb_get_chain_order(const struct rte_crypto_sym_xform *xform)
return AESNI_MB_OP_HASH_CIPHER;
}
+#if IMB_VERSION_NUM > IMB_VERSION(0, 52, 0)
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+ if (xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
+ /*
+ * CCM requires to hash first and cipher later
+ * when encrypting
+ */
+ if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_CCM)
+ return AESNI_MB_OP_AEAD_HASH_CIPHER;
+ else
+ return AESNI_MB_OP_AEAD_CIPHER_HASH;
+ } else {
+ if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_CCM)
+ return AESNI_MB_OP_AEAD_CIPHER_HASH;
+ else
+ return AESNI_MB_OP_AEAD_HASH_CIPHER;
+ }
+ }
+#else
if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_CCM ||
xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
@@ -94,19 +113,21 @@ aesni_mb_get_chain_order(const struct rte_crypto_sym_xform *xform)
return AESNI_MB_OP_AEAD_HASH_CIPHER;
}
}
+#endif
return AESNI_MB_OP_NOT_SUPPORTED;
}
/** Set session authentication parameters */
static int
-aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
+aesni_mb_set_session_auth_parameters(const MB_MGR *mb_mgr,
struct aesni_mb_session *sess,
const struct rte_crypto_sym_xform *xform)
{
- hash_one_block_t hash_oneblock_fn;
+ hash_one_block_t hash_oneblock_fn = NULL;
unsigned int key_larger_block_size = 0;
uint8_t hashed_key[HMAC_MAX_BLOCK_SIZE] = { 0 };
+ uint32_t auth_precompute = 1;
if (xform == NULL) {
sess->auth.algo = NULL_HASH;
@@ -135,13 +156,16 @@ aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
return -EINVAL;
}
sess->auth.gen_digest_len = sess->auth.req_digest_len;
- (*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+
+ IMB_AES_XCBC_KEYEXP(mb_mgr, xform->auth.key.data,
sess->auth.xcbc.k1_expanded,
sess->auth.xcbc.k2, sess->auth.xcbc.k3);
return 0;
}
if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_CMAC) {
+ uint32_t dust[4*15];
+
sess->auth.algo = AES_CMAC;
uint16_t cmac_digest_len = get_digest_byte_length(AES_CMAC);
@@ -150,102 +174,144 @@ aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
AESNI_MB_LOG(ERR, "Invalid digest size\n");
return -EINVAL;
}
+ if (sess->auth.req_digest_len < 4)
+ sess->auth.gen_digest_len = cmac_digest_len;
+ else
+ sess->auth.gen_digest_len = sess->auth.req_digest_len;
+ IMB_AES_KEYEXP_128(mb_mgr, xform->auth.key.data,
+ sess->auth.cmac.expkey, dust);
+ IMB_AES_CMAC_SUBKEY_GEN_128(mb_mgr, sess->auth.cmac.expkey,
+ sess->auth.cmac.skey1, sess->auth.cmac.skey2);
+ return 0;
+ }
+
+ if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ if (xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ sess->cipher.direction = ENCRYPT;
+ sess->chain_order = CIPHER_HASH;
+ } else
+ sess->cipher.direction = DECRYPT;
+
+ sess->auth.algo = AES_GMAC;
/*
- * Multi-buffer lib supports digest sizes from 4 to 16 bytes
- * in version 0.50 and sizes of 12 and 16 bytes,
- * in version 0.49.
+ * Multi-buffer lib supports 8, 12 and 16 bytes of digest.
* If size requested is different, generate the full digest
* (16 bytes) in a temporary location and then memcpy
* the requested number of bytes.
*/
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
- if (sess->auth.req_digest_len < 4)
-#else
- uint16_t cmac_trunc_digest_len =
- get_truncated_digest_byte_length(AES_CMAC);
- if (sess->auth.req_digest_len != cmac_digest_len &&
- sess->auth.req_digest_len != cmac_trunc_digest_len)
-#endif
- sess->auth.gen_digest_len = cmac_digest_len;
- else
+ if (sess->auth.req_digest_len != 16 &&
+ sess->auth.req_digest_len != 12 &&
+ sess->auth.req_digest_len != 8) {
+ sess->auth.gen_digest_len = 16;
+ } else {
sess->auth.gen_digest_len = sess->auth.req_digest_len;
- (*mb_ops->aux.keyexp.aes_cmac_expkey)(xform->auth.key.data,
- sess->auth.cmac.expkey);
+ }
+ sess->iv.length = xform->auth.iv.length;
+ sess->iv.offset = xform->auth.iv.offset;
+
+ switch (xform->auth.key.length) {
+ case AES_128_BYTES:
+ IMB_AES128_GCM_PRE(mb_mgr, xform->auth.key.data,
+ &sess->cipher.gcm_key);
+ sess->cipher.key_length_in_bytes = AES_128_BYTES;
+ break;
+ case AES_192_BYTES:
+ IMB_AES192_GCM_PRE(mb_mgr, xform->auth.key.data,
+ &sess->cipher.gcm_key);
+ sess->cipher.key_length_in_bytes = AES_192_BYTES;
+ break;
+ case AES_256_BYTES:
+ IMB_AES256_GCM_PRE(mb_mgr, xform->auth.key.data,
+ &sess->cipher.gcm_key);
+ sess->cipher.key_length_in_bytes = AES_256_BYTES;
+ break;
+ default:
+ RTE_LOG(ERR, PMD, "failed to parse test type\n");
+ return -EINVAL;
+ }
- (*mb_ops->aux.keyexp.aes_cmac_subkey)(sess->auth.cmac.expkey,
- sess->auth.cmac.skey1, sess->auth.cmac.skey2);
return 0;
}
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_MD5_HMAC:
sess->auth.algo = MD5;
- hash_oneblock_fn = mb_ops->aux.one_block.md5;
+ hash_oneblock_fn = mb_mgr->md5_one_block;
break;
case RTE_CRYPTO_AUTH_SHA1_HMAC:
sess->auth.algo = SHA1;
- hash_oneblock_fn = mb_ops->aux.one_block.sha1;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
+ hash_oneblock_fn = mb_mgr->sha1_one_block;
if (xform->auth.key.length > get_auth_algo_blocksize(SHA1)) {
- mb_ops->aux.multi_block.sha1(
+ IMB_SHA1(mb_mgr,
xform->auth.key.data,
xform->auth.key.length,
hashed_key);
key_larger_block_size = 1;
}
-#endif
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ sess->auth.algo = PLAIN_SHA1;
+ auth_precompute = 0;
break;
case RTE_CRYPTO_AUTH_SHA224_HMAC:
sess->auth.algo = SHA_224;
- hash_oneblock_fn = mb_ops->aux.one_block.sha224;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
+ hash_oneblock_fn = mb_mgr->sha224_one_block;
if (xform->auth.key.length > get_auth_algo_blocksize(SHA_224)) {
- mb_ops->aux.multi_block.sha224(
+ IMB_SHA224(mb_mgr,
xform->auth.key.data,
xform->auth.key.length,
hashed_key);
key_larger_block_size = 1;
}
-#endif
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ sess->auth.algo = PLAIN_SHA_224;
+ auth_precompute = 0;
break;
case RTE_CRYPTO_AUTH_SHA256_HMAC:
sess->auth.algo = SHA_256;
- hash_oneblock_fn = mb_ops->aux.one_block.sha256;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
+ hash_oneblock_fn = mb_mgr->sha256_one_block;
if (xform->auth.key.length > get_auth_algo_blocksize(SHA_256)) {
- mb_ops->aux.multi_block.sha256(
+ IMB_SHA256(mb_mgr,
xform->auth.key.data,
xform->auth.key.length,
hashed_key);
key_larger_block_size = 1;
}
-#endif
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ sess->auth.algo = PLAIN_SHA_256;
+ auth_precompute = 0;
break;
case RTE_CRYPTO_AUTH_SHA384_HMAC:
sess->auth.algo = SHA_384;
- hash_oneblock_fn = mb_ops->aux.one_block.sha384;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
+ hash_oneblock_fn = mb_mgr->sha384_one_block;
if (xform->auth.key.length > get_auth_algo_blocksize(SHA_384)) {
- mb_ops->aux.multi_block.sha384(
+ IMB_SHA384(mb_mgr,
xform->auth.key.data,
xform->auth.key.length,
hashed_key);
key_larger_block_size = 1;
}
-#endif
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ sess->auth.algo = PLAIN_SHA_384;
+ auth_precompute = 0;
break;
case RTE_CRYPTO_AUTH_SHA512_HMAC:
sess->auth.algo = SHA_512;
- hash_oneblock_fn = mb_ops->aux.one_block.sha512;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
+ hash_oneblock_fn = mb_mgr->sha512_one_block;
if (xform->auth.key.length > get_auth_algo_blocksize(SHA_512)) {
- mb_ops->aux.multi_block.sha512(
+ IMB_SHA512(mb_mgr,
xform->auth.key.data,
xform->auth.key.length,
hashed_key);
key_larger_block_size = 1;
}
-#endif
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ sess->auth.algo = PLAIN_SHA_512;
+ auth_precompute = 0;
break;
default:
AESNI_MB_LOG(ERR, "Unsupported authentication algorithm selection");
@@ -256,12 +322,8 @@ aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
uint16_t full_digest_size =
get_digest_byte_length(sess->auth.algo);
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
if (sess->auth.req_digest_len > full_digest_size ||
sess->auth.req_digest_len == 0) {
-#else
- if (sess->auth.req_digest_len != trunc_digest_size) {
-#endif
AESNI_MB_LOG(ERR, "Invalid digest size\n");
return -EINVAL;
}
@@ -272,6 +334,10 @@ aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
else
sess->auth.gen_digest_len = sess->auth.req_digest_len;
+ /* Plain SHA does not require precompute key */
+ if (auth_precompute == 0)
+ return 0;
+
/* Calculate Authentication precomputes */
if (key_larger_block_size) {
calculate_auth_precomputes(hash_oneblock_fn,
@@ -292,13 +358,12 @@ aesni_mb_set_session_auth_parameters(const struct aesni_mb_op_fns *mb_ops,
/** Set session cipher parameters */
static int
-aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
+aesni_mb_set_session_cipher_parameters(const MB_MGR *mb_mgr,
struct aesni_mb_session *sess,
const struct rte_crypto_sym_xform *xform)
{
uint8_t is_aes = 0;
uint8_t is_3DES = 0;
- aes_keyexp_t aes_keyexp_fn;
if (xform == NULL) {
sess->cipher.mode = NULL_CIPHER;
@@ -361,26 +426,26 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
switch (xform->cipher.key.length) {
case AES_128_BYTES:
sess->cipher.key_length_in_bytes = AES_128_BYTES;
- aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+ IMB_AES_KEYEXP_128(mb_mgr, xform->cipher.key.data,
+ sess->cipher.expanded_aes_keys.encode,
+ sess->cipher.expanded_aes_keys.decode);
break;
case AES_192_BYTES:
sess->cipher.key_length_in_bytes = AES_192_BYTES;
- aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+ IMB_AES_KEYEXP_192(mb_mgr, xform->cipher.key.data,
+ sess->cipher.expanded_aes_keys.encode,
+ sess->cipher.expanded_aes_keys.decode);
break;
case AES_256_BYTES:
sess->cipher.key_length_in_bytes = AES_256_BYTES;
- aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+ IMB_AES_KEYEXP_256(mb_mgr, xform->cipher.key.data,
+ sess->cipher.expanded_aes_keys.encode,
+ sess->cipher.expanded_aes_keys.decode);
break;
default:
AESNI_MB_LOG(ERR, "Invalid cipher key length");
return -EINVAL;
}
-
- /* Expanded cipher keys */
- (*aes_keyexp_fn)(xform->cipher.key.data,
- sess->cipher.expanded_aes_keys.encode,
- sess->cipher.expanded_aes_keys.decode);
-
} else if (is_3DES) {
uint64_t *keys[3] = {sess->cipher.exp_3des_keys.key[0],
sess->cipher.exp_3des_keys.key[1],
@@ -388,9 +453,12 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
switch (xform->cipher.key.length) {
case 24:
- des_key_schedule(keys[0], xform->cipher.key.data);
- des_key_schedule(keys[1], xform->cipher.key.data+8);
- des_key_schedule(keys[2], xform->cipher.key.data+16);
+ IMB_DES_KEYSCHED(mb_mgr, keys[0],
+ xform->cipher.key.data);
+ IMB_DES_KEYSCHED(mb_mgr, keys[1],
+ xform->cipher.key.data + 8);
+ IMB_DES_KEYSCHED(mb_mgr, keys[2],
+ xform->cipher.key.data + 16);
/* Initialize keys - 24 bytes: [K1-K2-K3] */
sess->cipher.exp_3des_keys.ks_ptr[0] = keys[0];
@@ -398,8 +466,10 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
sess->cipher.exp_3des_keys.ks_ptr[2] = keys[2];
break;
case 16:
- des_key_schedule(keys[0], xform->cipher.key.data);
- des_key_schedule(keys[1], xform->cipher.key.data+8);
+ IMB_DES_KEYSCHED(mb_mgr, keys[0],
+ xform->cipher.key.data);
+ IMB_DES_KEYSCHED(mb_mgr, keys[1],
+ xform->cipher.key.data + 8);
/* Initialize keys - 16 bytes: [K1=K1,K2=K2,K3=K1] */
sess->cipher.exp_3des_keys.ks_ptr[0] = keys[0];
@@ -407,7 +477,8 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
sess->cipher.exp_3des_keys.ks_ptr[2] = keys[0];
break;
case 8:
- des_key_schedule(keys[0], xform->cipher.key.data);
+ IMB_DES_KEYSCHED(mb_mgr, keys[0],
+ xform->cipher.key.data);
/* Initialize keys - 8 bytes: [K1 = K2 = K3] */
sess->cipher.exp_3des_keys.ks_ptr[0] = keys[0];
@@ -419,11 +490,7 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
return -EINVAL;
}
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
sess->cipher.key_length_in_bytes = 24;
-#else
- sess->cipher.key_length_in_bytes = 8;
-#endif
} else {
if (xform->cipher.key.length != 8) {
AESNI_MB_LOG(ERR, "Invalid cipher key length");
@@ -431,9 +498,11 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
}
sess->cipher.key_length_in_bytes = 8;
- des_key_schedule((uint64_t *)sess->cipher.expanded_aes_keys.encode,
+ IMB_DES_KEYSCHED(mb_mgr,
+ (uint64_t *)sess->cipher.expanded_aes_keys.encode,
xform->cipher.key.data);
- des_key_schedule((uint64_t *)sess->cipher.expanded_aes_keys.decode,
+ IMB_DES_KEYSCHED(mb_mgr,
+ (uint64_t *)sess->cipher.expanded_aes_keys.decode,
xform->cipher.key.data);
}
@@ -441,15 +510,10 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
}
static int
-aesni_mb_set_session_aead_parameters(const struct aesni_mb_op_fns *mb_ops,
+aesni_mb_set_session_aead_parameters(const MB_MGR *mb_mgr,
struct aesni_mb_session *sess,
const struct rte_crypto_sym_xform *xform)
{
- union {
- aes_keyexp_t aes_keyexp_fn;
- aes_gcm_keyexp_t aes_gcm_keyexp_fn;
- } keyexp;
-
switch (xform->aead.op) {
case RTE_CRYPTO_AEAD_OP_ENCRYPT:
sess->cipher.direction = ENCRYPT;
@@ -473,17 +537,15 @@ aesni_mb_set_session_aead_parameters(const struct aesni_mb_op_fns *mb_ops,
switch (xform->aead.key.length) {
case AES_128_BYTES:
sess->cipher.key_length_in_bytes = AES_128_BYTES;
- keyexp.aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+ IMB_AES_KEYEXP_128(mb_mgr, xform->aead.key.data,
+ sess->cipher.expanded_aes_keys.encode,
+ sess->cipher.expanded_aes_keys.decode);
break;
default:
AESNI_MB_LOG(ERR, "Invalid cipher key length");
return -EINVAL;
}
- /* Expanded cipher keys */
- (*keyexp.aes_keyexp_fn)(xform->aead.key.data,
- sess->cipher.expanded_aes_keys.encode,
- sess->cipher.expanded_aes_keys.decode);
break;
case RTE_CRYPTO_AEAD_AES_GCM:
@@ -493,26 +555,24 @@ aesni_mb_set_session_aead_parameters(const struct aesni_mb_op_fns *mb_ops,
switch (xform->aead.key.length) {
case AES_128_BYTES:
sess->cipher.key_length_in_bytes = AES_128_BYTES;
- keyexp.aes_gcm_keyexp_fn =
- mb_ops->aux.keyexp.aes_gcm_128;
+ IMB_AES128_GCM_PRE(mb_mgr, xform->aead.key.data,
+ &sess->cipher.gcm_key);
break;
case AES_192_BYTES:
sess->cipher.key_length_in_bytes = AES_192_BYTES;
- keyexp.aes_gcm_keyexp_fn =
- mb_ops->aux.keyexp.aes_gcm_192;
+ IMB_AES192_GCM_PRE(mb_mgr, xform->aead.key.data,
+ &sess->cipher.gcm_key);
break;
case AES_256_BYTES:
sess->cipher.key_length_in_bytes = AES_256_BYTES;
- keyexp.aes_gcm_keyexp_fn =
- mb_ops->aux.keyexp.aes_gcm_256;
+ IMB_AES256_GCM_PRE(mb_mgr, xform->aead.key.data,
+ &sess->cipher.gcm_key);
break;
default:
AESNI_MB_LOG(ERR, "Invalid cipher key length");
return -EINVAL;
}
- (keyexp.aes_gcm_keyexp_fn)(xform->aead.key.data,
- &sess->cipher.gcm_key);
break;
default:
@@ -539,7 +599,7 @@ aesni_mb_set_session_aead_parameters(const struct aesni_mb_op_fns *mb_ops,
/** Parse crypto xform chain and set private session parameters */
int
-aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
+aesni_mb_set_session_parameters(const MB_MGR *mb_mgr,
struct aesni_mb_session *sess,
const struct rte_crypto_sym_xform *xform)
{
@@ -598,13 +658,13 @@ aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
/* Default IV length = 0 */
sess->iv.length = 0;
- ret = aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform);
+ ret = aesni_mb_set_session_auth_parameters(mb_mgr, sess, auth_xform);
if (ret != 0) {
AESNI_MB_LOG(ERR, "Invalid/unsupported authentication parameters");
return ret;
}
- ret = aesni_mb_set_session_cipher_parameters(mb_ops, sess,
+ ret = aesni_mb_set_session_cipher_parameters(mb_mgr, sess,
cipher_xform);
if (ret != 0) {
AESNI_MB_LOG(ERR, "Invalid/unsupported cipher parameters");
@@ -612,7 +672,7 @@ aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
}
if (aead_xform) {
- ret = aesni_mb_set_session_aead_parameters(mb_ops, sess,
+ ret = aesni_mb_set_session_aead_parameters(mb_mgr, sess,
aead_xform);
if (ret != 0) {
AESNI_MB_LOG(ERR, "Invalid/unsupported aead parameters");
@@ -673,7 +733,7 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
sess = (struct aesni_mb_session *)_sess_private_data;
- if (unlikely(aesni_mb_set_session_parameters(qp->op_fns,
+ if (unlikely(aesni_mb_set_session_parameters(qp->mb_mgr,
sess, op->sym->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
rte_mempool_put(qp->sess_mp, _sess_private_data);
@@ -690,6 +750,56 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
return sess;
}
+static inline uint64_t
+auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
+ uint32_t oop)
+{
+ struct rte_mbuf *m_src, *m_dst;
+ uint8_t *p_src, *p_dst;
+ uintptr_t u_src, u_dst;
+ uint32_t cipher_end, auth_end;
+
+ /* Only cipher then hash needs special calculation. */
+ if (!oop || session->chain_order != CIPHER_HASH)
+ return op->sym->auth.data.offset;
+
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+
+ p_src = rte_pktmbuf_mtod(m_src, uint8_t *);
+ p_dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ u_src = (uintptr_t)p_src;
+ u_dst = (uintptr_t)p_dst + op->sym->auth.data.offset;
+
+ /**
+ * Copy the content between cipher offset and auth offset for generating
+ * correct digest.
+ */
+ if (op->sym->cipher.data.offset > op->sym->auth.data.offset)
+ memcpy(p_dst + op->sym->auth.data.offset,
+ p_src + op->sym->auth.data.offset,
+ op->sym->cipher.data.offset -
+ op->sym->auth.data.offset);
+
+ /**
+ * Copy the content between (cipher offset + length) and (auth offset +
+ * length) for generating correct digest
+ */
+ cipher_end = op->sym->cipher.data.offset + op->sym->cipher.data.length;
+ auth_end = op->sym->auth.data.offset + op->sym->auth.data.length;
+ if (cipher_end < auth_end)
+ memcpy(p_dst + cipher_end, p_src + cipher_end,
+ auth_end - cipher_end);
+
+ /**
+ * Since intel-ipsec-mb only supports positive values,
+ * we need to deduct the correct offset between src and dst.
+ */
+
+ return u_src < u_dst ? (u_dst - u_src) :
+ (UINT64_MAX - u_src + u_dst + 1);
+}
+
/**
* Process a crypto operation and complete a JOB_AES_HMAC job structure for
* submission to the multi buffer library for processing.
@@ -708,7 +818,7 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
{
struct rte_mbuf *m_src = op->sym->m_src, *m_dst;
struct aesni_mb_session *session;
- uint16_t m_offset = 0;
+ uint32_t m_offset, oop;
session = get_session(qp, op);
if (session == NULL) {
@@ -760,8 +870,16 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
break;
case AES_GMAC:
- job->u.GCM.aad = op->sym->aead.aad.data;
- job->u.GCM.aad_len_in_bytes = session->aead.aad_len;
+ if (session->cipher.mode == GCM) {
+ job->u.GCM.aad = op->sym->aead.aad.data;
+ job->u.GCM.aad_len_in_bytes = session->aead.aad_len;
+ } else {
+ /* For GMAC */
+ job->u.GCM.aad = rte_pktmbuf_mtod_offset(m_src,
+ uint8_t *, op->sym->auth.data.offset);
+ job->u.GCM.aad_len_in_bytes = op->sym->auth.data.length;
+ job->cipher_mode = GCM;
+ }
job->aes_enc_key_expanded = &session->cipher.gcm_key;
job->aes_dec_key_expanded = &session->cipher.gcm_key;
break;
@@ -783,37 +901,34 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
}
}
- /* Mutable crypto operation parameters */
- if (op->sym->m_dst) {
- m_src = m_dst = op->sym->m_dst;
-
- /* append space for output data to mbuf */
- char *odata = rte_pktmbuf_append(m_dst,
- rte_pktmbuf_data_len(op->sym->m_src));
- if (odata == NULL) {
- AESNI_MB_LOG(ERR, "failed to allocate space in destination "
- "mbuf for source data");
- op->status = RTE_CRYPTO_OP_STATUS_ERROR;
- return -1;
- }
-
- memcpy(odata, rte_pktmbuf_mtod(op->sym->m_src, void*),
- rte_pktmbuf_data_len(op->sym->m_src));
- } else {
+ if (!op->sym->m_dst) {
+ /* in-place operation */
m_dst = m_src;
- if (job->hash_alg == AES_CCM || job->hash_alg == AES_GMAC)
- m_offset = op->sym->aead.data.offset;
- else
- m_offset = op->sym->cipher.data.offset;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
}
+ if (job->hash_alg == AES_CCM || (job->hash_alg == AES_GMAC &&
+ session->cipher.mode == GCM))
+ m_offset = op->sym->aead.data.offset;
+ else
+ m_offset = op->sym->cipher.data.offset;
+
/* Set digest output location */
if (job->hash_alg != NULL_HASH &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
job->auth_tag_output = qp->temp_digests[*digest_idx];
*digest_idx = (*digest_idx + 1) % MAX_JOBS;
} else {
- if (job->hash_alg == AES_CCM || job->hash_alg == AES_GMAC)
+ if (job->hash_alg == AES_CCM || (job->hash_alg == AES_GMAC &&
+ session->cipher.mode == GCM))
job->auth_tag_output = op->sym->aead.digest.data;
else
job->auth_tag_output = op->sym->auth.digest.data;
@@ -834,7 +949,7 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
/* Set IV parameters */
job->iv_len_in_bytes = session->iv.length;
- /* Data Parameter */
+ /* Data Parameters */
job->src = rte_pktmbuf_mtod(m_src, uint8_t *);
job->dst = rte_pktmbuf_mtod_offset(m_dst, uint8_t *, m_offset);
@@ -851,11 +966,24 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
break;
case AES_GMAC:
- job->cipher_start_src_offset_in_bytes =
- op->sym->aead.data.offset;
- job->hash_start_src_offset_in_bytes = op->sym->aead.data.offset;
- job->msg_len_to_cipher_in_bytes = op->sym->aead.data.length;
- job->msg_len_to_hash_in_bytes = job->msg_len_to_cipher_in_bytes;
+ if (session->cipher.mode == GCM) {
+ job->cipher_start_src_offset_in_bytes =
+ op->sym->aead.data.offset;
+ job->hash_start_src_offset_in_bytes =
+ op->sym->aead.data.offset;
+ job->msg_len_to_cipher_in_bytes =
+ op->sym->aead.data.length;
+ job->msg_len_to_hash_in_bytes =
+ op->sym->aead.data.length;
+ } else {
+ job->cipher_start_src_offset_in_bytes =
+ op->sym->auth.data.offset;
+ job->hash_start_src_offset_in_bytes =
+ op->sym->auth.data.offset;
+ job->msg_len_to_cipher_in_bytes = 0;
+ job->msg_len_to_hash_in_bytes = 0;
+ }
+
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
session->iv.offset);
break;
@@ -865,7 +993,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
op->sym->cipher.data.offset;
job->msg_len_to_cipher_in_bytes = op->sym->cipher.data.length;
- job->hash_start_src_offset_in_bytes = op->sym->auth.data.offset;
+ job->hash_start_src_offset_in_bytes = auth_start_offset(op,
+ session, oop);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -879,26 +1008,18 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
}
static inline void
-verify_digest(JOB_AES_HMAC *job, struct rte_crypto_op *op,
- struct aesni_mb_session *sess)
+verify_digest(JOB_AES_HMAC *job, void *digest, uint16_t len, uint8_t *status)
{
/* Verify digest if required */
- if (job->hash_alg == AES_CCM || job->hash_alg == AES_GMAC) {
- if (memcmp(job->auth_tag_output, op->sym->aead.digest.data,
- sess->auth.req_digest_len) != 0)
- op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- } else {
- if (memcmp(job->auth_tag_output, op->sym->auth.digest.data,
- sess->auth.req_digest_len) != 0)
- op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- }
+ if (memcmp(job->auth_tag_output, digest, len) != 0)
+ *status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
}
static inline void
generate_digest(JOB_AES_HMAC *job, struct rte_crypto_op *op,
struct aesni_mb_session *sess)
{
- /* No extra copy neeed */
+ /* No extra copy needed */
if (likely(sess->auth.req_digest_len == sess->auth.gen_digest_len))
return;
@@ -933,13 +1054,24 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
case STS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- if (job->hash_alg != NULL_HASH) {
- if (sess->auth.operation ==
- RTE_CRYPTO_AUTH_OP_VERIFY)
- verify_digest(job, op, sess);
+ if (job->hash_alg == NULL_HASH)
+ break;
+
+ if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ if (job->hash_alg == AES_CCM ||
+ (job->hash_alg == AES_GMAC &&
+ sess->cipher.mode == GCM))
+ verify_digest(job,
+ op->sym->aead.digest.data,
+ sess->auth.req_digest_len,
+ &op->status);
else
- generate_digest(job, op, sess);
- }
+ verify_digest(job,
+ op->sym->auth.digest.data,
+ sess->auth.req_digest_len,
+ &op->status);
+ } else
+ generate_digest(job, op, sess);
break;
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -989,7 +1121,7 @@ handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job,
if (processed_jobs == nb_ops)
break;
- job = (*qp->op_fns->job.get_completed_job)(qp->mb_mgr);
+ job = IMB_GET_COMPLETED_JOB(qp->mb_mgr);
}
return processed_jobs;
@@ -1002,7 +1134,7 @@ flush_mb_mgr(struct aesni_mb_qp *qp, struct rte_crypto_op **ops,
int processed_ops = 0;
/* Flush the remaining jobs */
- JOB_AES_HMAC *job = (*qp->op_fns->job.flush_job)(qp->mb_mgr);
+ JOB_AES_HMAC *job = IMB_FLUSH_JOB(qp->mb_mgr);
if (job)
processed_ops += handle_completed_jobs(qp, job,
@@ -1042,7 +1174,7 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
uint8_t digest_idx = qp->digest_idx;
do {
/* Get next free mb job struct from mb manager */
- job = (*qp->op_fns->job.get_next)(qp->mb_mgr);
+ job = IMB_GET_NEXT_JOB(qp->mb_mgr);
if (unlikely(job == NULL)) {
/* if no free mb job structs we need to flush mb_mgr */
processed_jobs += flush_mb_mgr(qp,
@@ -1052,7 +1184,7 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
if (nb_ops == processed_jobs)
break;
- job = (*qp->op_fns->job.get_next)(qp->mb_mgr);
+ job = IMB_GET_NEXT_JOB(qp->mb_mgr);
}
/*
@@ -1072,8 +1204,11 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
}
/* Submit job to multi-buffer for processing */
- job = (*qp->op_fns->job.submit)(qp->mb_mgr);
-
+#ifdef RTE_LIBRTE_PMD_AESNI_MB_DEBUG
+ job = IMB_SUBMIT_JOB(qp->mb_mgr);
+#else
+ job = IMB_SUBMIT_JOB_NOCHECK(qp->mb_mgr);
+#endif
/*
* If submit returns a processed job then handle it,
* before submitting subsequent jobs
@@ -1105,12 +1240,7 @@ cryptodev_aesni_mb_create(const char *name,
struct rte_cryptodev *dev;
struct aesni_mb_private *internals;
enum aesni_mb_vector_mode vector_mode;
-
- /* Check CPU for support for AES instruction set */
- if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
- AESNI_MB_LOG(ERR, "AES instructions not supported by CPU");
- return -EFAULT;
- }
+ MB_MGR *mb_mgr;
dev = rte_cryptodev_pmd_create(name, &vdev->device, init_params);
if (dev == NULL) {
@@ -1137,23 +1267,38 @@ cryptodev_aesni_mb_create(const char *name,
dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_CPU_AESNI;
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ /* Check CPU for support for AES instruction set */
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AESNI;
+ else
+ AESNI_MB_LOG(WARNING, "AES instructions not supported by CPU");
+
+ mb_mgr = alloc_mb_mgr(0);
+ if (mb_mgr == NULL)
+ return -ENOMEM;
switch (vector_mode) {
case RTE_AESNI_MB_SSE:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(mb_mgr);
break;
case RTE_AESNI_MB_AVX:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(mb_mgr);
break;
case RTE_AESNI_MB_AVX2:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx2(mb_mgr);
break;
case RTE_AESNI_MB_AVX512:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX512;
+ init_mb_mgr_avx512(mb_mgr);
break;
default:
- break;
+ AESNI_MB_LOG(ERR, "Unsupported vector mode %u\n", vector_mode);
+ goto error_exit;
}
/* Set vector instructions mode supported */
@@ -1161,15 +1306,19 @@ cryptodev_aesni_mb_create(const char *name,
internals->vector_mode = vector_mode;
internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+ internals->mb_mgr = mb_mgr;
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
AESNI_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n",
imb_get_version_str());
-#else
- AESNI_MB_LOG(INFO, "IPSec Multi-buffer library version used: 0.49.0\n");
-#endif
return 0;
+error_exit:
+ if (mb_mgr)
+ free_mb_mgr(mb_mgr);
+
+ rte_cryptodev_pmd_destroy(dev);
+
+ return -1;
}
static int
@@ -1204,6 +1353,7 @@ static int
cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev)
{
struct rte_cryptodev *cryptodev;
+ struct aesni_mb_private *internals;
const char *name;
name = rte_vdev_device_name(vdev);
@@ -1214,6 +1364,10 @@ cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev)
if (cryptodev == NULL)
return -ENODEV;
+ internals = cryptodev->data->dev_private;
+
+ free_mb_mgr(internals->mb_mgr);
+
return rte_cryptodev_pmd_destroy(cryptodev);
}
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index f3eff2685..e58dfa33b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -25,15 +25,9 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 16,
.increment = 1
-#else
- .min = 12,
- .max = 12,
- .increment = 0
-#endif
},
.iv_size = { 0 }
}, }
@@ -48,23 +42,34 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.block_size = 64,
.key_size = {
.min = 1,
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.max = 65535,
-#else
- .max = 64,
-#endif
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 20,
.increment = 1
-#else
- .min = 12,
- .max = 12,
+ },
+ .iv_size = { 0 }
+ }, }
+ }, }
+ },
+ { /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
.increment = 0
-#endif
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 20,
+ .increment = 1
},
.iv_size = { 0 }
}, }
@@ -79,23 +84,34 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.block_size = 64,
.key_size = {
.min = 1,
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.max = 65535,
-#else
- .max = 64,
-#endif
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 28,
.increment = 1
-#else
- .min = 14,
- .max = 14,
+ },
+ .iv_size = { 0 }
+ }, }
+ }, }
+ },
+ { /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
.increment = 0
-#endif
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 28,
+ .increment = 1
},
.iv_size = { 0 }
}, }
@@ -110,23 +126,34 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.block_size = 64,
.key_size = {
.min = 1,
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.max = 65535,
-#else
- .max = 64,
-#endif
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 32,
.increment = 1
-#else
- .min = 16,
- .max = 16,
+ },
+ .iv_size = { 0 }
+ }, }
+ }, }
+ },
+ { /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
.increment = 0
-#endif
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 32,
+ .increment = 1
},
.iv_size = { 0 }
}, }
@@ -141,23 +168,34 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.block_size = 128,
.key_size = {
.min = 1,
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.max = 65535,
-#else
- .max = 128,
-#endif
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 48,
.increment = 1
-#else
- .min = 24,
- .max = 24,
+ },
+ .iv_size = { 0 }
+ }, }
+ }, }
+ },
+ { /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 128,
+ .key_size = {
+ .min = 0,
+ .max = 0,
.increment = 0
-#endif
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 48,
+ .increment = 1
},
.iv_size = { 0 }
}, }
@@ -172,23 +210,34 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.block_size = 128,
.key_size = {
.min = 1,
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.max = 65535,
-#else
- .max = 128,
-#endif
.increment = 1
},
.digest_size = {
-#if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0)
.min = 1,
.max = 64,
.increment = 1
-#else
- .min = 32,
- .max = 32,
+ },
+ .iv_size = { 0 }
+ }, }
+ }, }
+ },
+ { /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 128,
+ .key_size = {
+ .min = 0,
+ .max = 0,
.increment = 0
-#endif
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
},
.iv_size = { 0 }
}, }
@@ -416,6 +465,31 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
}, }
}, }
},
+ { /* AES GMAC (AUTH) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 8,
+ .max = 16,
+ .increment = 4
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -595,7 +669,28 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
goto qp_setup_cleanup;
}
- qp->op_fns = &job_ops[internals->vector_mode];
+ switch (internals->vector_mode) {
+ case RTE_AESNI_MB_SSE:
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(qp->mb_mgr);
+ break;
+ case RTE_AESNI_MB_AVX:
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(qp->mb_mgr);
+ break;
+ case RTE_AESNI_MB_AVX2:
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx2(qp->mb_mgr);
+ break;
+ case RTE_AESNI_MB_AVX512:
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX512;
+ init_mb_mgr_avx512(qp->mb_mgr);
+ break;
+ default:
+ AESNI_MB_LOG(ERR, "Unsupported vector mode %u\n",
+ internals->vector_mode);
+ goto qp_setup_cleanup;
+ }
qp->ingress_queue = aesni_mb_pmd_qp_create_processed_ops_ring(qp,
qp_conf->nb_descriptors, socket_id);
@@ -613,13 +708,11 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
"digest_mp_%u_%u", dev->data->dev_id, qp_id);
- /* Initialise multi-buffer manager */
- (*qp->op_fns->job.init_mgr)(qp->mb_mgr);
return 0;
qp_setup_cleanup:
if (qp) {
- if (qp->mb_mgr == NULL)
+ if (qp->mb_mgr)
free_mb_mgr(qp->mb_mgr);
rte_free(qp);
}
@@ -663,7 +756,7 @@ aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
return -ENOMEM;
}
- ret = aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+ ret = aesni_mb_set_session_parameters(internals->mb_mgr,
sess_private_data, xform);
if (ret != 0) {
AESNI_MB_LOG(ERR, "failed configure session parameters");
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
index d8021cdaa..6a5df942c 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -25,6 +25,7 @@ int aesni_mb_logtype_driver;
/* Maximum length for digest */
#define DIGEST_LENGTH_MAX 64
static const unsigned auth_blocksize[] = {
+ [NULL_HASH] = 0,
[MD5] = 64,
[SHA1] = 64,
[SHA_224] = 64,
@@ -33,6 +34,13 @@ static const unsigned auth_blocksize[] = {
[SHA_512] = 128,
[AES_XCBC] = 16,
[AES_CCM] = 16,
+ [AES_CMAC] = 16,
+ [AES_GMAC] = 16,
+ [PLAIN_SHA1] = 64,
+ [PLAIN_SHA_224] = 64,
+ [PLAIN_SHA_256] = 64,
+ [PLAIN_SHA_384] = 128,
+ [PLAIN_SHA_512] = 128
};
/**
@@ -57,7 +65,13 @@ static const unsigned auth_truncated_digest_byte_lengths[] = {
[AES_XCBC] = 12,
[AES_CMAC] = 12,
[AES_CCM] = 8,
- [NULL_HASH] = 0
+ [NULL_HASH] = 0,
+ [AES_GMAC] = 16,
+ [PLAIN_SHA1] = 20,
+ [PLAIN_SHA_224] = 28,
+ [PLAIN_SHA_256] = 32,
+ [PLAIN_SHA_384] = 48,
+ [PLAIN_SHA_512] = 64
};
/**
@@ -82,8 +96,14 @@ static const unsigned auth_digest_byte_lengths[] = {
[SHA_512] = 64,
[AES_XCBC] = 16,
[AES_CMAC] = 16,
+ [AES_CCM] = 16,
[AES_GMAC] = 12,
- [NULL_HASH] = 0
+ [NULL_HASH] = 0,
+ [PLAIN_SHA1] = 20,
+ [PLAIN_SHA_224] = 28,
+ [PLAIN_SHA_256] = 32,
+ [PLAIN_SHA_384] = 48,
+ [PLAIN_SHA_512] = 64
};
/**
@@ -115,6 +135,8 @@ struct aesni_mb_private {
/**< CPU vector instruction set mode */
unsigned max_nb_queue_pairs;
/**< Max number of queue pairs supported by device */
+ MB_MGR *mb_mgr;
+ /**< Multi-buffer instance */
};
/** AESNI Multi buffer queue pair */
@@ -123,8 +145,6 @@ struct aesni_mb_qp {
/**< Queue Pair Identifier */
char name[RTE_CRYPTODEV_NAME_MAX_LEN];
/**< Unique Queue Pair Name */
- const struct aesni_mb_op_fns *op_fns;
- /**< Vector mode dependent pointer table of the multi-buffer APIs */
MB_MGR *mb_mgr;
/**< Multi-buffer instance */
struct rte_ring *ingress_queue;
@@ -238,7 +258,7 @@ struct aesni_mb_session {
*
*/
extern int
-aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
+aesni_mb_set_session_parameters(const MB_MGR *mb_mgr,
struct aesni_mb_session *sess,
const struct rte_crypto_sym_xform *xform);
--
2.24.1
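The dequeue path in this patch drives the intel-ipsec-mb job API directly through the IMB_* macros. A minimal standalone sketch of the submit/drain cycle (not part of the patch; job field setup for keys, IV and offsets is elided, see set_mb_job_params() above for the real logic):

    #include <intel-ipsec-mb.h>

    /* Submit the job most recently obtained and filled in via
     * IMB_GET_NEXT_JOB(), then drain whatever completed jobs the
     * library hands back. Returns the completed count, or -1 on a
     * failed job. */
    static int
    submit_and_drain(MB_MGR *mgr)
    {
        int completed = 0;
        JOB_AES_HMAC *job = IMB_SUBMIT_JOB(mgr);

        while (job != NULL) {
            if (job->status != STS_COMPLETED)
                return -1;
            completed++;
            job = IMB_GET_COMPLETED_JOB(mgr);
        }
        return completed;
    }

When IMB_GET_NEXT_JOB() returns NULL the manager has no free job structures and IMB_FLUSH_JOB() must be called to reclaim completed ones, which is what the flush_mb_mgr() hunk above does; the IMB_SUBMIT_JOB_NOCHECK() variant used in non-debug builds skips the library's parameter validation.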
* [dpdk-dev] [RFC PATCH 5/5] crypto/aesni_gcm: support IPSec MB library v0.53
2020-03-05 15:34 [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11 Pablo de Lara
` (3 preceding siblings ...)
2020-03-05 15:34 ` [dpdk-dev] [RFC PATCH 4/5] crypto/aesni_mb: support " Pablo de Lara
@ 2020-03-05 15:34 ` Pablo de Lara
2020-03-19 14:32 ` [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11 Kevin Traynor
5 siblings, 0 replies; 8+ messages in thread
From: Pablo de Lara @ 2020-03-05 15:34 UTC (permalink / raw)
To: dev; +Cc: Pablo de Lara
Add support for the underlying Intel IPSec Multi-buffer library v0.53.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
drivers/crypto/aesni_gcm/aesni_gcm_ops.h | 65 ++-------
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 130 +++++++++++++-----
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 4 +-
.../crypto/aesni_gcm/aesni_gcm_pmd_private.h | 4 +
4 files changed, 114 insertions(+), 89 deletions(-)
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index 450616698..b2cc4002e 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -17,14 +17,15 @@ enum aesni_gcm_vector_mode {
RTE_AESNI_GCM_SSE,
RTE_AESNI_GCM_AVX,
RTE_AESNI_GCM_AVX2,
+ RTE_AESNI_GCM_AVX512,
RTE_AESNI_GCM_VECTOR_NUM
};
enum aesni_gcm_key {
- AESNI_GCM_KEY_128,
- AESNI_GCM_KEY_192,
- AESNI_GCM_KEY_256,
- AESNI_GCM_KEY_NUM
+ GCM_KEY_128 = 0,
+ GCM_KEY_192,
+ GCM_KEY_256,
+ GCM_KEY_NUM
};
@@ -34,7 +35,7 @@ typedef void (*aesni_gcm_t)(const struct gcm_key_data *gcm_key_data,
const uint8_t *aad, uint64_t aad_len,
uint8_t *auth_tag, uint64_t auth_tag_len);
-typedef void (*aesni_gcm_precomp_t)(const void *key, struct gcm_key_data *gcm_data);
+typedef void (*aesni_gcm_pre_t)(const void *key, struct gcm_key_data *gcm_data);
typedef void (*aesni_gcm_init_t)(const struct gcm_key_data *gcm_key_data,
struct gcm_context_data *gcm_ctx_data,
@@ -57,60 +58,12 @@ typedef void (*aesni_gcm_finalize_t)(const struct gcm_key_data *gcm_key_data,
struct aesni_gcm_ops {
aesni_gcm_t enc; /**< GCM encode function pointer */
aesni_gcm_t dec; /**< GCM decode function pointer */
- aesni_gcm_precomp_t precomp; /**< GCM pre-compute */
+ aesni_gcm_pre_t pre; /**< GCM pre-compute */
aesni_gcm_init_t init;
aesni_gcm_update_t update_enc;
aesni_gcm_update_t update_dec;
- aesni_gcm_finalize_t finalize;
+ aesni_gcm_finalize_t finalize_enc;
+ aesni_gcm_finalize_t finalize_dec;
};
-#define AES_GCM_FN(keylen, arch) \
-aes_gcm_enc_##keylen##_##arch,\
-aes_gcm_dec_##keylen##_##arch,\
-aes_gcm_pre_##keylen##_##arch,\
-aes_gcm_init_##keylen##_##arch,\
-aes_gcm_enc_##keylen##_update_##arch,\
-aes_gcm_dec_##keylen##_update_##arch,\
-aes_gcm_enc_##keylen##_finalize_##arch,
-
-static const struct aesni_gcm_ops gcm_ops[RTE_AESNI_GCM_VECTOR_NUM][AESNI_GCM_KEY_NUM] = {
- [RTE_AESNI_GCM_NOT_SUPPORTED] = {
- [AESNI_GCM_KEY_128] = {NULL},
- [AESNI_GCM_KEY_192] = {NULL},
- [AESNI_GCM_KEY_256] = {NULL}
- },
- [RTE_AESNI_GCM_SSE] = {
- [AESNI_GCM_KEY_128] = {
- AES_GCM_FN(128, sse)
- },
- [AESNI_GCM_KEY_192] = {
- AES_GCM_FN(192, sse)
- },
- [AESNI_GCM_KEY_256] = {
- AES_GCM_FN(256, sse)
- }
- },
- [RTE_AESNI_GCM_AVX] = {
- [AESNI_GCM_KEY_128] = {
- AES_GCM_FN(128, avx_gen2)
- },
- [AESNI_GCM_KEY_192] = {
- AES_GCM_FN(192, avx_gen2)
- },
- [AESNI_GCM_KEY_256] = {
- AES_GCM_FN(256, avx_gen2)
- }
- },
- [RTE_AESNI_GCM_AVX2] = {
- [AESNI_GCM_KEY_128] = {
- AES_GCM_FN(128, avx_gen4)
- },
- [AESNI_GCM_KEY_192] = {
- AES_GCM_FN(192, avx_gen4)
- },
- [AESNI_GCM_KEY_256] = {
- AES_GCM_FN(256, avx_gen4)
- }
- }
-};
#endif /* _AESNI_GCM_OPS_H_ */
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index ebdf7c35a..2bda5a560 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -24,7 +24,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
const struct rte_crypto_sym_xform *auth_xform;
const struct rte_crypto_sym_xform *aead_xform;
uint8_t key_length;
- uint8_t *key;
+ const uint8_t *key;
/* AES-GMAC */
if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
@@ -89,20 +89,20 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
/* Check key length and calculate GCM pre-compute. */
switch (key_length) {
case 16:
- sess->key = AESNI_GCM_KEY_128;
+ sess->key = GCM_KEY_128;
break;
case 24:
- sess->key = AESNI_GCM_KEY_192;
+ sess->key = GCM_KEY_192;
break;
case 32:
- sess->key = AESNI_GCM_KEY_256;
+ sess->key = GCM_KEY_256;
break;
default:
AESNI_GCM_LOG(ERR, "Invalid key length");
return -EINVAL;
}
- gcm_ops[sess->key].precomp(key, &sess->gdata_key);
+ gcm_ops[sess->key].pre(key, &sess->gdata_key);
/* Digest check */
if (sess->req_digest_length > 16) {
@@ -195,6 +195,7 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
uint32_t offset, data_offset, data_length;
uint32_t part_len, total_len, data_len;
uint8_t *tag;
+ unsigned int oop = 0;
if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION ||
session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION) {
@@ -216,27 +217,30 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
RTE_ASSERT(m_src != NULL);
}
+ src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
+
data_len = m_src->data_len - offset;
part_len = (data_len < data_length) ? data_len :
data_length;
- /* Destination buffer is required when segmented source buffer */
- RTE_ASSERT((part_len == data_length) ||
- ((part_len != data_length) &&
- (sym_op->m_dst != NULL)));
- /* Segmented destination buffer is not supported */
RTE_ASSERT((sym_op->m_dst == NULL) ||
((sym_op->m_dst != NULL) &&
rte_pktmbuf_is_contiguous(sym_op->m_dst)));
-
- dst = sym_op->m_dst ?
- rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
- data_offset) :
- rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+ /* In-place */
+ if (sym_op->m_dst == NULL || (sym_op->m_dst == sym_op->m_src))
+ dst = src;
+ /* Out-of-place */
+ else {
+ oop = 1;
+ /*
+ * Segmented destination buffer is not supported if operation is
+ * Out-of-place
+ */
+ RTE_ASSERT(rte_pktmbuf_is_contiguous(sym_op->m_dst));
+ dst = rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
data_offset);
-
- src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
+ }
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
session->iv.offset);
@@ -254,12 +258,15 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
total_len = data_length - part_len;
while (total_len) {
- dst += part_len;
m_src = m_src->next;
RTE_ASSERT(m_src != NULL);
src = rte_pktmbuf_mtod(m_src, uint8_t *);
+ if (oop)
+ dst += part_len;
+ else
+ dst = src;
part_len = (m_src->data_len < total_len) ?
m_src->data_len : total_len;
@@ -274,7 +281,7 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
else
tag = sym_op->aead.digest.data;
- qp->ops[session->key].finalize(&session->gdata_key,
+ qp->ops[session->key].finalize_enc(&session->gdata_key,
&qp->gdata_ctx,
tag,
session->gen_digest_length);
@@ -291,12 +298,15 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
total_len = data_length - part_len;
while (total_len) {
- dst += part_len;
m_src = m_src->next;
RTE_ASSERT(m_src != NULL);
src = rte_pktmbuf_mtod(m_src, uint8_t *);
+ if (oop)
+ dst += part_len;
+ else
+ dst = src;
part_len = (m_src->data_len < total_len) ?
m_src->data_len : total_len;
@@ -308,7 +318,7 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
}
tag = qp->temp_digest;
- qp->ops[session->key].finalize(&session->gdata_key,
+ qp->ops[session->key].finalize_dec(&session->gdata_key,
&qp->gdata_ctx,
tag,
session->gen_digest_length);
@@ -322,7 +332,7 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
tag = qp->temp_digest;
else
tag = sym_op->auth.digest.data;
- qp->ops[session->key].finalize(&session->gdata_key,
+ qp->ops[session->key].finalize_enc(&session->gdata_key,
&qp->gdata_ctx,
tag,
session->gen_digest_length);
@@ -338,7 +348,7 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op,
* the bytes passed.
*/
tag = qp->temp_digest;
- qp->ops[session->key].finalize(&session->gdata_key,
+ qp->ops[session->key].finalize_enc(&session->gdata_key,
&qp->gdata_ctx,
tag,
session->gen_digest_length);
@@ -487,12 +497,8 @@ aesni_gcm_create(const char *name,
struct rte_cryptodev *dev;
struct aesni_gcm_private *internals;
enum aesni_gcm_vector_mode vector_mode;
+ MB_MGR *mb_mgr;
- /* Check CPU for support for AES instruction set */
- if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
- AESNI_GCM_LOG(ERR, "AES instructions not supported by CPU");
- return -EFAULT;
- }
dev = rte_cryptodev_pmd_create(name, &vdev->device, init_params);
if (dev == NULL) {
AESNI_GCM_LOG(ERR, "driver %s: create failed",
@@ -501,7 +507,9 @@ aesni_gcm_create(const char *name,
}
/* Check CPU for supported vector instruction set */
- if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F))
+ vector_mode = RTE_AESNI_GCM_AVX512;
+ else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
vector_mode = RTE_AESNI_GCM_AVX2;
else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
vector_mode = RTE_AESNI_GCM_AVX;
@@ -517,27 +525,74 @@ aesni_gcm_create(const char *name,
dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_CPU_AESNI |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+ /* Check CPU for support for AES instruction set */
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES))
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AESNI;
+ else
+ AESNI_GCM_LOG(WARNING, "AES instructions not supported by CPU");
+
+ mb_mgr = alloc_mb_mgr(0);
+ if (mb_mgr == NULL)
+ return -ENOMEM;
+
switch (vector_mode) {
case RTE_AESNI_GCM_SSE:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+ init_mb_mgr_sse(mb_mgr);
break;
case RTE_AESNI_GCM_AVX:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+ init_mb_mgr_avx(mb_mgr);
break;
case RTE_AESNI_GCM_AVX2:
dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx2(mb_mgr);
break;
- default:
+ case RTE_AESNI_GCM_AVX512:
+ dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+ init_mb_mgr_avx512(mb_mgr);
break;
+ default:
+ AESNI_GCM_LOG(ERR, "Unsupported vector mode %u\n", vector_mode);
+ goto error_exit;
}
internals = dev->data->dev_private;
internals->vector_mode = vector_mode;
+ internals->mb_mgr = mb_mgr;
+
+ /* Set arch independent function pointers, based on key size */
+ internals->ops[GCM_KEY_128].enc = mb_mgr->gcm128_enc;
+ internals->ops[GCM_KEY_128].dec = mb_mgr->gcm128_dec;
+ internals->ops[GCM_KEY_128].pre = mb_mgr->gcm128_pre;
+ internals->ops[GCM_KEY_128].init = mb_mgr->gcm128_init;
+ internals->ops[GCM_KEY_128].update_enc = mb_mgr->gcm128_enc_update;
+ internals->ops[GCM_KEY_128].update_dec = mb_mgr->gcm128_dec_update;
+ internals->ops[GCM_KEY_128].finalize_enc = mb_mgr->gcm128_enc_finalize;
+ internals->ops[GCM_KEY_128].finalize_dec = mb_mgr->gcm128_dec_finalize;
+
+ internals->ops[GCM_KEY_192].enc = mb_mgr->gcm192_enc;
+ internals->ops[GCM_KEY_192].dec = mb_mgr->gcm192_dec;
+ internals->ops[GCM_KEY_192].pre = mb_mgr->gcm192_pre;
+ internals->ops[GCM_KEY_192].init = mb_mgr->gcm192_init;
+ internals->ops[GCM_KEY_192].update_enc = mb_mgr->gcm192_enc_update;
+ internals->ops[GCM_KEY_192].update_dec = mb_mgr->gcm192_dec_update;
+ internals->ops[GCM_KEY_192].finalize_enc = mb_mgr->gcm192_enc_finalize;
+ internals->ops[GCM_KEY_192].finalize_dec = mb_mgr->gcm192_dec_finalize;
+
+ internals->ops[GCM_KEY_256].enc = mb_mgr->gcm256_enc;
+ internals->ops[GCM_KEY_256].dec = mb_mgr->gcm256_dec;
+ internals->ops[GCM_KEY_256].pre = mb_mgr->gcm256_pre;
+ internals->ops[GCM_KEY_256].init = mb_mgr->gcm256_init;
+ internals->ops[GCM_KEY_256].update_enc = mb_mgr->gcm256_enc_update;
+ internals->ops[GCM_KEY_256].update_dec = mb_mgr->gcm256_dec_update;
+ internals->ops[GCM_KEY_256].finalize_enc = mb_mgr->gcm256_enc_finalize;
+ internals->ops[GCM_KEY_256].finalize_dec = mb_mgr->gcm256_dec_finalize;
internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
@@ -549,6 +604,14 @@ aesni_gcm_create(const char *name,
#endif
return 0;
+
+error_exit:
+ if (mb_mgr)
+ free_mb_mgr(mb_mgr);
+
+ rte_cryptodev_pmd_destroy(dev);
+
+ return -1;
}
static int
@@ -576,6 +639,7 @@ static int
aesni_gcm_remove(struct rte_vdev_device *vdev)
{
struct rte_cryptodev *cryptodev;
+ struct aesni_gcm_private *internals;
const char *name;
name = rte_vdev_device_name(vdev);
@@ -586,6 +650,10 @@ aesni_gcm_remove(struct rte_vdev_device *vdev)
if (cryptodev == NULL)
return -ENODEV;
+ internals = cryptodev->data->dev_private;
+
+ free_mb_mgr(internals->mb_mgr);
+
return rte_cryptodev_pmd_destroy(cryptodev);
}
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index c343a393f..f599fc3f7 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -222,7 +222,7 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (aesni_gcm_pmd_qp_set_unique_name(dev, qp))
goto qp_setup_cleanup;
- qp->ops = (const struct aesni_gcm_ops *)gcm_ops[internals->vector_mode];
+ qp->ops = (const struct aesni_gcm_ops *)internals->ops;
qp->processed_pkts = aesni_gcm_pmd_qp_create_processed_pkts_ring(qp,
qp_conf->nb_descriptors, socket_id);
@@ -277,7 +277,7 @@ aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
"Couldn't get object from session mempool");
return -ENOMEM;
}
- ret = aesni_gcm_set_session_parameters(gcm_ops[internals->vector_mode],
+ ret = aesni_gcm_set_session_parameters(internals->ops,
sess_private_data, xform);
if (ret != 0) {
AESNI_GCM_LOG(ERR, "failed configure session parameters");
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 92b041354..fd43df3cf 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -35,6 +35,10 @@ struct aesni_gcm_private {
/**< Vector mode */
unsigned max_nb_queue_pairs;
/**< Max number of queue pairs supported by device */
+ MB_MGR *mb_mgr;
+ /**< Multi-buffer instance */
+ struct aesni_gcm_ops ops[GCM_KEY_NUM];
+ /**< Function pointer table of the gcm APIs */
};
struct aesni_gcm_qp {
--
2.24.1
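With the static gcm_ops table removed, the GCM function pointers are now taken from the MB_MGR at device creation. A condensed sketch of the wiring for the 128-bit entry (not part of the patch; it mirrors the aesni_gcm_create() hunk above, and the 192- and 256-bit entries are filled the same way):

    #include <intel-ipsec-mb.h>
    /* aesni_gcm_ops.h defines struct aesni_gcm_ops */

    static void
    bind_gcm128_ops(struct aesni_gcm_ops *ops, MB_MGR *mb_mgr)
    {
        ops->enc          = mb_mgr->gcm128_enc;
        ops->dec          = mb_mgr->gcm128_dec;
        ops->pre          = mb_mgr->gcm128_pre;
        ops->init         = mb_mgr->gcm128_init;
        ops->update_enc   = mb_mgr->gcm128_enc_update;
        ops->update_dec   = mb_mgr->gcm128_dec_update;
        ops->finalize_enc = mb_mgr->gcm128_enc_finalize;
        ops->finalize_dec = mb_mgr->gcm128_dec_finalize;
    }

A session then expands its key once with ops->pre(key, &gdata_key) and, as in the process_gcm_crypto_op() hunks, finishes each operation with the direction-specific finalize_enc()/finalize_dec() instead of the single finalize() pointer the old table exposed.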
* Re: [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11
2020-03-05 15:34 [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11 Pablo de Lara
` (4 preceding siblings ...)
2020-03-05 15:34 ` [dpdk-dev] [RFC PATCH 5/5] crypto/aesni_gcm: " Pablo de Lara
@ 2020-03-19 14:32 ` Kevin Traynor
2020-03-20 15:14 ` De Lara Guarch, Pablo
5 siblings, 1 reply; 8+ messages in thread
From: Kevin Traynor @ 2020-03-19 14:32 UTC (permalink / raw)
To: Pablo de Lara, stable; +Cc: Luca Boccassi, dev
+stable@dpdk.org
Hi Pablo,
Sorry, but I'm not comfortable with these patches for 18.11.
On 05/03/2020 15:34, Pablo de Lara wrote:
> This patchset adds support to the following crypto PMDs to use
> Intel IPSec MB v0.53, in DPDK v18.11:
> - AESNI MB PMD: had support up to v0.52, extending to v0.53
> - AESNI GCM PMD: had support up to v0.52, extending to v0.53
For the AES ones, it looks like it is removing support for <0.50? I'm
also not clear if it's changing the default or not. The patches are
very intrusive too. My concern is that it might break backwards
compatibility and introduce regressions.
> - SNOW3G PMD: linking now to IPSec MB v0.53, instead of libsso
> - ZUC PMD: linking now to IPSec MB v0.53, instead of libsso
> - KASUMI PMD: linking now to IPSec MB v0.53, instead of libsso
>
Aren't these the ones we discussed offline? If so, Luca and I both
commented that this will break build for existing users and is not a
backwards compatible change that could be put on stable branches.
> Pablo de Lara (5):
> crypto/zuc: use IPSec MB library v0.53
> crypto/snow3g: use IPSec MB library v0.53
> crypto/kasumi: use IPSec MB library v0.53
> crypto/aesni_mb: support IPSec MB library v0.53
> crypto/aesni_gcm: support IPSec MB library v0.53
>
> devtools/test-build.sh | 14 +-
> doc/guides/cryptodevs/kasumi.rst | 62 +--
> doc/guides/cryptodevs/snow3g.rst | 58 ++-
> doc/guides/cryptodevs/zuc.rst | 52 +-
> drivers/crypto/aesni_gcm/aesni_gcm_ops.h | 65 +--
> drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 130 +++--
> drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 4 +-
> .../crypto/aesni_gcm/aesni_gcm_pmd_private.h | 4 +
> drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 476 ++++++++++++------
> .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 205 +++++---
> .../aesni_mb/rte_aesni_mb_pmd_private.h | 30 +-
> drivers/crypto/kasumi/Makefile | 26 +-
> drivers/crypto/kasumi/meson.build | 11 +-
> drivers/crypto/kasumi/rte_kasumi_pmd.c | 79 +--
> drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 8 +-
> .../crypto/kasumi/rte_kasumi_pmd_private.h | 12 +-
> drivers/crypto/snow3g/Makefile | 29 +-
> drivers/crypto/snow3g/meson.build | 21 +
> drivers/crypto/snow3g/rte_snow3g_pmd.c | 79 +--
> drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 8 +-
> .../crypto/snow3g/rte_snow3g_pmd_private.h | 12 +-
> drivers/crypto/zuc/Makefile | 28 +-
> drivers/crypto/zuc/meson.build | 13 +-
> drivers/crypto/zuc/rte_zuc_pmd.c | 58 ++-
> drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +
> drivers/crypto/zuc/rte_zuc_pmd_private.h | 6 +-
> mk/rte.app.mk | 6 +-
> 27 files changed, 972 insertions(+), 526 deletions(-)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is a *lot* of code churn
> create mode 100644 drivers/crypto/snow3g/meson.build
>
thanks,
Kevin.
* Re: [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11
2020-03-19 14:32 ` [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK 18.11 Kevin Traynor
@ 2020-03-20 15:14 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 8+ messages in thread
From: De Lara Guarch, Pablo @ 2020-03-20 15:14 UTC (permalink / raw)
To: Kevin Traynor, stable; +Cc: Luca Boccassi, dev
Hi Kevin,
> -----Original Message-----
> From: Kevin Traynor <ktraynor@redhat.com>
> Sent: Thursday, March 19, 2020 2:32 PM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; stable@dpdk.org
> Cc: Luca Boccassi <bluca@debian.org>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC PATCH 0/5] Support Intel IPSec MB v0.53 in DPDK
> 18.11
>
> +stable@dpdk.org
>
> Hi Pablo,
>
> Sorry, but I'm not comfortable with these patches for 18.11.
>
> On 05/03/2020 15:34, Pablo de Lara wrote:
> > This patchset adds support to the following crypto PMDs to use Intel
> > IPSec MB v0.53, in DPDK v18.11:
> > - AESNI MB PMD: had support up to v0.52, extending to v0.53
> > - AESNI GCM PMD: had support up to v0.52, extending to v0.53
>
> For the AES ones, it looks like it is removing support for <0.50? I'm also not clear
> if it's changing the default or not. The patches are very intrusive too. My
> concern is that it might break backwards compatibility and introduce
> regressions.
>
> > - SNOW3G PMD: linking now to IPSec MB v0.53, instead of libsso
> > - ZUC PMD: linking now to IPSec MB v0.53, instead of libsso
> > - KASUMI PMD: linking now to IPSec MB v0.53, instead of libsso
> >
>
> Aren't these the ones we discussed offline? If so, Luca and I both commented
> that this will break build for existing users and is not a backwards compatible
> change that could be put on stable branches.
No problem, we can park these patches and possibly explore a way to introduce them into 18.11 in the future (probably not, given that 18.11 support will be dropped in a few months).
Having these patches available publicly is OK for us, even if they don't get merged.
Thanks,
Pablo