DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework
@ 2015-08-20 14:07 Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Declan Doherty @ 2015-08-20 14:07 UTC
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This series of patches proposes a set of application burst-oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

The patch set also includes two reference implementations of crypto PMDs.
Both are still early in development, but they act as examples of how we
envisage the APIs and device framework being used. Currently both
implementations only support AES128-CBC with HMAC_SHA1/SHA256/SHA512
authentication operations. The first device is a purely software PMD based on
Intel's multi-buffer library, which utilises both AES-NI instructions and
vector operations to accelerate crypto operations; the second PMD utilises
Intel's Quick Assist Technology (on DH895xxC) to provide hardware-accelerated
crypto operations.

The proposed API set supports two functional modes of operation:

1. A session-oriented mode. In this mode the user creates a crypto session in
advance, which defines all the immutable data required to perform a particular
crypto operation: the cipher/hash algorithms and operations to be performed,
the keys to be used, etc. The session is then referenced by the crypto
operation data structure, a structure specific to each mbuf. It contains all
the mutable data about the crypto operation to be performed, such as the data
offsets and lengths into the mbuf's data payload for the cipher and hash
operations.

2. A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without creating a cached session in advance, at the
cost of calculating authentication pre-computes and performing key expansions
in-line with the crypto operation. Only the crypto operation data structure
must be completed in this mode, but all of the immutable crypto operation
parameters that would normally be set within a session are now specified
within the crypto operation data structure. Once all mutable and immutable
parameters are set, the crypto operation data structure can be attached to the
specified mbuf and enqueued on a specified crypto device for processing (see
the sketch below).
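
To make the two modes concrete, below is a minimal sketch of a session-less
AES128-CBC + HMAC-SHA1 cipher-then-hash setup, using the structure and enum
names defined in patch 1 of this series. The op_pool mempool and the
aes_key/hmac_key/iv buffers are assumed to have been set up by the
application, and the final mbuf-attach and enqueue calls are omitted here as
their exact prototypes are defined in the patches themselves:

	struct rte_crypto_op_data *op = rte_crypto_op_alloc(op_pool);

	/* immutable parameters, carried in-line instead of in a session */
	op->op_params.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
	op->op_params.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
	op->op_params.cipher.key.data = aes_key;
	op->op_params.cipher.key.length = 16;		/* AES128 */

	op->op_params.hash.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
	op->op_params.hash.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
	op->op_params.hash.auth_key.data = hmac_key;
	op->op_params.hash.auth_key.length = 20;
	op->op_params.hash.digest_length = 20;		/* SHA-1 digest */

	op->op_params.opchain = RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH;

	/* mutable, per-mbuf parameters */
	op->data.to_cipher.offset = 0;
	op->data.to_cipher.length = payload_len;
	op->data.to_hash.offset = 0;
	op->data.to_hash.length = payload_len;
	op->iv.data = iv;
	op->iv.length = 16;

In the session-oriented mode the op_params block above is replaced by a single
rte_crypto_op_attach_session(op, sess) call, where sess was created once in
advance with rte_cryptodev_session_create().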

The patch set contains the following features:
- Crypto device APIs and device framework
- Example implementation of a software crypto PMD based on the multi-buffer
  library
- Example implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs


Current Status: The patch set has only been compiled and tested with 64-bit
gcc. There is no support for chained mbufs, and as mentioned above the PMDs
currently only implement support for AES128-CBC and HMAC_SHA1/SHA256/SHA512.
At this stage we are looking for feedback on the proposed APIs and the
framework implementations.


Declan Doherty (3):
  cryptodev: Initial DPDK Crypto APIs and device framework release
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests

John Griffin (1):
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.

 app/test/Makefile                                  |    7 +-
 app/test/test.c                                    |   91 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1079 +++++++++++++++
 app/test/test_cryptodev_perf.c                     | 1438 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 config/common_bsdapp                               |   30 +-
 config/common_linuxapp                             |   29 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 ++
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  155 +++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   67 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  206 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  550 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  346 +++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  224 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    5 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  173 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  305 +++++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  124 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  462 +++++++
 drivers/crypto/qat/qat_crypto.c                    |  469 +++++++
 drivers/crypto/qat/qat_crypto.h                    |   99 ++
 drivers/crypto/qat/qat_logs.h                      |   78 ++
 drivers/crypto/qat/qat_qp.c                        |  372 +++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    5 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  128 ++
 lib/Makefile                                       |    1 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  649 +++++++++
 lib/librte_cryptodev/rte_crypto_version.map        |   40 +
 lib/librte_cryptodev/rte_cryptodev.c               |  966 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  550 ++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  622 +++++++++
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_mbuf/rte_mbuf.c                         |    1 +
 lib/librte_mbuf/rte_mbuf.h                         |   51 +
 mk/rte.app.mk                                      |    8 +
 48 files changed, 10346 insertions(+), 50 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_crypto_version.map
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h

-- 
1.9.3


* [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
@ 2015-08-20 14:07 ` Declan Doherty
  2015-08-20 19:07   ` Neil Horman
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Declan Doherty @ 2015-08-20 14:07 UTC
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for the allocation of the
   crypto operation structures used to specify the crypto operations to
   be performed on a particular mbuf.
 - Extension of mbuf to contain a crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations (see
   the sketch below).
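
A hedged sketch of the burst path follows. The exact prototypes live in
rte_cryptodev.h (not quoted in full in this text), so the argument types below
are assumptions based on the exported symbols and on the mbuf extension that
carries the crypto operation pointer:

	/* each mbuf in pkts[] carries a prepared rte_crypto_op_data; hand
	 * the burst to queue pair qp_id of device dev_id, then poll the
	 * same queue pair for completed operations */
	uint16_t nb_tx = rte_cryptodev_enqueue_burst(dev_id, qp_id,
			pkts, nb_pkts);
	...
	uint16_t nb_rx = rte_cryptodev_dequeue_burst(dev_id, qp_id,
			pkts, nb_tx);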

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                        |   9 +-
 config/common_linuxapp                      |   7 +
 doc/api/doxy-api-index.md                   |   1 +
 doc/api/doxy-api.conf                       |   1 +
 lib/Makefile                                |   1 +
 lib/librte_cryptodev/Makefile               |  60 ++
 lib/librte_cryptodev/rte_crypto.h           | 649 +++++++++++++++++++
 lib/librte_cryptodev/rte_crypto_version.map |  40 ++
 lib/librte_cryptodev/rte_cryptodev.c        | 966 ++++++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h        | 550 ++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h    | 622 ++++++++++++++++++
 lib/librte_eal/common/include/rte_log.h     |   1 +
 lib/librte_eal/common/include/rte_memory.h  |  14 +-
 lib/librte_mbuf/rte_mbuf.c                  |   1 +
 lib/librte_mbuf/rte_mbuf.h                  |  51 ++
 mk/rte.app.mk                               |   1 +
 16 files changed, 2971 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_crypto_version.map
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..ed30180 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_MAX_CRYPTODEVS=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..12a75c6 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -145,6 +145,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_MAX_CRYPTODEVS=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 2055539..9e5f484 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..6ed9b76
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_crypto_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..b776609
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,649 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/**
+ * This enumeration lists the crypto operation chainings supported by rte
+ * crypto devices. The operation chain is defined during session creation and
+ * cannot be changed once a session has been set up; for a session-less
+ * crypto operation it is defined within the crypto operation op_params.
+ */
+enum rte_crypto_operation_chain {
+	RTE_CRYPTO_SYM_OP_CIPHER_ONLY,
+	/**< Cipher only operation on the data */
+	RTE_CRYPTO_SYM_OP_HASH_ONLY,
+	/**< Hash only operation on the data */
+	RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER,
+	/**<
+	 * Chain a hash followed by any cipher operation.
+	 *
+	 * If it is required that the result of the hash (i.e. the digest)
+	 * is going to be included in the data to be ciphered, then:
+	 *
+	 * - The digest MUST be placed in the destination buffer at the
+	 *   location corresponding to the end of the data region to be hashed
+	 *   (hash_start_offset + message length to hash),  i.e. there must be
+	 *   no gaps between the start of the digest and the end of the data
+	 *   region to be hashed.
+	 *
+	 * - The message length to cipher member of the rte_crypto_op_data
+	 *   structure must be equal to the overall length of the plain text,
+	 *   the digest length and any (optional) trailing data that is to be
+	 *   included.
+	 *
+	 * - The message length to cipher must be a multiple of the block
+	 *   size if a block cipher is being used - the implementation does not
+	 *   pad.
+	 */
+	RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH,
+	/**<
+	 * Chain any cipher followed by any hash operation. The hash operation
+	 * will be performed on the ciphertext resulting from the cipher
+	 * operation.
+	 */
+};
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_SYM_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_SYM_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_SYM_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_CCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_params* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_GCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_params* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_SYM_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and
+ * Decryption) used to create a session.
+ */
+struct rte_crypto_cipher_params {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid. */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Hash / Authentication Algorithms */
+enum rte_crypto_hash_algorithm {
+	RTE_CRYPTO_SYM_HASH_NONE = 0,
+	/**< No hash algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_SYM_HASH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_SYM_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_params structure in the
+	 * session context or the corresponding parameter in the crypto operation
+	 * data structures op_params parameter MUST be set for a session-less
+	 * crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_SYM_HASH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_params structure in the session context, or
+	 * the corresponding parameter in the crypto operation data structures
+	 * op_params parameter MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	 * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_params structure in the session context, or
+	 * the corresponding parameter in the crypto operation data structures
+	 * op_params parameter MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_SYM_HASH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_SYM_HASH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_SYM_HASH_SHA1,
+	/**< 160 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_SYM_HASH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Hash Operations */
+enum rte_crypto_hash_operation {
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY,	/**< Verify digest */
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE	/**< Generate digest */
+};
+
+/**
+ * Hash Setup Data.
+ *
+ * This structure contains data relating to a hash session. The op, algo and
+ * digest_length fields are common to all hash modes and MUST be set for each
+ * mode.
+ */
+struct rte_crypto_hash_params {
+	enum rte_crypto_hash_operation op;
+	/**< Hash operation type */
+	enum rte_crypto_hash_algorithm algo;
+	/**< Hashing algorithm selection */
+
+	struct rte_crypto_key auth_key;
+	/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if
+ * all operation information is included in the operation data structure
+ * op_params.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call to perform cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op_data {
+	enum rte_crypto_op_sess_type type;
+
+	struct rte_mbuf *dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct {
+			struct rte_crypto_cipher_params cipher;
+			struct rte_crypto_hash_params hash;
+			enum rte_crypto_operation_chain opchain;
+		} op_params;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct  {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location. */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_SYM_HASH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+			  * operation, this field specifies the start of the AAD data in
+			  * the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer that
+			  * the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member of this structure is set this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_hash_params structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up for
+		 * the session in the @ref rte_crypto_hash_params structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional  authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by the
+		 *   implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   padding to round this up to the nearest multiple of the
+		 *   block size (16 bytes).  Padding will be added by the
+		 *    implementation.
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+	} additional_auth; /**< Additional authentication parameters */
+
+	struct rte_mempool *pool;	/**< mempool used to allocate crypto op */
+};
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+rte_crypto_op_reset(struct rte_crypto_op_data *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+}
+
+static inline struct rte_crypto_op_data *
+__rte_crypto_op_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_crypto_op_data *)buf;
+}
+
+/**
+ * Allocate a crypto operation structure, which is used to define the crypto
+ * operation processing to be done on a packet.
+ *
+ * @param	mp	Mempool from which to allocate the crypto operation
+ *			structure.
+ *
+ * @return
+ *   A pointer to the allocated structure, or NULL on allocation failure.
+ */
+static inline struct rte_crypto_op_data *
+rte_crypto_op_alloc(struct rte_mempool *mp)
+{
+	struct rte_crypto_op_data *op;
+
+	if ((op = __rte_crypto_op_raw_alloc(mp)) != NULL)
+		rte_crypto_op_reset(op);
+	return op;
+}
+
+
+/**
+ * Free a crypto operation structure.
+ *
+ * @param	op	Crypto operation data structure to be freed
+ */
+static inline void
+rte_crypto_op_free(struct rte_crypto_op_data *op)
+{
+	if (op != NULL) {
+		rte_mempool_put(op->pool, op);
+	}
+}
+
+extern struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
+		int socket_id);
+
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op_data *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_crypto_version.map b/lib/librte_cryptodev/rte_crypto_version.map
new file mode 100644
index 0000000..c93fcad
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto_version.map
@@ -0,0 +1,40 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_count;
+	rte_cryptodev_configure;
+	rte_cryptodev_start;
+	rte_cryptodev_stop;
+	rte_cryptodev_close;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_info_get;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_create_crypto_op;
+	rte_cryptodev_crypto_op_free;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_virtual_dev_init;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_callback_process;
+
+	local: *;
+};
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..a1797ce
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,966 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+
+
+/* Macros to check for invalid function pointers in dev_ops structure */
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		CDEV_LOG_ERR("Function not supported"); \
+		return retval; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		CDEV_LOG_ERR("Cannot run in secondary processes"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		CDEV_LOG_ERR("Cannot run in secondary processes"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		CDEV_LOG_ERR("Function not supported"); \
+		return; \
+	} \
+} while (0)
+
+struct rte_cryptodev rte_crypto_devices[RTE_MAX_CRYPTODEVS];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= NULL,
+		.nb_devs		= 0,
+		.max_devs		= RTE_MAX_CRYPTODEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;                /**< Callback address */
+	void *cb_arg;                           /**< Parameter for callback */
+	enum rte_cryptodev_event_type event;          /**< Interrupt event type */
+	uint32_t active;                        /**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+
+static inline void
+rte_cryptodev_data_alloc(int socket_id)
+{
+	const unsigned flags = 0;
+	const struct rte_memzone *mz;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve("rte_cryptodev_data",
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data),
+				socket_id, flags);
+	} else
+		mz = rte_memzone_lookup("rte_cryptodev_data");
+	if (mz == NULL)
+		rte_panic("Cannot allocate memzone for the crypto device data");
+
+	cryptodev_globals.data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(cryptodev_globals.data, 0,
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data));
+}
+
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_MAX_CRYPTODEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_MAX_CRYPTODEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	uint8_t dev_id;
+	struct rte_cryptodev *cryptodev;
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_MAX_CRYPTODEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	if (cryptodev_globals.data == NULL)
+		rte_cryptodev_data_alloc(socket_id);
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+	cryptodev->data = &cryptodev_globals.data[dev_id];
+	snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s", name);
+	cryptodev->data->dev_id = dev_id;
+	cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+	cryptodev->pmd_type = type;
+	cryptodev_globals.nb_devs++;
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc("cryptodev private structure",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+			return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE, rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%u device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+			return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/* Register PCI driver for physical device initialisation during
+	 * PCI probing */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_attach(const char *devargs __rte_unused,
+			uint8_t *dev_id __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled");
+	return -1;
+}
+
+/* detach the device, then store the name of the device */
+int
+rte_cryptodev_pmd_detach(uint8_t dev_id __rte_unused,
+			char *name __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled");
+	return -1;
+}
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queue pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);
+
+		qp = dev->data->queue_pairs;
+
+		for (i = nb_qpairs; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_pair_release)(dev, i);
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_stop)(dev, queue_pair_id);
+}
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_mp_create, -ENOTSUP);
+	return (*dev->dev_ops->session_mp_create)(dev,
+				config->session_mp.nb_objs,
+				config->session_mp.cache_size,
+				config->socket_id);
+}
+
+static void
+rte_cryptodev_config_restore(uint8_t dev_id __rte_unused)
+{
+}
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	rte_cryptodev_config_restore(dev_id);
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+void
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_close)(dev);
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	return (*dev->dev_ops->session_create)(dev, cipher_setup_data,
+			hash_setup_data, op_chain);
+}
+
+void
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	(*dev->dev_ops->session_destroy)(dev, session);
+}
+
+
+static void
+rte_crypto_op_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_crypto_op_data *op_data = _op_data;
+
+	memset(op_data, 0, mp->elt_size);
+
+	op_data->pool = mp;
+}
+
+static void
+rte_crypto_op_pool_init(__rte_unused struct rte_mempool *mp,
+		__rte_unused void *opaque_arg)
+{
+}
+
+struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
+		int socket_id)
+{
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+	if (mp != NULL) {
+		if (mp->elt_size != sizeof(struct rte_crypto_op_data) ||
+				mp->cache_size < cache_size ||
+				mp->size < n) {
+			mp = NULL;
+			CDEV_LOG_ERR("%s mempool already exists with "
+					"incompatible initialisation parameters",
+					name);
+			return NULL;
+		}
+		CDEV_LOG_DEBUG("%s mempool already exists, reusing!", name);
+		return mp;
+	}
+
+	mp = rte_mempool_create(name,	/* mempool name */
+			n,			/* number of elements*/
+			sizeof(struct rte_crypto_op_data),/* element size*/
+			cache_size,			/* Cache size*/
+			0,				/* private data size */
+			rte_crypto_op_pool_init,	/* pool initialisation constructor */
+			NULL,				/* pool initialisation constructor argument */
+			rte_crypto_op_init,		/* obj constructor */
+			NULL,				/* obj constructor argument */
+			socket_id,			/* socket id */
+			0);				/* flags */
+
+	if (mp == NULL) {
+		CDEV_LOG_ERR("failed to allocate %s mempool", name);
+		return NULL;
+	}
+
+
+	CDEV_LOG_DEBUG("%s mempool created!", name);
+	return mp;
+}
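+
+/*
+ * Illustrative usage sketch (pool name and sizing are assumptions, not
+ * recommendations): an application would typically create a shared crypto
+ * operation pool once at initialisation time, sized for the expected
+ * number of in-flight operations.
+ *
+ *	struct rte_mempool *op_pool = rte_crypto_op_pool_create(
+ *			"crypto_op_pool", 8192, 128, rte_socket_id());
+ *	if (op_pool == NULL)
+ *		rte_exit(EXIT_FAILURE, "cannot create crypto op pool");
+ */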
+
+
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..d7694ad
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,550 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev_pmd.h"
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queue
+						* pairs supported by device.
+						*/
+};
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used to hold RDTSC-based timing counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;		/**< Accumulated time processing operations */
+	uint64_t t_min;			/**< Minimum time */
+	uint64_t t_max;			/**< Maximum time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf;	/**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Option arguments for the device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be in the range [0, rte_cryptodev_count() - 1].
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+static inline int
+rte_cryptodev_get_dev_id(const char *name) {
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0 &&
+				rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED)
+			return i;
+
+	return -1;
+}
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+static inline uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+static inline uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   The NUMA socket id to which the device is connected, or
+ *   a default of zero if the socket could not be determined.
+ *   -1 is returned if the dev_id value is out of range.
+ */
+static inline int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (dev->pci_dev)
+		return dev->pci_dev->numa_node;
+	else
+		return 0;
+}
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure,
+ *				including the number of queue pairs and the
+ *				session mempool parameters.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the device's processing units.
+ * On success, all basic functions exported by the API (enqueue/dequeue of
+ * operations and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start().
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pair to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the memory allocated for the
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
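+
+/*
+ * Illustrative bring-up sketch (not part of the API): the expected call
+ * order is configure, queue pair setup, then start. The configuration
+ * values below are placeholder assumptions, not recommended defaults.
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.nb_queue_pairs = 1,
+ *		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
+ *	};
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 4096 };
+ *
+ *	if (rte_cryptodev_configure(dev_id, &conf) != 0)
+ *		rte_exit(EXIT_FAILURE, "cannot configure cryptodev");
+ *	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *			SOCKET_ID_ANY) != 0)
+ *		rte_exit(EXIT_FAILURE, "cannot setup queue pair");
+ *	if (rte_cryptodev_start(dev_id) != 0)
+ *		rte_exit(EXIT_FAILURE, "cannot start cryptodev");
+ */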
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when the deferred_start flag of the specified queue pair is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop a specified queue pair of a device.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
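+
+/*
+ * Illustrative sketch: reading the basic I/O counters of a device.
+ *
+ *	struct rte_cryptodev_stats stats;
+ *
+ *	if (rte_cryptodev_stats_get(dev_id, &stats) == 0)
+ *		printf("enqueued %"PRIu64" dequeued %"PRIu64"\n",
+ *			stats.enqueued_count, stats.dequeued_count);
+ */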
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		The event of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		The event of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
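+
+/*
+ * Illustrative sketch: registering a handler for device error events.
+ * The handler name and its body are hypothetical.
+ *
+ *	static void
+ *	crypto_err_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			void *cb_arg)
+ *	{
+ *		RTE_LOG(ERR, USER1, "error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(dev_id, RTE_CRYPTODEV_EVENT_ERROR,
+ *			crypto_err_cb, NULL);
+ */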
+
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the output queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_crypto_devices[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
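+
+/*
+ * Illustrative sketch: draining a queue pair until fewer packets than
+ * requested are returned, per the policy described above.
+ *
+ *	struct rte_mbuf *pkts[32];
+ *	uint16_t nb;
+ *
+ *	do {
+ *		nb = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, 32);
+ *		// ... process the nb dequeued packets here ...
+ *	} while (nb == 32);
+ */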
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets
+ * it actually enqueued. A return value equal to *nb_pkts* means that all
+ * packets have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue pair is full or has been filled up.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_crypto_devices[dev_id];
+
+	return (*dev->enqueue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
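+
+/*
+ * Illustrative sketch: retrying when the queue pair is full and fewer
+ * than the requested number of packets are accepted.
+ *
+ *	uint16_t sent = 0;
+ *
+ *	while (sent < nb_to_send)
+ *		sent += rte_cryptodev_enqueue_burst(dev_id, 0,
+ *				&pkts[sent], nb_to_send - sent);
+ */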
+
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst() function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function.
+ * Memory to contain the session information is allocated by the
+ * implementation.
+ * An upper limit on the number of sessions that may be created is
+ * defined by a build configuration constant.
+ * rte_cryptodev_session_free() must be called to free the allocated
+ * memory when the session information is no longer needed.
+ *
+ * @param	dev_id			The device identifier.
+ * @param	cipher_setup_data	The parameters associated with the
+ *					cipher operation. This may be NULL.
+ * @param	hash_setup_data		The parameters associated with the hash
+ *					operation. This may be NULL.
+ * @param	op_chain		Specifies the crypto operation chaining,
+ *					cipher and/or hash and the order in
+ *					which they are performed.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain);
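+
+/*
+ * Illustrative sketch: creating a cipher+hash session. The parameter
+ * structures are assumed to have been populated elsewhere (e.g. with
+ * AES128-CBC and HMAC-SHA1 settings); the op_chain enum value shown is
+ * an assumption and must match the definition in rte_crypto.h.
+ *
+ *	struct rte_cryptodev_session *sess;
+ *
+ *	sess = rte_cryptodev_session_create(dev_id, &cipher_params,
+ *			&hash_params, RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "cannot create session");
+ */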
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ */
+extern void
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..e6fdd1c
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,622 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMDs only and user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n", \
+			__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			dev, __func__, __LINE__, ## args); \
+	} while (0)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n", \
+				__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_TRACE(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n", dev, __func__, ## args); \
+	} while (0)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_AESNI_MB_PMD = 1,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+
+/**
+ *
+ * The data part, with no function pointers, associated with each crypto device.
+ *
+ * This structure is safe to place in shared memory to be common among different
+ * processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;				/**< Device ID for this instance */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];	/**< Unique identifier name */
+	uint8_t dev_started : 1;		/**< Device state: STARTED(1)/STOPPED(0) */
+
+	void **queue_pairs;		/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;	/**< Number of device queue pairs. */
+
+	void *dev_private;		/**< PMD-specific private data */
+};
+
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ * 			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;	/**< Driver for this device */
+	struct rte_cryptodev_data *data;		/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;		/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;			/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;		/**< Crypto device type */
+	enum pmd_type pmd_type;				/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;	/**< Flag indicating the device is attached */
+};
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;	/**< Number of queue pairs to configure
+					* on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;		/**< Device information array */
+	struct rte_cryptodev_data *data;	/**< Device private data */
+	uint8_t nb_devs;			/**< Number of devices found */
+	uint8_t max_devs;			/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device name.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		if (rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED &&
+				strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0)
+			return &rte_cryptodev_globals->devs[i];
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached crypto
+ * device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures. The size of the pool
+ * is configured at compile-time in the <rte_cryptodev.c> file.
+ */
+extern struct rte_cryptodev rte_crypto_devices[];
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	dev_info	Crypto device info structure to populate
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ */
+typedef void (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		number of session objects in mempool
+ * @param	obj_cache_size	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket id to allocate mempool on.
+ *
+ * @return
+ * - 0 on success.
+ * - Negative value on failure.
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+/**
+ * Create a Crypto session on a device.
+ *
+ * @param	dev			Crypto device pointer
+ * @param	cipher_setup_data	Cipher operation parameters
+ * @param	hash_setup_data		Hash operation parameters
+ * @param	op_chain		Operation chaining
+ *
+ * @return
+ *  - Returns cryptodev session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef struct rte_cryptodev_session * (*cryptodev_create_session_t)(
+		struct rte_cryptodev *dev,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain);
+
+/**
+ * Free a Crypto session.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session		Cryptodev session structure to free
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		struct rte_cryptodev_session *session);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;		/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;		/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;		/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;		/**< Get count of the queue pairs. */
+
+	cryptodev_create_session_pool_t session_mp_create;	/**< Create a session mempool to allocate sessions from */
+	cryptodev_create_session_t session_create;		/**< Create a Crypto session. */
+	cryptodev_free_session_t session_destroy;		/**< Destroy a Crypto session. */
+};
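+
+/*
+ * Illustrative sketch: a PMD typically provides a statically initialised
+ * ops table wired to its own handlers (the names below are hypothetical)
+ * and assigns it to dev->dev_ops from its cryptodev_init_t function.
+ *
+ *	static struct rte_cryptodev_ops my_pmd_ops = {
+ *		.dev_configure = my_pmd_configure,
+ *		.dev_start = my_pmd_start,
+ *		.dev_stop = my_pmd_stop,
+ *		.dev_close = my_pmd_close,
+ *		.dev_infos_get = my_pmd_info_get,
+ *	};
+ */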
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_crypto_devices array for a new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int  socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+/**
+ * Attach a new device specified by arguments.
+ *
+ * @param devargs
+ *  A pointer to a string describing the new device
+ *  to be attached. The string should be a pci address like
+ *  '0000:01:00.0' or virtual device name like 'crypto_pcap0'.
+ * @param dev_id
+ *  A pointer to the identifier of the attached device.
+ * @return
+ *  0 on success and dev_id is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_attach(const char *devargs, uint8_t *dev_id);
+
+/**
+ * Detach a device specified by identifier.
+ *
+ * @param dev_id
+ *   The identifier of the device to detach.
+ * @param devname
+ *  A pointer to a buffer filled with the name of the detached device.
+ * @return
+ *  0 on success and devname is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_detach(uint8_t dev_id, char *devname);
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ * @param	crypto_drv	crypto_driver structure associated with the
+ *				crypto driver.
+ * @param	type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..c8e1d8a 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_align(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_align(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
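+
+/*
+ * Illustrative usage (not part of this patch's code): force a structure
+ * onto a 64-byte boundary, or pack it with no padding.
+ *
+ *	struct aligned_foo { uint64_t a; } __rte_align(64);
+ *	struct packed_foo { uint8_t a; uint32_t b; } __rte_packed;
+ */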
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index e416312..a8e9dbc 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -281,6 +281,7 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
+	case PKT_TX_CRYPTO_OP: return "PKT_TX_CRYPTO_OP";
 	case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
 	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
 	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8c2db1b..0468b7c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -108,10 +108,12 @@ extern "C" {
 #define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
 #define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
 #define PKT_RX_QINQ_PKT      (1ULL << 15)  /**< RX packet with double VLAN stripped. */
+#define PKT_RX_CRYPTO_DIGEST_BAD (1ULL << 16) /**< Crypto hash digest verification failed. */
 /* add new RX flags here */
 
 /* add new TX flags here */
 
+#define PKT_TX_CRYPTO_OP	(1ULL << 48) /**< Valid Crypto Operation attached to mbuf */
 /**
  * Second VLAN insertion (QinQ) flag.
  */
@@ -740,6 +742,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque declaration of the crypto accelerator operation structure */
+struct rte_crypto_op_data;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -867,6 +872,8 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+	/** Crypto accelerator operation */
+	struct rte_crypto_op_data *crypto_op;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
@@ -1648,6 +1655,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf.
+ *
+ * Before using this macro, the user must ensure that the mbuf data
+ * length is large enough to read its data at the given offset.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the start of the data in
+ * the mbuf.
+ *
+ * Before using this macro, the user must ensure that the mbuf data
+ * length is large enough to read its data.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
@@ -1816,6 +1850,23 @@ static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
  */
 void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
 
+
+
+/**
+ * Attach a crypto operation to a mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param op
+ *   The crypto operation data structure to attach.
+ */ 
+static inline void
+rte_pktmbuf_attach_crypto_op(struct rte_mbuf *m, struct rte_crypto_op_data *op)
+{
+	m->crypto_op = op;
+	m->ol_flags |= PKT_TX_CRYPTO_OP;
+}
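+
+/*
+ * Illustrative sketch: allocating a crypto operation from a pool created
+ * with rte_crypto_op_pool_create() and attaching it to an mbuf before
+ * enqueueing the mbuf on a crypto device. Population of the operation's
+ * session and offset fields is assumed to happen elsewhere.
+ *
+ *	struct rte_crypto_op_data *op;
+ *
+ *	if (rte_mempool_get(op_pool, (void **)&op) == 0) {
+ *		// ... fill in op (session, data offsets/lengths) ...
+ *		rte_pktmbuf_attach_crypto_op(m, op);
+ *		rte_cryptodev_enqueue_burst(dev_id, 0, &m, 1);
+ *	}
+ */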
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3871205..c7ee033 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lcryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MALLOC)         += -lrte_malloc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
-- 
1.9.3

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-08-20 14:07 ` Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests Declan Doherty
  3 siblings, 0 replies; 8+ messages in thread
From: Declan Doherty @ 2015-08-20 14:07 UTC (permalink / raw)
  To: dev

From: John Griffin <john.griffin@intel.com>

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.
This PMD will adhere to the cryptodev API (contained in a previous patch).
This patch depends on a QAT PF driver which may be downloaded from
01.org (please see the file qat_pf_driver_install.txt contained in
this patch).

This is a limited patch set which supports a chain of cipher and
hash operations; the following algorithms are supported:
Cipher algorithms:
 -   RTE_CRYPTO_SYM_CIPHER_AES128_CBC
 -   RTE_CRYPTO_SYM_CIPHER_AES256_CBC
 -   RTE_CRYPTO_SYM_CIPHER_AES512_CBC
Hash algorithms:
 -   RTE_CRYPTO_SYM_HASH_SHA1_HMAC
 -   RTE_CRYPTO_SYM_HASH_SHA256_HMAC
 -   RTE_CRYPTO_SYM_HASH_SHA512_HMAC

Some limitations of this patch set, which shall be addressed in a
subsequent release:
 -   Chained mbufs are not supported.
 -   Hash only is not supported.
 -   Cipher only is not supported.
 -   Only in-place is currently supported (destination address is the
     same as source address).
 -   Only supports session-oriented API implementation (session-less
     APIs are not supported).
 -   Not performance tuned.
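
For illustration, a minimal sketch of the session-oriented flow an
application would follow with this PMD. The session-create and burst
enqueue/dequeue calls below are assumed from the cryptodev API in
patch 1/4; exact names and field layouts may differ:

    /* immutable parameters, cached once per flow (assumed API name) */
    struct rte_cryptodev_session *sess =
            rte_cryptodev_session_create(dev_id, &cipher_hash_xform);

    /* mutable per-packet parameters (hypothetical field names) */
    op->session = sess;
    op->data.to_cipher.offset = cipher_off;
    op->data.to_cipher.length = cipher_len;
    op->data.to_hash.offset = auth_off;
    op->data.to_hash.length = auth_len;
    rte_pktmbuf_attach_crypto_op(m, op);

    /* enqueue on a QAT queue pair and poll for the processed mbuf */
    rte_cryptodev_enqueue_burst(dev_id, qp_id, &m, 1);
    while (rte_cryptodev_dequeue_burst(dev_id, qp_id, &m, 1) == 0)
            ;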

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |  13 +
 config/common_linuxapp                             |  15 +-
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 155 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  38 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 173 ++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 305 ++++++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 124 ++++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 462 ++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 469 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    |  99 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 ++++
 drivers/crypto/qat/qat_qp.c                        | 372 ++++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   5 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 128 ++++++
 mk/rte.app.mk                                      |   3 +
 21 files changed, 3265 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index ed30180..8fcc004 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -154,6 +154,19 @@ CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
 CONFIG_RTE_MAX_CRYPTOPORTS=32
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=y
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=200
+
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 12a75c6..7199c95 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -152,6 +152,19 @@ CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
 CONFIG_RTE_MAX_CRYPTODEVS=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=y
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=4096
+
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..e09145d
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,155 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+Future kernel versions will provide this as standard; in the interim the
+following steps are necessary to load this driver.
+
+
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume:
+
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * running Fedora 21 with kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running either:
+
+  *  ``./installer.sh uninstall`` in the directory where it was originally installed, or
+  *  ``rmmod qat_dh895xcc; rmmod intel_qat``
+
+Build and install the SRIOV-enabled QAT driver:
+
+.. code-block:: console
+
+    mkdir /QAT; cd /QAT
+    # copy qatmux.l.2.3.0-34.tgz to this location
+    tar zxof qatmux.l.2.3.0-34.tgz
+    export ICP_WITHOUT_IOMMU=1
+    ./installer.sh install QAT1.6 host
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices available per DH895xCC device. 
+
+The unbind command below assumes bdfs of 02:01.00-02:04.07, if yours are different adjust the unbind command below. 
+
+Make the VF devices available to DPDK:
+
+.. code-block:: console
+
+   cd $RTE_SDK    # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:02:0${device}.${fn} > /sys/bus/pci/devices/0000\:02\:0${device}.${fn}/driver/unbind;done ;done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+  
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
+
+
+Notes:
+If using a later kernel and the build fails with an error relating to ``strict_strtoul`` not being available, patch the following file:
+ 
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else 
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources, you may need to do the following:
+
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..eeb998e
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,38 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..d2b79c6
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,173 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) = (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
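+/*
+ * Worked example: for the default ring (ADF_RING_SIZE_16K == 0x08,
+ * i.e. 16384 bytes) and 64-byte messages (ADF_MSG_SIZE_64 == 0x02,
+ * since ADF_MSG_SIZE_TO_BYTES(0x02) == 64), ADF_SIZE_TO_POW(0x02) == 2
+ * and ADF_MAX_INFLIGHTS(0x08, 0x02) == (((1 << 7) << 3) >> 2) - 1
+ * == 255, i.e. one fewer than the 256 message slots in the ring.
+ */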
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..cc96d45
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
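+
+/*
+ * Example: the common header 'valid' flag lives at bit 7 of hdr_flags
+ * (see ICP_QAT_FW_COMN_VALID_FLAG_BITPOS/_MASK below), so
+ * QAT_FIELD_SET(hdr_flags, 1, 7, 0x1) marks a request as valid and
+ * QAT_FIELD_GET(hdr_flags, 7, 0x1) reads the flag back.
+ */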
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..7671465
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params_t))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..7f68557
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,305 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
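+/* Illustrative example (not part of the API): building the config word for
+ * HMAC-SHA256 in MODE1 with a 32-byte comparison length packs to
+ * (1 << 4) | 4 | (32 << 8) = 0x2014.
+ */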
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~((n) - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
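+/* SHA-512 has the largest state1/state2 sizes above, so its layout below
+ * doubles as the generic hash setup block in the content descriptor. */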
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
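+/* Illustrative examples (not part of the API): AES128-CBC encrypt with no
+ * key conversion packs to (1 << 4) | 3 = 0x13; the matching decrypt
+ * descriptor with KEY_CONVERT set packs to 0x13 | (1 << 8) | (1 << 9) = 0x313.
+ */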
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
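+/* AES-256 F8 needs the largest key buffer of the supported ciphers, so the
+ * block above is reused as the generic cipher setup block below. */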
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..3968d52
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,124 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
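+/* Note: decrypt descriptors set ICP_QAT_HW_CIPHER_KEY_CONVERT, which asks
+ * the engine to derive the AES decryption key schedule from the encryption
+ * key supplied in the content descriptor. */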
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf buffers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..7d5c9d3
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,462 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+	* Redistributions of source code must retain the above copyright
+	  notice, this list of conditions and the following disclaimer.
+	* Redistributions in binary form must reproduce the above copyright
+	  notice, this list of conditions and the following disclaimer in
+	  the documentation and/or other materials provided with the
+	  distribution.
+	* Neither the name of Intel Corporation nor the names of its
+	  contributors may be used to endorse or promote products derived
+	  from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+
+/* Returns the state1 size in bytes for the given hash algo, as used in the
+ * cd_ctrl state1 size field: the digest size rounded up to the nearest
+ * quadword.
+ */
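+/* e.g. SHA-1: QAT_HW_ROUND_UP(20, 8) = 24, padding the 20-byte digest */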
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* Returns the digest size in bytes for the given hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* Returns the block size in bytes for the given hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
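+/* The OpenSSL SHA*_Transform() helpers below run the compression function
+ * over exactly one input block with no length padding, and the SHA*_CTX
+ * structs keep the hash state words as their first members, so copying the
+ * leading digest-size bytes of the context extracts the partial state. */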
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
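+	/* C99 VLA sized for the largest supported digest (SHA-512, 64 bytes) */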
+	uint8_t digest[qat_hash_get_digest_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+	    PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+
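+/* Pre-compute the HMAC partial states per RFC 2104: state1 = H(key ^ ipad)
+ * and state2 = H(key ^ opad), letting the hardware resume each per-packet
+ * HMAC from these mid-states instead of re-hashing the padded key. */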
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
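+	/* C99 VLAs sized for the largest supported block (SHA-512, 128 bytes) */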
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	} else {
+		rte_memcpy(ipad, auth_key, auth_keylen);
+		rte_memcpy(opad, auth_key, auth_keylen);
+	}
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/* State len is a multiple of 8, so it may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+				uint8_t *cipherkey, uint32_t cipherkeylen,
+				uint8_t *authkey, uint32_t authkeylen,
+				uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t state_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode,
+			cdesc->qat_cipher_alg, key_convert, cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+		authkey, authkeylen, (uint8_t *)(hash->sha.state1), &state_size)) {
+		PMD_DRV_LOG(ERR, "precomputes failed");
+		return -EFAULT;
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
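+	/* content descriptor sizes and offsets are in 8-byte quadwords (>> 3) */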
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	default:
+	    PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+	hash_cd_ctrl->inner_state1_sz = state_size;
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+	auth_param->auth_res_sz = digestsize;
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+	    PMD_DRV_LOG(ERR, "invalid param, only authenticated encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..d026562
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,469 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift);
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+static void qat_crypto_sessionbuf_init(struct rte_mempool *mp, void *opaque_arg,
+		void *_s, unsigned i);
+
+void qat_crypto_sym_destroy_session(struct rte_cryptodev *dev,
+		struct rte_cryptodev_session *session)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (session != NULL && internals->sess_mp != NULL)
+		rte_mempool_put(internals->sess_mp, session);
+}
+
+struct rte_cryptodev_session *
+qat_crypto_sym_create_session(struct rte_cryptodev *dev,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_type)
+{
+	struct qat_session *session;
+	struct qat_pmd_private *internals = dev->data->dev_private;
+	enum icp_qat_hw_cipher_algo cipher_alg;
+	enum icp_qat_hw_auth_algo hash_alg;
+	enum icp_qat_hw_cipher_mode cipher_mode;
+	uint32_t digest_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_setup_data == NULL || cipher_setup_data == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters - currently only "
+				"authenticated encryption supported");
+		return NULL;
+	}
+	switch (cipher_setup_data->algo) {
+	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_setup_data->key.length, &cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			return NULL;
+		}
+		cipher_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_NULL:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_SYM_CIPHER_AES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CTR:
+	case RTE_CRYPTO_SYM_CIPHER_AES_GCM:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CCM:
+	case RTE_CRYPTO_SYM_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+						cipher_setup_data->algo);
+		return NULL;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+						cipher_setup_data->algo);
+		return NULL;
+	}
+	switch (hash_setup_data->algo) {
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+
+	case RTE_CRYPTO_SYM_HASH_NONE:
+	case RTE_CRYPTO_SYM_HASH_SHA1:
+	case RTE_CRYPTO_SYM_HASH_SHA256:
+	case RTE_CRYPTO_SYM_HASH_SHA512:
+	case RTE_CRYPTO_SYM_HASH_SHA224:
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+	case RTE_CRYPTO_SYM_HASH_SHA384:
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+	case RTE_CRYPTO_SYM_HASH_MD5:
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CCM:
+	case RTE_CRYPTO_SYM_HASH_AES_GCM:
+	case RTE_CRYPTO_SYM_HASH_KASUMI_F9:
+	case RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2:
+	case RTE_CRYPTO_SYM_HASH_AES_CMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_GMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CBC_MAC:
+	case RTE_CRYPTO_SYM_HASH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				hash_setup_data->algo);
+		return NULL;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				hash_setup_data->algo);
+		return NULL;
+	}
+
+	if (rte_mempool_get(internals->sess_mp, (void **)&session)) {
+		PMD_DRV_LOG(ERR, "Crypto: Failed to get session memory");
+		return NULL;
+	}
+
+	session->qat_cipher_alg = cipher_alg;
+	session->qat_hash_alg = hash_alg;
+	session->qat_mode = cipher_mode;
+	digest_size = hash_setup_data->digest_length;
+
+	if (cipher_setup_data->op == RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+	if (op_type == RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER)
+		session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+	else if (op_type == RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH)
+		session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+	else {
+		PMD_DRV_LOG(ERR, "Crypto: Invalid operation chaining - "
+				"only authenticate encryption supported");
+		goto error_out;
+	}
+	if (qat_alg_aead_session_create_content_desc(session,
+			cipher_setup_data->key.data,
+			cipher_setup_data->key.length,
+			hash_setup_data->auth_key.data,
+			hash_setup_data->auth_key.length,
+			digest_size) != 0) {
+		PMD_DRV_LOG(ERR, "Crypto: Failed to create session content descriptor");
+		goto error_out;
+	}
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+		unsigned nb_objs, unsigned obj_cache_size, int socket_id)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+	uint16_t qat_session_size = RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+
+	unsigned n = snprintf(internals->sess_mp_name,
+			sizeof(internals->sess_mp_name), "qat_pmd_%d_sess_mp",
+			dev->data->dev_id);
+
+	if (n >= sizeof(internals->sess_mp_name)) {
+		PMD_DRV_LOG(ERR, "Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+	internals->sess_mp = rte_mempool_lookup(internals->sess_mp_name);
+	if (internals->sess_mp != NULL) {
+		if (internals->sess_mp->elt_size != qat_session_size ||
+				internals->sess_mp->cache_size < obj_cache_size ||
+				internals->sess_mp->size < nb_objs) {
+
+			PMD_DRV_LOG(ERR, "%s mempool already exists with different "
+						"initialisation parameters",
+						internals->sess_mp_name);
+			return -ENOMEM;
+		}
+		return 0;
+	}
+
+	internals->sess_mp = rte_mempool_create(
+			internals->sess_mp_name,	/* mempool name */
+			nb_objs,			/* number of elements */
+			qat_session_size,		/* element size */
+			obj_cache_size,			/* per-lcore cache size */
+			0,				/* private data size */
+			NULL,				/* mempool constructor */
+			NULL,				/* mempool constructor argument */
+			qat_crypto_sessionbuf_init,	/* obj constructor */
+			NULL,				/* obj constructor argument */
+			socket_id,			/* socket id */
+			0);				/* flags */
+
+	if (internals->sess_mp == NULL) {
+		PMD_DRV_LOG(ERR, "%s mempool allocation failed",
+				internals->sess_mp_name);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t nb_pkts_sent = 0;
+	struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	int ret = 0;
+
+	queue = &(tmp_qp->tx_q);
+	while (nb_pkts_sent != nb_pkts) {
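+		/* reserve an in-flight slot up front; roll the reservation
+		 * back and stop if the ring is already full */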
+		if (rte_atomic16_add_return(&tmp_qp->inflights16, 1) > queue->max_inflights) {
+			rte_atomic16_sub(&tmp_qp->inflights16, 1);
+			if (nb_pkts_sent == 0)
+				return 0;
+			else
+				goto kick_tail;
+		}
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			(uint8_t *)queue->base_addr + queue->tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			else
+				goto kick_tail;
+		}
+
+		queue->tail = adf_modulo(queue->tail +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue->tail);
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+
+	resp_msg = (struct icp_qat_fw_comn_resp *)((uint8_t *)queue->base_addr + queue->head);
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG && msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+						resp_msg->comn_hdr.comn_status)) {
+			rx_mbuf->ol_flags |= PKT_RX_CRYPTO_DIGEST_BAD;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+					queue->msg_size,
+					ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)((uint8_t *)queue->base_addr + queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_crypto_op_data *rte_op_data = mbuf->crypto_op;
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	struct icp_qat_fw_la_bulk_req *qat_req;
+
+	if (unlikely(rte_op_data->type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented requests "
+				"mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+	ctx = (struct qat_session *)rte_op_data->session;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
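+	/* stash the mbuf pointer in opaque_data; qat_crypto_pkt_rx_burst()
+	 * recovers it from the same field of the firmware response */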
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length = qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr = qat_req->comn_mid.src_data_addr
+							= rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = rte_op_data->data.to_cipher.length;
+	cipher_param->cipher_offset = rte_op_data->data.to_cipher.offset;
+	if (rte_op_data->iv.length &&
+		(rte_op_data->iv.length <= sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array, rte_op_data->iv.data,
+							rte_op_data->iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = rte_op_data->iv.phys_addr;
+	}
+	if (rte_op_data->digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(qat_req->comn_hdr.serv_specif_flags,
+					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = rte_op_data->digest.phys_addr;
+	}
+	auth_param->auth_off = rte_op_data->data.to_hash.offset;
+	auth_param->auth_len = rte_op_data->data.to_hash.length;
+	return 0;
+}
+
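+/* adf_modulo(data, shift) computes data % (1 << shift), i.e.
+ * data & ((1 << shift) - 1): cheap wrap-around for power-of-two ring sizes. */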
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+static void qat_crypto_sessionbuf_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		 void *_s,
+		 __rte_unused unsigned i)
+{
+	struct qat_session *s = _s;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+void qat_dev_close(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+						struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE*ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..1be3f2f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,99 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#include <rte_memzone.h>
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;	 /* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+void qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+struct rte_cryptodev_session *
+qat_crypto_sym_create_session(struct rte_cryptodev *dev,
+	struct rte_crypto_cipher_params *cipher_setup_data,
+	struct rte_crypto_hash_params *hash_setup_data,
+	enum rte_crypto_operation_chain op_type);
+void qat_crypto_sym_destroy_session(struct rte_cryptodev *dev __rte_unused,
+	struct rte_cryptodev_session *session);
+
+uint16_t qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..04293e3
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..57aa461
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,372 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2 /* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+    uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *qp_name, uint32_t queue_size, int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(qp_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already allocated for %s", qp_name);
+			return mz;
+		} else {
+			PMD_DRV_LOG(ERR, "Incompatible memzone already allocated %s, "
+					"size %u, socket %d. Requested size %u, socket %u",
+					qp_name, (uint32_t)mz->len, mz->socket_id,
+					queue_size, socket_id);
+			return NULL;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					qp_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(qp_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(qp_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+
+	PMD_INIT_FUNC_TRACE();
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if ((dev->pci_dev->mem_resource == NULL) ||
+		(dev->pci_dev->mem_resource[0].addr == NULL)) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space (UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >= (ADF_NUM_SYM_QPS_PER_BUNDLE*ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		qat_crypto_sym_qp_release(dev, queue_pair_id);
+		dev->data->queue_pairs[queue_pair_id] = NULL;
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp queue", sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		goto create_err;
+	}
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp = (struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes, socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr, queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n", queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size)) != 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u ",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
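
For reviewers, the sizing contract enforced by adf_verify_queue_size()
and qat_qp_check_queue_alignment() above amounts to two checks: the ring
must occupy an exact power-of-two number of bytes (within the ADF
min/max ring-size bounds, omitted below) and its physical base address
must be naturally aligned to that size. A minimal standalone sketch,
where the addresses and sizes are illustrative assumptions rather than
values from this patch:

	#include <stdint.h>
	#include <stdio.h>

	static int ring_params_valid(uint64_t phys_addr, uint32_t nb_desc,
			uint32_t desc_size)
	{
		uint32_t bytes = nb_desc * desc_size;

		/* exact power-of-two number of bytes */
		if (bytes == 0 || (bytes & (bytes - 1)) != 0)
			return 0;
		/* base address naturally aligned to the ring size */
		if ((phys_addr & (bytes - 1)) != 0)
			return 0;
		return 1;
	}

	int main(void)
	{
		/* 4096 descriptors x 128 bytes = 512 KiB, aligned base: ok */
		printf("%d\n", ring_params_valid(0x80000000ULL, 4096, 128));
		/* misaligned base address: rejected */
		printf("%d\n", ring_params_valid(0x80001000ULL, 4096, 128));
		return 0;
	}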
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..fcf5bb3
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,5 @@
+DPDK_2.0 {
+	global:
+
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..b7e9c62
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+/* Crypto related operations */
+		.session_mp_create 	= qat_pmd_session_mempool_create,
+		.session_create		= qat_crypto_sym_create_session,
+		.session_destroy	= qat_crypto_sym_destroy_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/* for secondary processes, we don't initialise any further as the
+	 * primary process has already done this work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
+
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c7ee033..5502cc4 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -145,6 +145,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from OpenSSL) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
1.9.3

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-08-20 14:07 ` Declan Doherty
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests Declan Doherty
  3 siblings, 0 replies; 8+ messages in thread
From: Declan Doherty @ 2015-08-20 14:07 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.
This PMD depends on Intel's multi-buffer library; see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details on the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations "hash then cipher" or "cipher then
hash" (a usage sketch follows the algorithm list below) for the
following cipher and hash algorithms:

 - RTE_CRYPTO_SYM_CIPHER_AES128_CBC
 - RTE_CRYPTO_SYM_CIPHER_AES256_CBC
 - RTE_CRYPTO_SYM_CIPHER_AES512_CBC
 - RTE_CRYPTO_SYM_HASH_SHA1_HMAC
 - RTE_CRYPTO_SYM_HASH_SHA256_HMAC
 - RTE_CRYPTO_SYM_HASH_SHA512_HMAC
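
A hedged sketch of how an application might build a "cipher then hash"
session with the structures this PMD consumes is shown below. The struct,
field and enum names are taken from this patch set; the
rte_cryptodev_session_create() wrapper and dev_id handle are assumptions
standing in for the session-create entry point defined in patch 1/4, and
the keys are placeholders:

	uint8_t cipher_key[16];	/* 128-bit AES key, filled by application */
	uint8_t auth_key[20];	/* HMAC-SHA1 key, filled by application */

	struct rte_crypto_cipher_params cipher_params = {
		.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
		.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC,
		.key = { .data = cipher_key, .length = sizeof(cipher_key) },
	};

	struct rte_crypto_hash_params auth_params = {
		.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
		.auth_key = { .data = auth_key, .length = sizeof(auth_key) },
	};

	/* hypothetical wrapper name; the real entry point is in patch 1/4 */
	struct rte_cryptodev_session *sess = rte_cryptodev_session_create(
			dev_id, &cipher_params, &auth_params,
			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);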

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404
specifies that when HMAC-SHA-1 is used with IPsec the digest is
truncated from 20 to 12 bytes.
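
To make the truncation concrete, the following minimal sketch (an
illustration only, assuming OpenSSL's HMAC() from <openssl/hmac.h>, the
same libcrypto dependency the QAT PMD uses for its precomputes) computes
the full 20-byte HMAC-SHA1 digest and keeps only the leading 96 bits
(12 bytes) that RFC 2404 puts on the wire:

	#include <stdint.h>
	#include <stdio.h>
	#include <openssl/evp.h>
	#include <openssl/hmac.h>

	int main(void)
	{
		const uint8_t key[20] = { 0x0b };	/* toy key */
		const uint8_t msg[] = "example payload";
		uint8_t digest[20];	/* full HMAC-SHA1 output */
		unsigned int len = 0, i;

		/* compute the full 160-bit digest */
		HMAC(EVP_sha1(), key, sizeof(key), msg, sizeof(msg) - 1,
				digest, &len);

		/* RFC 2404: only the first 12 bytes are transmitted */
		for (i = 0; i < 12; i++)
			printf("%02x", digest[i]);
		printf("\n");
		return 0;
	}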

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the multi
buffer library and finally set CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in
config/common_linuxapp.

Current status: This is a work in progress, which has not been
performance tuned. The software has only been built and tested on
Fedora 20 64-bit using gcc. It doesn't support crypto operations across
chained mbufs, or cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   8 +
 config/common_linuxapp                             |   7 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 +++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   2 +-
 drivers/crypto/aesni_mb/Makefile                   |  67 +++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 206 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 550 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 346 +++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 224 +++++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   5 +
 mk/rte.app.mk                                      |   4 +
 12 files changed, 1495 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8fcc004..9c5e1e0 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -167,6 +167,14 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=y
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7199c95..8e9e8fd 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -165,6 +165,13 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=y
 #
 CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=4096
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..4d15b6b
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library, see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi buffer library and finally set
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
\ No newline at end of file
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index eeb998e..26325b0 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,7 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
-
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..62f51ce
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# export include files
+SYMLINK-y-include +=
+
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..ab96990
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,206 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+#include <gcm_defines.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *k1_exp, void *k2, void *k3);
+
+typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
+		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
+		u8 *auth_tag, u64 auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(gcm_data *my_ctx_data, u8 *hash_subkey);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;		/**< Initialise scheduler  */
+		get_next_job_t get_next;	/**< Get next free job structure */
+		submit_job_t submit;		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job; /**< Get completed job */
+		flush_job_t flush_job;		/**< flush jobs from manager */
+	} job; /**< multi buffer manager functions */
+	struct {
+		struct {
+			md5_one_block_t md5;		/**< MD5 one block hash */
+			sha1_one_block_t sha1;		/**< SHA1 one block hash */
+			sha224_one_block_t sha224;	/**< SHA224 one block hash */
+			sha256_one_block_t sha256;	/**< SHA256 one block hash */
+			sha384_one_block_t sha384;	/**< SHA384 one block hash */
+			sha512_one_block_t sha512;	/**< SHA512 one block hash */
+		} one_block; /**< one block hash functions */
+		struct {
+			aes_keyexp_128_t aes128;	/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;	/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;	/**< AES256 key expansions */
+			aes_xcbc_expand_key_t aes_xcbc;	/**< AES XCBC key expansions */
+		} keyexp;	/**< Key expansion functions */
+	} aux; /**< Auxiliary functions */
+	struct {
+		aesni_gcm_t enc;		/**< GCM encode */
+		aesni_gcm_t dec;		/**< GCM decode */
+		aesni_gcm_precomp_t precomp;	/**< GCM pre-compute */
+	} gcm; /**< GCM functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = { NULL },
+			.aux = {
+				.one_block = { NULL },
+				.keyexp = { NULL }
+			},
+			.gcm = { NULL
+			}
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_sse,
+				aesni_gcm_dec_sse,
+				aesni_gcm_precomp_sse
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+				.job = {
+					init_mb_mgr_avx,
+					get_next_job_avx,
+					submit_job_avx,
+					get_completed_job_avx,
+					flush_job_avx
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx,
+						sha1_one_block_avx,
+						sha224_one_block_avx,
+						sha256_one_block_avx,
+						sha384_one_block_avx,
+						sha512_one_block_avx
+					},
+					.keyexp = {
+						aes_keyexp_128_avx,
+						aes_keyexp_192_avx,
+						aes_keyexp_256_avx,
+						aes_xcbc_expand_key_avx
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen2,
+					aesni_gcm_dec_avx_gen2,
+					aesni_gcm_precomp_avx_gen2
+				}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+				.job = {
+					init_mb_mgr_avx2,
+					get_next_job_avx2,
+					submit_job_avx2,
+					get_completed_job_avx2,
+					flush_job_avx2
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx2,
+						sha1_one_block_avx2,
+						sha224_one_block_avx2,
+						sha256_one_block_avx2,
+						sha384_one_block_avx2,
+						sha512_one_block_avx2
+					},
+					.keyexp = {
+						aes_keyexp_128_avx2,
+						aes_keyexp_192_avx2,
+						aes_keyexp_256_avx2,
+						aes_xcbc_expand_key_avx2
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen4,
+					aesni_gcm_dec_avx_gen4,
+					aesni_gcm_precomp_avx_gen4
+				}
+		},
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
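
The job_ops[] array above is intended as a per-vector-mode dispatch
table: a queue pair binds to one entry when it is set up and every
fast-path call then goes through that entry, along the lines of the
(condensed) usage in rte_aesni_mb_pmd.c and rte_aesni_mb_pmd_ops.c later
in this patch:

	qp->mb_ops = &job_ops[internals->vector_mode];
	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);

	JOB_AES_HMAC *job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);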
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..65a3731
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,550 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id = 0;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		struct rte_crypto_cipher_params *cipher_params,
+		struct rte_crypto_hash_params *auth_params,
+		enum rte_crypto_operation_chain op_chain)
+{
+	aes_keyexp_t aes_keyexp_fn;
+	hash_one_block_t hash_oneblock_fn;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (op_chain) {
+	case RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		break;
+	case RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		break;
+	default:
+		printf("unsupported operation chain order parameter");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (cipher_params->op) {
+	case RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		printf("unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (cipher_params->algo) {
+	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		printf("unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (cipher_params->key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		printf("unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(cipher_params->key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	/* Set Authentication Parameters */
+	switch (auth_params->algo) {
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		printf("unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			auth_params->auth_key.data,
+			auth_params->auth_key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	job	JOB_AES_HMAC structure to fill
+ * @param	m	mbuf to process
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, struct rte_mbuf *m)
+{
+	struct rte_crypto_op_data *c_op = m->crypto_op;
+	struct aesni_mb_session *sess;
+
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		sess = aesni_mb_get_session(qp->sess_mp);
+		if (unlikely(sess == NULL))
+			return NULL;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
+				sess, &c_op->op_params.cipher,
+				&c_op->op_params.hash,
+				c_op->op_params.opchain) != 0))
+			return NULL;
+	} else {
+		sess = (struct aesni_mb_session *)c_op->session;
+	}
+
+	/* Set crypto operation */
+	job->chain_order = sess->chain_order;
+
+
+	/* Set cipher parameters */
+	job->cipher_direction = sess->cipher.direction;
+	job->cipher_mode = sess->cipher.mode;
+
+	job->aes_key_len_in_bytes = sess->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = sess->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = sess->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = sess->auth.algo;
+	job->hashed_auth_key_xor_ipad = sess->auth.pads.inner;
+	job->hashed_auth_key_xor_opad = sess->auth.pads.outer;
+
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/* Set digest output length (the IPsec-truncated length, see the
+	 * note in the commit message) */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data  Parameter */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst ? rte_pktmbuf_mtod(c_op->dst, uint8_t *) :
+			rte_pktmbuf_mtod(m, uint8_t *) + c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return rte_mbuf which job processed
+ *
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_job(JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+
+	if (job == NULL || job->user_data == NULL)
+		return NULL;
+
+	/* handled retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+
+	/* Verify digest if required */
+	if (job->chain_order == HASH_CIPHER) {
+		if (memcmp(job->auth_tag_output, m->crypto_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			m->ol_flags |= PKT_RX_CRYPTO_DIGEST_BAD;
+		else
+			m->ol_flags &= ~PKT_RX_CRYPTO_DIGEST_BAD;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job return NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_job(job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->mb_ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair,
+		struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+
+		if (unlikely(!(bufs[i]->ol_flags & PKT_TX_CRYPTO_OP))) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, job, bufs[i]);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->mb_ops->job.submit)(&qp->mb_mgr);
+		qp->qp_stats.enqueued_count++;
+
+		/* If submit return a processed job then handle it, before
+		 * submitting subsequent jobs */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.dequeued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/* if we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling */
+	job = (*qp->mb_ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.dequeued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+	unsigned i, nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+
+	for (i = 0; i < nb_dequeued; i++) {
+		/* Free session if a session-less crypto op */
+		if (bufs[i]->crypto_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+			aesni_mb_free_session(qp->sess_mp,
+					(struct aesni_mb_session *)bufs[i]->crypto_op->session);
+			bufs[i]->crypto_op->session = NULL;
+		}
+	}
+
+	return nb_dequeued;
+}
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_qpairs = AESNI_MB_MAX_NB_QUEUE_PAIRS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..fb57e7b
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,346 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return -ENOTSUP;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static void
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_queue_pairs = internals->max_nb_qpairs;
+	}
+}
+
+/** Release queue pair */
+static void
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+}
+
+/** Set a unique name for the queue pair based on its dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place process packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets", qp->name);
+			return r;
+		} else {
+			MB_LOG_ERR("Unable to reuse existing ring %s for processed packets", qp->name);
+			return NULL;
+		}
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->mb_ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = internals->sess_mp;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+
+static void
+session_obj_init(__rte_unused struct rte_mempool *mp,
+		__rte_unused void *user_arg, __rte_unused void *element,
+		__rte_unused unsigned element_idx)
+{
+}
+
+static int
+aesni_mb_pmd_session_mempool_create(struct rte_cryptodev *dev,
+		unsigned nb_objs, unsigned obj_cache_size, int socket_id)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	unsigned n = snprintf(internals->sess_mp_name,
+			sizeof(internals->sess_mp_name), "mb_cdev_%d_sess_mp",
+			dev->data->dev_id);
+
+	if (n >= sizeof(internals->sess_mp_name)) {
+		MB_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+	internals->sess_mp = rte_mempool_lookup(internals->sess_mp_name);
+	if (internals->sess_mp != NULL) {
+		if (internals->sess_mp->elt_size != sizeof(struct aesni_mb_session) ||
+				internals->sess_mp->cache_size < obj_cache_size ||
+				internals->sess_mp->size < nb_objs) {
+
+			MB_LOG_ERR("%s mempool already exists with different "
+					"initialisation parameters",
+					internals->sess_mp_name);
+			internals->sess_mp = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		internals->sess_mp = rte_mempool_create(
+				internals->sess_mp_name,	/* mempool name */
+				nb_objs,			/* number of elements*/
+				sizeof(struct aesni_mb_session),/* element size*/
+				obj_cache_size, 		/* Cache size*/
+				0,				/* private data size */
+				NULL,				/* mempool init function */
+				NULL,				/* mempool init argument */
+				session_obj_init,		/* obj init function */
+				NULL,				/* obj init argument */
+				socket_id,			/* socket id */
+				0);				/* flags */
+
+		if (internals->sess_mp == NULL) {
+			MB_LOG_ERR("%s mempool allocation failed",
+					internals->sess_mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_session *
+aesni_mb_pmd_create_session(struct rte_cryptodev *dev,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+	struct aesni_mb_session *sess  =
+			aesni_mb_get_session(internals->sess_mp);
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("failed to get session from mempool");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(
+			&job_ops[internals->vector_mode], sess,
+			cipher_setup_data, hash_setup_data, op_chain) != 0) {
+		aesni_mb_free_session(internals->sess_mp, sess);
+		return NULL;
+	}
+
+	return (struct rte_cryptodev_session *)sess;
+}
+
+static void
+aesni_mb_pmd_destroy_session(struct rte_cryptodev *dev,
+		struct rte_cryptodev_session *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (sess)
+		aesni_mb_free_session(internals->sess_mp,
+				(struct aesni_mb_session *)sess);
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_mp_create	= aesni_mb_pmd_session_mempool_create,
+
+		.session_create		= aesni_mb_pmd_create_session,
+		.session_destroy	= aesni_mb_pmd_destroy_session
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
+
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..c5c4a86
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,224 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args); \
+	} while (0)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) do { \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args); \
+	} while (0)
+
+#define MB_LOG_DBG(fmt, args...) do { \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args); \
+	} while (0)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define AESNI_MB_NAME_MAX_LENGTH	(64)
+#define AESNI_MB_MAX_NB_QUEUE_PAIRS	(4)
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the block size in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+
+	char sess_mp_name[AESNI_MB_NAME_MAX_LENGTH];
+	struct rte_mempool *sess_mp;
+
+	unsigned max_nb_qpairs;
+};
+
+struct aesni_mb_qp {
+	uint16_t id;				/**< Queue Pair Identifier */
+	char name[AESNI_MB_NAME_MAX_LENGTH];	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *mb_ops;	/**<
+						 * Architecture dependent function
+						 * pointer table of the multi-buffer
+						 * APIs */
+	MB_MGR mb_mgr;				/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;	/**< Ring for processed packets */
+	struct rte_mempool *sess_mp;		/**< Crypto Session mempool */
+
+	struct rte_cryptodev_stats qp_stats;	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI Multi buffer session */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	struct {
+		JOB_CIPHER_DIRECTION direction;	/**< Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_MODE mode;		/**< Cipher mode - CBC / Counter */
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[256] __rte_aligned(16);	/**< encode key */
+			uint32_t decode[256] __rte_aligned(16);	/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - allocating space to contain the
+		 * maximum expanded key size, which is 240 bytes for 256 bit
+		 * AES, calculated as: (16 byte block size) * ((number of rounds) + 1) */
+	} cipher;	/**< Cipher Parameters */
+
+	struct {
+		JOB_HASH_ALG algo;	/**< Authentication Algorithm */
+
+		struct {
+			uint8_t inner[128] __rte_aligned(16);	/**< inner pad */
+			uint8_t outer[128] __rte_aligned(16);	/**< outer pad */
+		} pads;
+		/**< HMAC Authentication pads - allocating space for the
+		 * maximum pad size supported, which is 128 bytes for SHA512 */
+
+		uint8_t digest[64] __rte_aligned(16);
+	} auth;	/**< Authentication Parameters */
+} __rte_cache_aligned;
+
+
+static inline struct aesni_mb_session *
+aesni_mb_get_session(struct rte_mempool *mempool)
+{
+	struct aesni_mb_session *sess;
+
+	if (rte_mempool_get(mempool, (void **)&sess))
+		return NULL;
+
+	return sess;
+}
+
+static inline void
+aesni_mb_free_session(struct rte_mempool *mempool,
+		struct aesni_mb_session *sess)
+{
+	rte_mempool_put(mempool, (void *)sess);
+}
+
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		struct rte_crypto_cipher_params *cparams,
+		struct rte_crypto_hash_params *aparams,
+		enum rte_crypto_operation_chain op_chain);
+
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
+
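
A note for reviewers on the HMAC pre-computes referenced in the header
above: the inner/outer pads cached in struct aesni_mb_session are derived
by XORing the authentication key with the HMAC_IPAD_VALUE/HMAC_OPAD_VALUE
constants across one hash block. A minimal standalone sketch of that
derivation (illustrative only, not part of this patch; it assumes the key
has already been reduced to at most one hash block, as longer keys are
hashed down first):

	#include <stdint.h>
	#include <string.h>

	#define HMAC_IPAD_VALUE	(0x36)
	#define HMAC_OPAD_VALUE	(0x5C)

	/* Derive the HMAC inner/outer pads for a key of key_len bytes
	 * (key_len <= block_size) and a hash with the given block size */
	static void
	hmac_derive_pads(const uint8_t *key, unsigned key_len,
			unsigned block_size, uint8_t *ipad, uint8_t *opad)
	{
		unsigned i;

		memset(ipad, HMAC_IPAD_VALUE, block_size);
		memset(opad, HMAC_OPAD_VALUE, block_size);

		for (i = 0; i < key_len; i++) {
			ipad[i] ^= key[i];
			opad[i] ^= key[i];
		}
	}

At session creation the PMD then runs one hash block over each pad and
uses the cached intermediate state when generating or verifying the
final HMAC for each operation.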
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..39cc84f
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,5 @@
+DPDK_2.2 {
+	global:
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5502cc4..496cbeb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
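
Build note: because of the -L flag added above, AESNI_MULTI_BUFFER_LIB_PATH
must point at a built copy of Intel's IPSec_MB library before DPDK is
compiled, for example (the path shown is illustrative only):

	export AESNI_MULTI_BUFFER_LIB_PATH=/path/to/intel-ipsec-mb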
-- 
1.9.3

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests
  2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
                   ` (2 preceding siblings ...)
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-08-20 14:07 ` Declan Doherty
  3 siblings, 0 replies; 8+ messages in thread
From: Declan Doherty @ 2015-08-20 14:07 UTC (permalink / raw)
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

Unit tests are run by using cryptodev_qat_autotest or
cryptodev_aesni_autotest from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device, there must be one
bound to the igb_uio kernel driver.
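
For example, with the test app built and a QAT device bound, a session
looks like the following (the PCI address and EAL options shown are
illustrative only):

	./tools/dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0
	./build/app/test -c f -n 4
	RTE>>cryptodev_qat_autotest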

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 app/test/Makefile                  |    7 +-
 app/test/test.c                    |   91 ++-
 app/test/test.h                    |   34 +-
 app/test/test_cryptodev.c          | 1079 +++++++++++++++++++++++++++
 app/test/test_cryptodev_perf.c     | 1438 ++++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c       |    6 +-
 app/test/test_link_bonding_mode4.c |    7 +-
 7 files changed, 2616 insertions(+), 46 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/app/test/Makefile b/app/test/Makefile
index e7f148f..0812487 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -140,11 +140,14 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding_mode4.c
 endif
 
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
 # Disable warnings of deprecated-declarations in test_kni.c
 ifeq ($(CC), icc)
diff --git a/app/test/test.c b/app/test/test.c
index e8992f4..19cfcb1 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,82 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
 	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+		printf(" + ------------------------------------------------------- +\n"
+				" + Test Suite : %s\n", suite->suite_name);
 
 	if (suite->setup)
-		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+		if (suite->setup() != 0) {
+			failed++;
+			goto suite_summary;
+		}
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown) {
+			suite->unit_test_cases[total].teardown();
 		}
 
-		i++;
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2u] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2u] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary\n");
+	printf(" + Tests Total :       %2u\n", total);
+	printf(" + Tests Skipped :     %2u\n", skipped);
+	printf(" + Tests Executed :    %2u\n", executed);
+	printf(" + Tests Passed :      %2u\n", succeeded);
+	printf(" + Tests Failed :      %2u\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2b33c0 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {         \
+	if (memcmp(a, b, len)) {                                         \
+		printf("TestCase %s() line %d failed: "                  \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);        \
+		return TEST_FAILED;                                      \
+	}                                                                \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1}
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1}
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
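
For illustration, the new enabled field lets a suite mix runnable and
disabled cases; disabled ones are reported as skipped by the runner. A
hypothetical suite declaration (the test function and setup/teardown
names here are examples only, not part of the patch):

	static struct unit_test_suite example_testsuite = {
		.suite_name = "Example Unit Test Suite",
		.setup = example_suite_setup,
		.teardown = example_suite_teardown,
		.unit_test_cases = {
			TEST_CASE(test_feature_x),
			TEST_CASE_DISABLED(test_feature_y), /* counted as skipped */
			TEST_CASES_END() /* NULL terminate the array */
		}
	};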
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..68cc0bf
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1079 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+
+#include "test.h"
+
+#define HEX_DUMP 0
+
+#define MAX_NUM_OPS_INFLIGHT		(RTE_LIBRTE_PMD_QAT_MAX_SESSIONS)
+#define MIN_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_OPS_INFLIGHT	(128)
+
+#define NUM_MBUFS			(8191)
+#define MBUF_CACHE_SIZE			(250)
+#define MBUF_SIZE	(1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MAX_NUM_QPS_PER_QAT_DEVICE	(2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE	(1)
+
+#define FALSE			0
+#define TRUE			1
+
+#define BYTE_LENGTH(x) ((x) / 8)
+
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_SHA1		(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA256	(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA512	(BYTE_LENGTH(512))
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *crypto_op_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_MAX_CRYPTODEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_cipher_params cipher_params;
+	struct rte_crypto_hash_params hash_params;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op_data *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption\n");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static void
+free_testsuite_mbufs(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	/* free mbufs - obuf and ibuf usually point at the same mbuf, so
+	 * make sure the underlying mbuf is only freed once */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		if (ut_params->ibuf == ut_params->obuf)
+			ut_params->ibuf = 0;
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create("CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->crypto_op_pool = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, rte_socket_id());
+	if (ts_params->crypto_op_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int ret = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(ret >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type) {
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+			break;
+		}
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+
+	/* Set up all the qps on all of the devices found */
+	for (i = 0; i < ts_params->valid_dev_count; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		/* Since we can't free and re-allocate queue memory, always set
+		 * the queues on this device up to the maximum size first, so
+		 * enough memory is allocated for any later re-configuration
+		 * needed by other tests */
+
+		ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs =
+				(gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+						RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+						RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < DEFAULT_NUM_QPS_PER_QAT_DEVICE; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->crypto_op_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->crypto_op_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/* Now reconfigure queues to size we actually want to use in this
+	 * test suite. */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	/* just in case test didn't free mbufs */
+	free_testsuite_mbufs();
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_512_BYTES		(512)
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0X4D, 0XDA, 0X1B, 0XCF, 0X04, 0XA0, 0X31, 0XB4, 0XBF, 0XBD, 0X68, 0X43, 0X20, 0X7E, 0X76,
+	0XB1, 0X96, 0X8B, 0XA2, 0X7C, 0XA2, 0X83, 0X9E, 0X39, 0X5A, 0X2F, 0X7E, 0X92, 0XB4, 0X48, 0X1A,
+	0X3F, 0X6B, 0X5D, 0XDF, 0X52, 0X85, 0X5F, 0X8E, 0X42, 0X3C, 0XFB, 0XE9, 0X1A, 0X24, 0XD6, 0X08,
+	0XDD, 0XFD, 0X16, 0XFB, 0XE9, 0X55, 0XEF, 0XF0, 0XA0, 0X8D, 0X13, 0XAB, 0X81, 0XC6, 0X90, 0X01,
+	0XB5, 0X18, 0X84, 0XB3, 0XF6, 0XE6, 0X11, 0X57, 0XD6, 0X71, 0XC6, 0X3C, 0X3F, 0X2F, 0X33, 0XEE,
+	0X24, 0X42, 0X6E, 0XAC, 0X0B, 0XCA, 0XEC, 0XF9, 0X84, 0XF8, 0X22, 0XAA, 0X60, 0XF0, 0X32, 0XA9,
+	0X75, 0X75, 0X3B, 0XCB, 0X70, 0X21, 0X0A, 0X8D, 0X0F, 0XE0, 0XC4, 0X78, 0X2B, 0XF8, 0X97, 0XE3,
+	0XE4, 0X26, 0X4B, 0X29, 0XDA, 0X88, 0XCD, 0X46, 0XEC, 0XAA, 0XF9, 0X7F, 0XF1, 0X15, 0XEA, 0XC3,
+	0X87, 0XE6, 0X31, 0XF2, 0XCF, 0XDE, 0X4D, 0X80, 0X70, 0X91, 0X7E, 0X0C, 0XF7, 0X26, 0X3A, 0X92,
+	0X4F, 0X18, 0X83, 0XC0, 0X8F, 0X59, 0X01, 0XA5, 0X88, 0XD1, 0XDB, 0X26, 0X71, 0X27, 0X16, 0XF5,
+	0XEE, 0X10, 0X82, 0XAC, 0X68, 0X26, 0X9B, 0XE2, 0X6D, 0XD8, 0X9A, 0X80, 0XDF, 0X04, 0X31, 0XD5,
+	0XF1, 0X35, 0X5C, 0X3B, 0XDD, 0X9A, 0X65, 0XBA, 0X58, 0X34, 0X85, 0X61, 0X1C, 0X42, 0X10, 0X76,
+	0X73, 0X02, 0X42, 0XC9, 0X23, 0X18, 0X8E, 0XB4, 0X6F, 0XB4, 0XA3, 0X54, 0X6E, 0X88, 0X3B, 0X62,
+	0X7C, 0X02, 0X8D, 0X4C, 0X9F, 0XC8, 0X45, 0XF4, 0XC9, 0XDE, 0X4F, 0XEB, 0X22, 0X83, 0X1B, 0XE4,
+	0X49, 0X37, 0XE4, 0XAD, 0XE7, 0XCD, 0X21, 0X54, 0XBC, 0X1C, 0XC2, 0X04, 0X97, 0XB4, 0X10, 0X61,
+	0XF0, 0XE4, 0XEF, 0X27, 0X63, 0X3A, 0XDA, 0X91, 0X41, 0X25, 0X62, 0X1C, 0X5C, 0XB6, 0X38, 0X4A,
+	0X88, 0X71, 0X59, 0X5A, 0X8D, 0XA0, 0X09, 0XAF, 0X72, 0X94, 0XD7, 0X79, 0X5C, 0X60, 0X7C, 0X8F,
+	0X4C, 0XF5, 0XD9, 0XA1, 0X39, 0X6D, 0X81, 0X28, 0XEF, 0X13, 0X28, 0XDF, 0XF5, 0X3E, 0XF7, 0X8E,
+	0X09, 0X9C, 0X78, 0X18, 0X79, 0XB8, 0X68, 0XD7, 0XA8, 0X29, 0X62, 0XAD, 0XDE, 0XE1, 0X61, 0X76,
+	0X1B, 0X05, 0X16, 0XCD, 0XBF, 0X02, 0X8E, 0XA6, 0X43, 0X6E, 0X92, 0X55, 0X4F, 0X60, 0X9C, 0X03,
+	0XB8, 0X4F, 0XA3, 0X02, 0XAC, 0XA8, 0XA7, 0X0C, 0X1E, 0XB5, 0X6B, 0XF8, 0XC8, 0X4D, 0XDE, 0XD2,
+	0XB0, 0X29, 0X6E, 0X40, 0XE6, 0XD6, 0XC9, 0XE6, 0XB9, 0X0F, 0XB6, 0X63, 0XF5, 0XAA, 0X2B, 0X96,
+	0XA7, 0X16, 0XAC, 0X4E, 0X0A, 0X33, 0X1C, 0XA6, 0XE6, 0XBD, 0X8A, 0XCF, 0X40, 0XA9, 0XB2, 0XFA,
+	0X63, 0X27, 0XFD, 0X9B, 0XD9, 0XFC, 0XD5, 0X87, 0X8D, 0X4C, 0XB6, 0XA4, 0XCB, 0XE7, 0X74, 0X55,
+	0XF4, 0XFB, 0X41, 0X25, 0XB5, 0X4B, 0X0A, 0X1B, 0XB1, 0XD6, 0XB7, 0XD9, 0X47, 0X2A, 0XC3, 0X98,
+	0X6A, 0XC4, 0X03, 0X73, 0X1F, 0X93, 0X6E, 0X53, 0X19, 0X25, 0X64, 0X15, 0X83, 0XF9, 0X73, 0X2A,
+	0X74, 0XB4, 0X93, 0X69, 0XC4, 0X72, 0XFC, 0X26, 0XA2, 0X9F, 0X43, 0X45, 0XDD, 0XB9, 0XEF, 0X36,
+	0XC8, 0X3A, 0XCD, 0X99, 0X9B, 0X54, 0X1A, 0X36, 0XC1, 0X59, 0XF8, 0X98, 0XA8, 0XCC, 0X28, 0X0D,
+	0X73, 0X4C, 0XEE, 0X98, 0XCB, 0X7C, 0X58, 0X7E, 0X20, 0X75, 0X1E, 0XB7, 0XC9, 0XF8, 0XF2, 0X0E,
+	0X63, 0X9E, 0X05, 0X78, 0X1A, 0XB6, 0XA8, 0X7A, 0XF9, 0X98, 0X6A, 0XA6, 0X46, 0X84, 0X2E, 0XF6,
+	0X4B, 0XDC, 0X9B, 0X8F, 0X9B, 0X8F, 0XEE, 0XB4, 0XAA, 0X3F, 0XEE, 0XC0, 0X37, 0X27, 0X76, 0XC7,
+	0X95, 0XBB, 0X26, 0X74, 0X69, 0X12, 0X7F, 0XF1, 0XBB, 0XFF, 0XAE, 0XB5, 0X99, 0X6E, 0XCB, 0X0C
+};
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->hash_params.auth_key.data = hmac_sha1_key;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->hash_params.auth_key.data = hmac_sha1_key;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	ut_params->op = ut_params->obuf->crypto_op;
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->hash_params.auth_key.data = hmac_sha256_key;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->hash_params.auth_key.data = hmac_sha256_key;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+							CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification */
+	ut_params->op = ut_params->obuf->crypto_op;
+
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->hash_params.auth_key.data = hmac_sha512_key;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params)
+			== TEST_SUCCESS, "Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_params, &ut_params->hash_params,
+			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->hash_params.auth_key.data = hmac_sha512_key;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification */
+	ut_params->op = ut_params->obuf->crypto_op;
+
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	/* Free crypto operation structure and buffers */
+	if (ut_params->op) {
+		rte_crypto_op_free(ut_params->op);
+		ut_params->op = NULL;
+	}
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+static struct unit_test_suite cryptodev_testsuite = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_cmd = {
+	.command = "cryptodev_aesni_autotest",
+	.callback = test_cryptodev_aesni,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_cmd);
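
To summarise the flow the cases above exercise: each test drives one
symmetric crypto operation through the proposed API in four steps. A
condensed sketch (setup of the parameter structs and all error handling
omitted; variable names are illustrative):

	/* 1. create a session from the immutable cipher/hash parameters */
	sess = rte_cryptodev_session_create(dev_id, &cipher_params,
			&hash_params, RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);

	/* 2. allocate a per-packet operation and attach the session */
	op = rte_crypto_op_alloc(crypto_op_pool);
	rte_crypto_op_attach_session(op, sess);

	/* 3. fill in the mutable per-packet fields */
	op->iv.data = iv;
	op->digest.data = digest;
	op->data.to_cipher.offset = iv_len;
	op->data.to_cipher.length = data_len;
	op->data.to_hash.offset = iv_len;
	op->data.to_hash.length = data_len;

	/* 4. attach the op to its mbuf and enqueue/dequeue on a queue pair */
	rte_pktmbuf_attach_crypto_op(mbuf, op);
	rte_cryptodev_enqueue_burst(dev_id, 0, &mbuf, 1);
	while (rte_cryptodev_dequeue_burst(dev_id, 0, &mbuf, 1) == 0)
		rte_pause();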
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..80f37e7
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,1438 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+
+#define BYTE_LENGTH(x) ((x) / 8)
+
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5		(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1		(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224	(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256	(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384	(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512	(BYTE_LENGTH(512))
+
+#define MAX_NUM_OPS_INFLIGHT		(4096)
+#define MIN_NUM_OPS_INFLIGHT		(128)
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE	(2)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+#define DEFAULT_BURST_SIZE		(64)
+
+#define NUM_MBUFS			(8191)
+#define MBUF_CACHE_SIZE			(250)
+#define MBUF_SIZE	(2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+			sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
+
+#define FALSE						0
+#define TRUE						1
+
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *crypto_op_mp;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+struct crypto_unittest_perf_params {
+	const char *name;
+	uint32_t iter_count;
+
+	struct {
+		uint64_t start;
+		uint64_t run;
+		uint64_t min;
+		uint64_t max;
+		uint64_t accumulated;
+	} cycles;
+};
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_cipher_params cipher_params;
+	struct rte_crypto_hash_params hash_params;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op_data *op;
+
+	struct crypto_unittest_perf_params perf;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
+/* Allocate an mbuf from mpool and fill it with the test string, truncated
+ * down to a whole number of cipher blocks when blocksize is non-zero. */
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
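[Editor's note] Because the helper truncates the input down to a whole number of cipher blocks, callers can hand it arbitrary-length text without pre-padding. For example, a test preparing a 512-byte AES-CBC input from the quote defined further down in this file might do (a sketch using only names from this file):

	struct rte_mbuf *m = setup_test_string(ts_params->mbuf_mp,
			plaintext_quote, QUOTE_LEN_512B, 16 /* AES block size */);
	if (m == NULL)
		return TEST_FAILED;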
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_perftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->crypto_op_mp = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, rte_socket_id());
+	if (ts_params->crypto_op_mp == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first valid device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_perftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/* Since we can't free and re-allocate queue memory, always set the
+	 * queues on this device up to max size first so that enough memory is
+	 * allocated for any later re-configuration needed by other tests.
+	 */
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs =
+			(gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure queues to the size we actually want to use in this
+	 * testsuite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
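[Editor's note] The two-phase queue sizing above is worth calling out: because queue memory cannot be freed and re-allocated, the rings are first sized to MAX_NUM_OPS_INFLIGHT so the largest allocation happens once, then reconfigured down to PERF_NUM_OPS_INFLIGHT for the actual runs. Condensed to its essence for a single queue pair:

	/* Condensed sketch of the sizing sequence used in testsuite_setup() */
	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;	/* allocate worst case */
	rte_cryptodev_queue_pair_setup(dev_id, qp_id, &qp_conf,
			rte_cryptodev_socket_id(dev_id));

	qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;	/* shrink to working size */
	rte_cryptodev_queue_pair_setup(dev_id, qp_id, &qp_conf,
			rte_cryptodev_socket_id(dev_id));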
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+}
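[Editor's note] ut_teardown() samples the device stats after every case but currently discards them; a perf harness would normally log the per-test deltas. A sketch, assuming struct rte_cryptodev_stats carries the enqueued/dequeued counters proposed in this series (and that <inttypes.h> is included for PRIu64):

	struct rte_cryptodev_stats stats;

	/* Report per-test throughput counters */
	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
	RTE_LOG(DEBUG, USER1, "dev %u: enqueued %" PRIu64 ", dequeued %" PRIu64 "\n",
			ts_params->dev_id, stats.enqueued_count,
			stats.dequeued_count);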
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
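[Editor's note] These sizes define the buffer-length sweep for the perf runs. A sketch of how a test would iterate them against the quote above, using only names defined in this file (RTE_DIM comes from rte_common.h):

	static const unsigned quote_sizes[] = {
		QUOTE_LEN_64B, QUOTE_LEN_128B, QUOTE_LEN_256B, QUOTE_LEN_512B,
		QUOTE_LEN_768B, QUOTE_LEN_1024B, QUOTE_LEN_1280B,
		QUOTE_LEN_1536B, QUOTE_LEN_1792B, QUOTE_LEN_2048B
	};
	unsigned i;

	for (i = 0; i < RTE_DIM(quote_sizes); i++) {
		struct rte_mbuf *m = setup_test_string(ts_params->mbuf_mp,
				plaintext_quote, quote_sizes[i], 0);
		/* ... build op, enqueue, time the dequeue, then free m ... */
	}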
+
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xdd, 0xc6, 0xb8, 0xca, 0x55, 0x7a,
+		0x58, 0x34, 0x85, 0x61, 0x1c, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+
+/* Cipher text output */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50, 0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36, 0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c, 0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6, 0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4, 0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d, 0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d, 0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5, 0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99, 0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6, 0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e, 0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97, 0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa, 0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36, 0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41, 0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7, 0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85, 0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5, 0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6, 0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe, 0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1, 0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e, 0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07, 0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26, 0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80, 0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76, 0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7, 0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf, 0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70, 0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26, 0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30, 0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64, 0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb, 0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c, 0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a, 0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19, 0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23, 0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf, 0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe, 0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d, 0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed, 0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36, 0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7, 0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3, 0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e, 0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49, 0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf, 0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12, 0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76, 0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08, 0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b, 0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9, 0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c, 0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77, 0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb, 0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86, 0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e, 0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b, 0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11, 0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c, 0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69, 0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1, 0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06, 0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e, 0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08, 0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe, 0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f, 0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23, 0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26, 0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c, 0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7, 0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb, 0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2, 0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38, 0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02, 0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28, 0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03, 0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07, 0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61, 0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef, 0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90, 0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0, 0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3, 0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0, 0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0, 0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7, 0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9, 0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d, 0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6, 0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43, 0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1, 0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d, 0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf, 0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b, 0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07, 0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53, 0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35, 0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6, 0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3, 0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40, 0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08, 0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed, 0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7, 0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f, 0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48, 0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68, 0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1, 0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9, 0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72, 0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26, 0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8, 0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25, 0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9, 0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7, 0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7, 0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab, 0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb, 0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54, 0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38, 0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22, 0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde, 0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc, 0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33, 0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13, 0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c, 0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99, 0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48, 0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90, 0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d, 0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d, 0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff, 0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1, 0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f, 0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6, 0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40, 0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48, 0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41, 0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca, 0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07, 0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9, 0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58, 0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39, 0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53, 0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2, 0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2, 0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83, 0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2, 0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54, 0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f, 0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c, 0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f, 0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e, 0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4, 0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf, 0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27, 0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18, 0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e, 0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63, 0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67, 0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda, 0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48, 0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6, 0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d, 0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac, 0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32, 0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7, 0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26, 0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b, 0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2, 0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a, 0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1, 0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24, 0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45, 0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89, 0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82, 0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1, 0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6, 0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3, 0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4, 0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39, 0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58, 0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe, 0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45, 0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5, 0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a, 0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12, 0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c, 0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f, 0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4, 0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04, 0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5, 0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d, 0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab, 0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e, 0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27, 0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f, 0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b, 0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd, 0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3, 0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2, 0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21, 0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff, 0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc, 0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8, 0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb, 0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10, 0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05, 0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23, 0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab, 0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88, 0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83, 0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74, 0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7, 0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18, 0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee, 0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f, 0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97, 0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17, 0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67, 0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b, 0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc, 0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e, 0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2, 0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92, 0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92, 0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b, 0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5, 0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5, 0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a, 0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69, 0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d, 0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14, 0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80, 0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0, 0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76, 0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd, 0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad, 0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61, 0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd, 0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67, 0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35, 0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd, 0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87, 0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda, 0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4, 0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30, 0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c, 0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5, 0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06, 0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f, 0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d, 0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9, 0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62, 0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53, 0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a, 0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0, 0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b, 0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8, 0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a, 0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a, 0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0, 0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9, 0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce, 0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5, 0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09, 0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b, 0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13, 0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24, 0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3, 0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3, 0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f, 0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8, 0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47, 0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d, 0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f, 0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6, 0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e, 0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a, 0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51, 0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e, 0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee, 0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3, 0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99, 0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3, 0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8, 0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf, 0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10, 0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c, 0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c, 0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d, 0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72, 0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0, 0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58, 0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe, 0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13, 0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b, 0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15, 0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85, 0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c, 0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8, 0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77, 0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42, 0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59, 0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32, 0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6, 0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb, 0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48, 0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2, 0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38, 0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f, 0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb, 0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7, 0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06, 0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61, 0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b, 0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29, 0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc, 0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0, 0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37, 0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26, 0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69, 0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba, 0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4, 0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba, 0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f, 0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77, 0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30, 0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70, 0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54, 0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e, 0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e, 0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3, 0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13, 0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda, 0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28, 0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5, 0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98, 0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2, 0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb, 0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83, 0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96, 0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76, 0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b, 0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59, 0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4, 0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9, 0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42, 0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf, 0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1, 0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48, 0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac, 0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11, 0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1, 0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f, 0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5, 0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19, 0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6, 0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19, 0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45, 0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20, 0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d, 0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7, 0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25, 0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42, 0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49, 0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9, 0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8, 0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86, 0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0, 0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc, 0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a, 0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04, 0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c, 0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92, 0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25, 0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11, 0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73, 0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96, 0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67, 0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2, 0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a, 0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e, 0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74, 0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef, 0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e, 0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e, 0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf, 0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3, 0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff, 0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9, 0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8, 0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e, 0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91, 0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f, 0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27, 0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d, 0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3, 0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e, 0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7, 0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f, 0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca, 0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f, 0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19, 0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea, 0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02, 0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04, 0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25, 0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84, 0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9, 0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0, 0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98, 0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51, 0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c, 0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a, 0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a, 0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38, 0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca, 0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b, 0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a, 0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06, 0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3, 0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e, 0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00, 0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b, 0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c, 0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa, 0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d, 0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a, 0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3, 0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58, 0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17, 0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4, 0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b, 0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92, 0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b, 0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98, 0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd, 0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d, 0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9, 0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02, 0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03, 0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07, 0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7, 0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96, 0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68, 0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1, 0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46, 0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37, 0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde, 0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0, 0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e, 0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d, 0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48, 0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43, 0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40, 0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d, 0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9, 0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c, 0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3, 0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34, 0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c, 0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07, 0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c, 0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81, 0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5, 0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde, 0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55, 0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f, 0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff, 0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62, 0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60, 0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4, 0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79, 0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9, 0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32, 0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c, 0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b, 0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a, 0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61, 0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca, 0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae, 0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c, 0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b, 0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e, 0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f, 0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05, 0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9, 0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a, 0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f, 0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34, 0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c, 0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e, 0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b, 0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44, 0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8, 0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b, 0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8, 0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf, 0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4, 0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8, 0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0, 0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59, 0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c, 0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75, 0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd, 0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6, 0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41, 0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9, 0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b, 0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe, 0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1, 0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0, 0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f, 0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf, 0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85, 0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12, 0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82, 0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83, 0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14, 0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c, 0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f, 0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1, 0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d, 0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5, 0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7, 0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85, 0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41, 0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0, 0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca, 0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83, 0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93, 0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6, 0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf, 0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c, 0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73, 0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb, 0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5, 0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24, 0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3, 0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a, 0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8, 0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca, 0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba, 0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a, 0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9, 0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b, 0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b, 0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b, 0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf, 0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69, 0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c, 0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3, 0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15, 0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68, 0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4, 0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2, 0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a, 0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45, 0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8, 0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9, 0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7, 0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34, 0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed, 0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf, 0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5, 0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e, 0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36, 0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40, 0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9, 0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa, 0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3, 0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22, 0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e, 0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89, 0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba, 0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97, 0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4, 0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f, 0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7, 0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6, 0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37, 0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76, 0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02, 0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b, 0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52, 0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f, 0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3, 0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83, 0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
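+/*
+ * Each entry points at the trailing 'length' bytes of plaintext_quote
+ * (excluding the terminating NUL), so every request size is carved from the
+ * tail of a single shared source buffer.
+ */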
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+		{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64], { AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+		{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128], { AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+		{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256], { AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+		{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512], { AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+		{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768], { AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+		{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024], { AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+		{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280], { AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+		{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536], { AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+		{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792], { AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+		{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048], { AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048, max_outstanding_reqs = 512;
+	struct rte_mbuf *rx_mbufs[max_outstanding_reqs], *tx_mbufs[max_outstanding_reqs];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
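+	/* 16-byte AES-128 key: the AES-CBC IV length constant happens to match
+	 * the key length here */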
+	ut_params->cipher_params.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->hash_params.auth_key.data = hmac_sha256_key;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_params, &ut_params->hash_params,
+		RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < max_outstanding_reqs ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_crypto_op_data *cop = rte_crypto_op_alloc(ts_params->crypto_op_mp);
+		TEST_ASSERT_NOT_NULL(cop, "Failed to allocate crypto_op");
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
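+		/* the digest was appended immediately after the payload above, so
+		 * its physical address is the mbuf data address at offset 'length' */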
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b], data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b], CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_attach_crypto_op(tx_mbufs[b], cop);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC algorithm with "
+			"a constant request size of %u.", data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost (assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+						((num_to_submit-num_sent) < burst_size) ?
+						num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < max_outstanding_reqs ; b++) {
+		rte_crypto_op_free(tx_mbufs[b]->crypto_op);
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_params.key.data = aes_cbc_key;
+	ut_params->cipher_params.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->hash_params.auth_key.data = hmac_sha256_key;
+	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_params, &ut_params->hash_params,
+		RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send AES128_CBC_SHA256_HMAC requests "
+		"with a constant burst size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\tMrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext, data_params[index].length, 0);
+			TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+			rte_memcpy(ut_params->digest, data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_crypto_op_data *cop = rte_crypto_op_alloc(ts_params->crypto_op_mp);
+			TEST_ASSERT_NOT_NULL(cop, "Failed to allocate crypto_op");
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_attach_crypto_op(tx_mbufs[b], cop);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
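+		/*
+		 * mhz is TSC ticks per microsecond, so requests/ticks * mhz is
+		 * requests per microsecond, i.e. millions of requests per second
+		 * (Mrps); multiplying by the payload size in bits gives Mbps.
+		 */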
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0, data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			rte_crypto_op_free(tx_mbufs[b]->crypto_op);
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
-- 
1.9.3

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-08-20 19:07   ` Neil Horman
  2015-08-21 14:02     ` Declan Doherty
  2015-09-15 16:36     ` [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes Declan Doherty
  0 siblings, 2 replies; 8+ messages in thread
From: Neil Horman @ 2015-08-20 19:07 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

On Thu, Aug 20, 2015 at 03:07:20PM +0100, Declan Doherty wrote:
> Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
> Co-authored-by: John Griffin <john.griffin@intel.com>
> Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>
> 
> This patch contains the initial proposed APIs and device framework for
> integrating crypto packet processing into DPDK.
> 
> features include:
>  - Crypto device configuration / management APIs
>  - Definitions of supported cipher algorithms and operations.
>  - Definitions of supported hash/authentication algorithms and
>    operations.
>  - Crypto session management APIs
>  - Crypto operation data structures and APIs allocation of crypto
>    operation structure used to specify the crypto operations to
>    be performed  on a particular mbuf.
>  - Extension of mbuf to contain crypto operation data pointer and
>    extra flags.
>  - Burst enqueue / dequeue APIs for processing of crypto operations.
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Hey, only had a quick read so some of this might be off base, but a few comments
in line

><snip>
> index 0000000..b776609
> --- /dev/null
> +++ b/lib/librte_cryptodev/rte_crypto.h
> @@ -0,0 +1,649 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_CRYPTO_H_
> +#define _RTE_CRYPTO_H_
> +
> +/**
> + * @file rte_crypto.h
> + *
> + * RTE Cryptographic Definitions
> + *
> + * Defines symmetric cipher and authentication algorithms and modes, as well
> + * as supported symmetric crypto operation combinations.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <rte_mbuf.h>
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
> +
> +/**
> + * This enumeration lists different types of crypto operations supported by rte
> + * crypto devices. The operation type is defined during session registration and
> + * cannot be changed for a session once it has been setup, or if using a
> + * session-less crypto operation it is defined within the crypto operation
> + * op_params.
> + */
> +enum rte_crypto_operation_chain {
> +	RTE_CRYPTO_SYM_OP_CIPHER_ONLY,
> +	/**< Cipher only operation on the data */
> +	RTE_CRYPTO_SYM_OP_HASH_ONLY,
> +	/**< Hash only operation on the data */
> +	RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER,
> +	/**<
> +	 * Chain a hash followed by any cipher operation.
> +	 *
> +	 * If it is required that the result of the hash (i.e. the digest)
> +	 * is going to be included in the data to be ciphered, then:
> +	 *
> +	 * - The digest MUST be placed in the destination buffer at the
> +	 *   location corresponding to the end of the data region to be hashed
> +	 *   (hash_start_offset + message length to hash),  i.e. there must be
> +	 *   no gaps between the start of the digest and the end of the data
> +	 *   region to be hashed.
> +	 *
> +	 * - The message length to cipher member of the rte_crypto_op_data
> +	 *   structure must be equal to the overall length of the plain text,
> +	 *   the digest length and any (optional) trailing data that is to be
> +	 *   included.
> +	 *
> +	 * - The message length to cipher must be a multiple of the block
> +	 *   size if a block cipher is being used - the implementation does not
> +	 *   pad.
> +	 */
> +	RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH,
> +	/**<
> +	 * Chain any cipher followed by any hash operation. The hash operation
> +	 * will be performed on the ciphertext resulting from the cipher
> +	 * operation.
> +	 */
> +};

So, this seems a bit...not sure what the word is...specific perhaps?  For an API.
That is to say, if an underlying device supports performing multiple operations
in a single transaction, I'm not sure that should be exposed in this way.  As
the number of devices and chain combinations grows, so too will this list, and if
there are multiple similar (but distinct) chain operations, an application will
have to know which chains are applicable to which devices, which is sub-optimal.

Instead, perhaps it would be better to simply enumerate the list of crypto
primitives that a device supports (HASH/CIPHER/etc), and allow the application
to define the desired chain when creating a session.  That is to say, an
application can create a session that requests a given chain of operations
(using an array of the primitive enum perhaps).  The implementing PMD is then
responsible for implementing that chain in hardware or software if need be.  If
you need to report on the disposition of the implementation, you can do so via
return code (i.e. SESSION_OK_HW_IMPL or SESSION_OK_SW_IMPL).  

Using this method, more complex operations can be implemented with chain macros
(e.g. AEAD is just an array of primitives [CIPHER, HASH])
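Purely to sketch the shape I mean - every name below is hypothetical, nothing
here is from the patch:

	enum rte_crypto_primitive {
		RTE_CRYPTO_PRIM_CIPHER,
		RTE_CRYPTO_PRIM_HASH,
	};

	/* the application declares the chain it wants; the PMD decides how
	 * (hardware or software) to implement it */
	enum rte_crypto_primitive aead_chain[] = {
		RTE_CRYPTO_PRIM_CIPHER,
		RTE_CRYPTO_PRIM_HASH,
	};

	int disposition;
	struct rte_cryptodev_session *sess =
		rte_cryptodev_session_create_chain(dev_id, aead_chain,
				RTE_DIM(aead_chain), &cipher_params,
				&hash_params, &disposition);
	/* disposition reports e.g. SESSION_OK_HW_IMPL vs SESSION_OK_SW_IMPL */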

><snip>
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> new file mode 100644
> index 0000000..a1797ce
> --- /dev/null
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -0,0 +1,966 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <sys/types.h>
> +#include <sys/queue.h>
> +#include <ctype.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <stdarg.h>
> +#include <errno.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <netinet/in.h>
> +
> +#include <rte_byteorder.h>
> +#include <rte_log.h>
> +#include <rte_debug.h>
> +#include <rte_dev.h>
> +#include <rte_interrupts.h>
> +#include <rte_pci.h>
> +#include <rte_memory.h>
> +#include <rte_memcpy.h>
> +#include <rte_memzone.h>
> +#include <rte_launch.h>
> +#include <rte_tailq.h>
> +#include <rte_eal.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_atomic.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_common.h>
> +#include <rte_ring.h>
> +#include <rte_mempool.h>
> +#include <rte_malloc.h>
> +#include <rte_mbuf.h>
> +#include <rte_errno.h>
> +#include <rte_spinlock.h>
> +#include <rte_string_fns.h>
> +
> +#include "rte_crypto.h"
> +#include "rte_cryptodev.h"
> +
> +
> +
> +/* Macros to check for invalid function pointers in dev_ops structure */
> +#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
> +	if ((func) == NULL) { \
> +		CDEV_LOG_ERR("Function not supported"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define PROC_PRIMARY_OR_RET() do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		CDEV_LOG_ERR("Cannot run in secondary processes"); \
> +		return; \
> +	} \
> +} while (0)
> +
> +#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		CDEV_LOG_ERR("Cannot run in secondary processes"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define FUNC_PTR_OR_RET(func) do { \
> +	if ((func) == NULL) { \
> +		CDEV_LOG_ERR("Function not supported"); \
> +		return; \
> +	} \
> +} while (0)
> +
These are all defined in rte_ethdev.  You should just move those to a public
place rather than re-creating them

> +struct rte_cryptodev rte_crypto_devices[RTE_MAX_CRYPTODEVS];
> +
> +static struct rte_cryptodev_global cryptodev_globals = {
> +		.devs			= &rte_crypto_devices[0],
> +		.data			= NULL,
> +		.nb_devs		= 0,
> +		.max_devs		= RTE_MAX_CRYPTODEVS
> +};
> +
> +struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
> +
> +/* spinlock for crypto device callbacks */
> +static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> +
> +
> +/**
> + * The user application callback description.
> + *
> + * It contains callback address to be registered by user application,
> + * the pointer to the parameters for callback, and the event type.
> + */
> +struct rte_cryptodev_callback {
> +	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
> +	rte_cryptodev_cb_fn cb_fn;                /**< Callback address */
> +	void *cb_arg;                           /**< Parameter for callback */
> +	enum rte_cryptodev_event_type event;          /**< Interrupt event type */
> +	uint32_t active;                        /**< Callback is executing */
> +};
> +
> +int
> +rte_cryptodev_create_vdev(const char *name, const char *args)
> +{
> +	return rte_eal_vdev_init(name, args);
> +}
> +
> +
> +static inline void
> +rte_cryptodev_data_alloc(int socket_id)
> +{
> +	const unsigned flags = 0;
> +	const struct rte_memzone *mz;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		mz = rte_memzone_reserve("rte_cryptodev_data",
> +				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data),
> +				socket_id, flags);
> +	} else
> +		mz = rte_memzone_lookup("rte_cryptodev_data");
> +	if (mz == NULL)
> +		rte_panic("Cannot allocate memzone for the crypto device data");
> +
> +	cryptodev_globals.data = mz->addr;
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		memset(cryptodev_globals.data, 0,
> +				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data));
> +}
> +
> +
> +static uint8_t
> +rte_cryptodev_find_free_device_index(void)
> +{
> +	uint8_t dev_id;
> +
> +	for (dev_id = 0; dev_id < RTE_MAX_CRYPTODEVS; dev_id++) {
> +		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
> +			return dev_id;
> +	}
> +	return RTE_MAX_CRYPTODEVS;
> +}
> +
> +struct rte_cryptodev *
> +rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
> +{
> +	uint8_t dev_id;
> +	struct rte_cryptodev *cryptodev;
> +
> +	dev_id = rte_cryptodev_find_free_device_index();
> +	if (dev_id == RTE_MAX_CRYPTODEVS) {
> +		CDEV_LOG_ERR("Reached maximum number of crypto devices");
> +		return NULL;
> +	}
> +
> +	if (cryptodev_globals.data == NULL)
> +		rte_cryptodev_data_alloc(socket_id);
> +
> +	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
> +		CDEV_LOG_ERR("Crypto device with name %s already "
> +				"allocated!", name);
> +		return NULL;
> +	}
> +
> +	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
> +	cryptodev->data = &cryptodev_globals.data[dev_id];
> +	snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s", name);
> +	cryptodev->data->dev_id = dev_id;
> +	cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
> +	cryptodev->pmd_type = type;
> +	cryptodev_globals.nb_devs++;
> +
> +	return cryptodev;
> +}
> +
> +static inline int
> +rte_cryptodev_create_unique_device_name(char *name, size_t size,
> +		struct rte_pci_device *pci_dev)
> +{
> +	int ret;
> +
> +	if ((name == NULL) || (pci_dev == NULL))
> +		return -EINVAL;
> +
> +	ret = snprintf(name, size, "%d:%d.%d",
> +			pci_dev->addr.bus, pci_dev->addr.devid,
> +			pci_dev->addr.function);
> +	if (ret < 0)
> +		return ret;
> +	return 0;
> +}
> +
> +int
> +rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
> +{
> +	if (cryptodev == NULL)
> +		return -EINVAL;
> +
> +	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
> +	cryptodev_globals.nb_devs--;
> +	return 0;
> +}
> +
> +struct rte_cryptodev *
> +rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
> +		int socket_id)
> +{
> +	struct rte_cryptodev *cryptodev;
> +
> +	/* allocate device structure */
> +	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
> +	if (cryptodev == NULL)
> +		return NULL;
> +
> +	/* allocate private device structure */
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		cryptodev->data->dev_private =
> +				rte_zmalloc("%s private structure",
> +						dev_private_size,
> +						RTE_CACHE_LINE_SIZE);
> +
> +		if (cryptodev->data->dev_private == NULL)
> +			rte_panic("Cannot allocate memzone for private device"
> +					" data");
> +	}
> +
> +	/* initialise user call-back tail queue */
> +	TAILQ_INIT(&(cryptodev->link_intr_cbs));
> +
> +	return cryptodev;
> +}
> +
> +static int
> +rte_cryptodev_init(struct rte_pci_driver *pci_drv,
> +		struct rte_pci_device *pci_dev)
> +{
> +	struct rte_cryptodev_driver *cryptodrv;
> +	struct rte_cryptodev *cryptodev;
> +
> +	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +
> +	int retval;
> +
> +	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
> +	if (cryptodrv == NULL)
> +		return -ENODEV;
> +
> +	/* Create unique Crypto device name using PCI address */
> +	rte_cryptodev_create_unique_device_name(cryptodev_name,
> +			sizeof(cryptodev_name), pci_dev);
> +
> +	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());
> +	if (cryptodev == NULL)
> +		return -ENOMEM;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		cryptodev->data->dev_private =
> +				rte_zmalloc_socket("cryptodev private structure",
> +						cryptodrv->dev_private_size,
> +						RTE_CACHE_LINE_SIZE, rte_socket_id());
> +
> +		if (cryptodev->data->dev_private == NULL)
> +			rte_panic("Cannot allocate memzone for private device data");
> +	}
> +
> +	cryptodev->pci_dev = pci_dev;
> +	cryptodev->driver = cryptodrv;
> +
> +	/* init user callbacks */
> +	TAILQ_INIT(&(cryptodev->link_intr_cbs));
> +
> +	/* Invoke PMD device initialization function */
> +	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
> +	if (retval == 0)
> +		return 0;
> +
> +	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%u device_id=0x%x)"
> +			" failed", pci_drv->name,
> +			(unsigned) pci_dev->id.vendor_id,
> +			(unsigned) pci_dev->id.device_id);
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		rte_free(cryptodev->data->dev_private);
> +
> +	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
> +	cryptodev_globals.nb_devs--;
> +
> +	return -ENXIO;
> +}
> +
> +static int
> +rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
> +{
> +	const struct rte_cryptodev_driver *cryptodrv;
> +	struct rte_cryptodev *cryptodev;
> +	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +	int ret;
> +
> +	if (pci_dev == NULL)
> +		return -EINVAL;
> +
> +	/* Create unique device name using PCI address */
> +	rte_cryptodev_create_unique_device_name(cryptodev_name,
> +			sizeof(cryptodev_name), pci_dev);
> +
> +	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
> +	if (cryptodev == NULL)
> +		return -ENODEV;
> +
> +	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
> +	if (cryptodrv == NULL)
> +		return -ENODEV;
> +
> +	/* Invoke PMD device uninit function */
> +	if (*cryptodrv->cryptodev_uninit) {
> +		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	/* free crypto device */
> +	rte_cryptodev_pmd_release_device(cryptodev);
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +		rte_free(cryptodev->data->dev_private);
> +
> +	cryptodev->pci_dev = NULL;
> +	cryptodev->driver = NULL;
> +	cryptodev->data = NULL;
> +
> +	return 0;
> +}
> +
Shouldn't there be some interlock here if a device is being removed, to block on
closure of all the sessions that may be open against it, and serialization
against any list modifications for tracking of these devices?
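Something like the following is what I have in mind - purely illustrative,
none of these fields or locks exist in the patch:

	/* bump a per-device refcount on session create, drop it on free;
	 * removal then has something concrete to check */
	if (rte_atomic32_read(&cryptodev->session_refcnt) != 0)
		return -EBUSY;	/* or block until the sessions are torn down */

	rte_spinlock_lock(&cryptodev_list_lock);
	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
	cryptodev_globals.nb_devs--;
	rte_spinlock_unlock(&cryptodev_list_lock);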

> +int
> +rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
> +		enum pmd_type type)
> +{
> +	/* Call crypto device initialization directly if device is virtual */
> +	if (type == PMD_VDEV)
> +		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
> +				NULL);
> +
> +	/* Register PCI driver for physical device initialisation during
> +	 * PCI probing */
> +	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
> +	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
> +	rte_eal_pci_register(&cryptodrv->pci_drv);
> +	return 0;
> +}
> +
> +int
> +rte_cryptodev_pmd_attach(const char *devargs __rte_unused,
> +			uint8_t *dev_id __rte_unused)
> +{
> +	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled");
> +	return -1;
> +}
> +
> +/* detach the device, then store the name of the device */
> +int
> +rte_cryptodev_pmd_detach(uint8_t dev_id __rte_unused,
> +			char *name __rte_unused)
> +{
> +	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled");
> +	return -1;
> +}
> +
> +uint16_t
> +rte_cryptodev_queue_pair_count(uint8_t dev_id)
> +{
> +	struct rte_cryptodev *dev;
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	return dev->data->nb_queue_pairs;
> +}
> +
> +static int
> +rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)
> +{
> +	struct rte_cryptodev_info dev_info;
> +	uint16_t old_nb_queues = dev->data->nb_queue_pairs;
> +	void **qp;
> +	unsigned i;
> +
> +	if ((dev == NULL) || (nb_qpairs < 1)) {
> +		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
> +							dev, nb_qpairs);
> +		return -EINVAL;
> +	}
> +
> +	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
> +			nb_qpairs, dev->data->dev_id);
> +
> +
> +	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
> +
> +	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> +	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
> +
> +	if (nb_qpairs > (dev_info.max_queue_pairs)) {
> +		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
> +				nb_qpairs, dev->data->dev_id);
> +		return -EINVAL;
> +	}
> +
> +	if (dev->data->queue_pairs == NULL) { /* first time configuration */
> +		dev->data->queue_pairs = rte_zmalloc_socket(
> +				"cryptodev->queue_pairs",
> +				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
> +				RTE_CACHE_LINE_SIZE, socket_id);
> +
> +		if (dev->data->queue_pairs == NULL) {
> +			dev->data->nb_queue_pairs = 0;
> +			CDEV_LOG_ERR("failed to get memory for qp meta data, "
> +							"nb_queues %u", nb_qpairs);
> +			return -(ENOMEM);
> +		}
> +	} else { /* re-configure */
> +		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);
> +
> +		qp = dev->data->queue_pairs;
> +
> +		for (i = nb_qpairs; i < old_nb_queues; i++)
> +			(*dev->dev_ops->queue_pair_release)(dev, i);
> +		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
> +				RTE_CACHE_LINE_SIZE);
> +		if (qp == NULL) {
> +			CDEV_LOG_ERR("failed to realloc qp meta data,"
> +						" nb_queues %u", nb_qpairs);
> +			return -(ENOMEM);
> +		}
> +		if (nb_qpairs > old_nb_queues) {
> +			uint16_t new_qs = nb_qpairs - old_nb_queues;
> +
> +			memset(qp + old_nb_queues, 0,
> +				sizeof(qp[0]) * new_qs);
> +		}
> +
> +		dev->data->queue_pairs = qp;
> +
> +	}
> +	dev->data->nb_queue_pairs = nb_qpairs;
> +	return 0;
> +}
> +
> +int
> +rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
> +{
> +	struct rte_cryptodev *dev;
> +
> +	/* This function is only safe when called from the primary process
> +	 * in a multi-process setup*/
> +	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> +		return -EINVAL;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (queue_pair_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
> +		return -EINVAL;
> +	}
> +
> +	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
> +
> +	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
> +
> +}
> +
> +int
> +rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
> +{
> +	struct rte_cryptodev *dev;
> +
> +	/* This function is only safe when called from the primary process
> +	 * in a multi-process setup */
> +	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
> +
> +	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
> +		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
> +		return -EINVAL;
> +	}
> +
> +	dev = &rte_crypto_devices[dev_id];
> +	if (queue_pair_id >= dev->data->nb_queue_pairs) {
> +		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
> +		return -EINVAL;
> +	}
> +
> +	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
> +
> +	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
> +}
> +
So, I'm a bit confused here.  How do you communicate with a cryptodev?  I see
you're creating queue pairs here, which I think are intended for input/output,
but you also allow the creation of sessions.  The former seems to have no
linkage to the latter, so you have sessionless queue pairs and sessions without
a method to perform operations on?  I'm clearly missing something, but I can't see
the relationship.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-08-20 19:07   ` Neil Horman
@ 2015-08-21 14:02     ` Declan Doherty
  2015-09-15 16:36     ` [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes Declan Doherty
  1 sibling, 0 replies; 8+ messages in thread
From: Declan Doherty @ 2015-08-21 14:02 UTC (permalink / raw)
  To: Neil Horman; +Cc: dev

On 20/08/15 20:07, Neil Horman wrote:
> On Thu, Aug 20, 2015 at 03:07:20PM +0100, Declan Doherty wrote:
>> Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
>> Co-authored-by: John Griffin <john.griffin@intel.com>
>> Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>
>>
>> This patch contains the initial proposed APIs and device framework for
>> integrating crypto packet processing into DPDK.
>>
>> features include:
>>   - Crypto device configuration / management APIs
>>   - Definitions of supported cipher algorithms and operations.
>>   - Definitions of supported hash/authentication algorithms and
>>     operations.
>>   - Crypto session management APIs
>>   - Crypto operation data structures and APIs allocation of crypto
>>     operation structure used to specify the crypto operations to
>>     be performed  on a particular mbuf.
>>   - Extension of mbuf to contain crypto operation data pointer and
>>     extra flags.
>>   - Burst enqueue / dequeue APIs for processing of crypto operations.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>
> Hey, only had a quick read so some of this might be off base, but a few comments
> in line
>

Hey Neil, thanks for the feedback, I've replied inline below.


>> <snip>
>
> So, this seems a bit...not sure what the word is..specific perhaps?  For an API.
> That is to say, if an underlying device supports performing multiple operations
> in a single transaction, I'm not sure that should be exposed in this way.  As
> the number of devices and chain combinations grow, so too will this list, and if
> there are multiple similar (but distinct) chain operations, an application will
> have to know which chains are applicable to which devices which is sub-optimal
>
> Instead, perhaps it would be better to simply enumerate the list of crypto
> primitives that a device supports (HASH/CIPHER/etc), and allow the application
> to define the desired chain when creating a session.  That is to say, an
> application can create a session that requests a given chain of operations
> (using an array of the primitive enum perhaps).  The implementing PMD is then
> responsible for implementing that chain in hardware or software if need be.  If
> you need to report on the disposition of the implementation, you can do so via
> return code (i.e. SESSION_OK_HW_IMPL or SESSION_OK_SW_IMPL).
>
> Using this method, more complex operations can be implemented with chain macros
> (e.g. AEAD is just an array of primitives [CIPHER, HASH])
>

Ok, we may have let the hardware we have available to us bias the 
scope of the API a little. I guess something similar to the approach 
taken by the OCF in BSD may be appropriate: it has a more generic 
structure for specifying any type of crypto operation, with a next 
pointer to the next operation in the chain, which would make the 
provisioning of operations and their chaining much more generic. I'll 
put together a prototype and come back to the mailing list with it for 
further comment.
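
As a very rough illustration of the shape I have in mind (the names 
here are placeholders for now, not a final API):

	/* generic transform, chained via a next pointer */
	struct crypto_xform {
		struct crypto_xform *next;	/* next xform, NULL terminates chain */
		enum crypto_xform_type type;	/* selects the union member below */
		union {
			struct crypto_cipher_params cipher;
			struct crypto_auth_params auth;
		};
	};

so a cipher+hash chain is just two of these linked together, in 
whichever order the caller needs.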

>> <snip>
> These are all defined in rte_ethdev.  You should just move those to a public
> place rather then re-creating them
>

Will do.

>> <snip>
> Shouldn't there be some interlock here if a device is being removed to block on
> closure of all the sessions that may be open against it, and serialization
> against any list modifications for tracking of these devices?
>

Good point, I hadn't considered the release of session resources on
device closure. I'll also look further at the device management; at this 
point we just have the minimal infrastructure in place to allow
functional testing.

>> <snip>
> So, I'm a bit confused here.  How do you communicate with a cryptodev?  I see
> you're creating queue pairs here, which I think are intended for input/output,
> but you also allow the creation of sessions.  The former seems to have no
> linkage to the latter, so you have sessionless queue pairs and sessions without
> a method to perform operations on?  I'm clearly missing something, but I can't see
> the relationship.
>

All data path communication with the cryptodev is done via the burst 
enqueue and dequeue functions, see the rte_cryptodev.h .

So the session structure is a container for all the immutable data used 
to perform a crypto operation on a particular packet flow. We have a 
crypto operation data struct which contains the mutable data such as 
data offsets and lengths, initialization vectors, the location of the 
digest, additional data etc.

The crypto operation can be session based or session-less. If session 
based, it will have a pointer to a valid session for use with a specific 
cryptodev; otherwise all the immutable crypto operation parameters 
need to be set in the crypto operation data struct, which is attached to 
each mbuf that will be enqueued on the cryptodev for processing.

Once the crypto operation data struct is completed, it is attached
to the specific mbuf which contains the data to be operated on. The 
cryptodev can then handle a burst of mbufs enqueued on it for 
processing using the burst_enqueue function.

The data associations from the mbuf, crypto_op and crypto_session are 
connected as below:

mbuf
   |-> data_payload etc..
   --> crypto_op
          |-> crypto session
          |-> digest data ptr / length
          |-> iv data ptr / length
          --> data offsets

One of the main reasons for using sessions is that there are some
computationally costly precomputes required for the authentication 
algos and key expansions for the cipher algos. By using a crypto session 
you can do these calculations once, outside of the data path, and then 
the session can be reused for every packet in a flow which requires that 
particular crypto transformation.
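
As a minimal sketch of that flow, using the API names from this patch 
set (error handling omitted, illustrative only):

	/* control path: create the session once; the authentication
	 * precomputes and key expansions are performed here */
	struct rte_cryptodev_session *sess = rte_cryptodev_session_create(
			dev_id, &cipher_params, &hash_params,
			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);

	/* data path: per-mbuf crypto op referencing the cached session */
	struct rte_crypto_op_data *op = rte_crypto_op_alloc(op_pool);
	rte_crypto_op_attach_session(op, sess);
	/* set the mutable fields here: data offsets/lengths, iv, digest */
	rte_pktmbuf_attach_crypto_op(m, op);

	rte_cryptodev_enqueue_burst(dev_id, qp_id, &m, 1);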

The one drawback to this is that if you are dealing with a system that 
has tens of thousands or millions of flows, then you end up caching a 
huge amount of data in the crypto sessions which is more than likely 
also stored in the application layer above.

rte_cryptodev_enqueue_burst is analogous to rte_eth_tx_burst, but we 
use the enqueue/dequeue naming convention as tx/rx don't really make 
sense in the case of a crypto device, where the input and output queues 
are intrinsically linked.
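
i.e. something like the below (illustrative; assuming the dequeue 
counterpart is named rte_cryptodev_dequeue_burst):

	uint16_t nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id,
			mbufs, nb_mbufs);
	/* ... later, poll the same queue pair for completed ops ... */
	uint16_t nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id,
			mbufs, nb_enq);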


Cheers
Declan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes
  2015-08-20 19:07   ` Neil Horman
  2015-08-21 14:02     ` Declan Doherty
@ 2015-09-15 16:36     ` Declan Doherty
  1 sibling, 0 replies; 8+ messages in thread
From: Declan Doherty @ 2015-09-15 16:36 UTC (permalink / raw)
  To: dev

Proposed changes to the cryptodev API for comment, based on Neil's comments
on the initial RFC. I have included the updates to the cryptodev unit test
suite and the AESNI multi-buffer PMD for illustrative purposes. I will include
the changes for the QAT PMD in a V1 patchset if the proposed changes to the API
are acceptable.
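
For reference, a minimal sketch of how the reworked xform chaining is
intended to be used (per the changes below; illustrative only):

	/* auth xform terminates the chain */
	struct rte_crypto_xform auth_xform = {
		.type = RTE_CRYPTO_XFORM_AUTH, .next = NULL };
	/* cipher-then-hash: the cipher xform heads the chain */
	struct rte_crypto_xform cipher_xform = {
		.type = RTE_CRYPTO_XFORM_CIPHER, .next = &auth_xform };
	/* ... fill in cipher_xform.cipher and auth_xform.auth ... */

	struct rte_cryptodev_session *sess =
		rte_cryptodev_session_create(dev_id, &cipher_xform);

	/* session-less path: the op pool now reserves space for the chain */
	struct rte_mempool *pool = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
			nb_ops, cache_size, 2 /* max xforms per op */, socket_id);
	struct rte_crypto_op_data *op =
		rte_crypto_op_alloc_sessionless(pool, 2);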

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 app/test/test_cryptodev.c                          | 276 +++++++++++++++------
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |   2 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 188 +++++++++-----
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  10 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  28 ++-
 lib/librte_cryptodev/rte_crypto.h                  | 141 ++++++++---
 lib/librte_cryptodev/rte_cryptodev.c               |  54 ++--
 lib/librte_cryptodev/rte_cryptodev.h               |  26 +-
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  10 +-
 9 files changed, 506 insertions(+), 229 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 68cc0bf..93b7e0a 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -62,6 +62,8 @@
 #define DIGEST_BYTE_LENGTH_SHA1		(BYTE_LENGTH(160))
 #define DIGEST_BYTE_LENGTH_SHA256	(BYTE_LENGTH(256))
 #define DIGEST_BYTE_LENGTH_SHA512	(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC     (BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ             (16)
 
 #define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
 #define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
@@ -75,13 +77,13 @@ struct crypto_testsuite_params {
 	struct rte_cryptodev_config conf;
 	struct rte_cryptodev_qp_conf qp_conf;
 
-	uint8_t valid_devs[RTE_MAX_CRYPTODEVS];
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
 	uint8_t valid_dev_count;
 };
 
 struct crypto_unittest_params {
-	struct rte_crypto_cipher_params cipher_params;
-	struct rte_crypto_hash_params hash_params;
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
 
 	struct rte_cryptodev_session *sess;
 
@@ -92,6 +94,17 @@ struct crypto_unittest_params {
 	uint8_t *digest;
 };
 
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
 static struct rte_mbuf *
 setup_test_string(struct rte_mempool *mpool,
 		const char *string, size_t len, uint8_t blocksize)
@@ -184,7 +197,7 @@ testsuite_setup(void)
 	}
 
 	ts_params->crypto_op_pool = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
-			NUM_MBUFS, MBUF_CACHE_SIZE, rte_socket_id());
+			NUM_MBUFS, MBUF_CACHE_SIZE, 2, rte_socket_id());
 	if (ts_params->crypto_op_pool == NULL) {
 		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
 		return TEST_FAILED;
@@ -436,6 +449,11 @@ static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
 	0X95, 0XBB, 0X26, 0X74, 0X69, 0X12, 0X7F, 0XF1, 0XBB, 0XFF, 0XAE, 0XB5, 0X99, 0X6E, 0XCB, 0X0C
 };
 
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0X4f, 0X88, 0X1b, 0Xb6, 0X8f, 0Xd8, 0X60,
+	0X42, 0X1a, 0X7d, 0X3d, 0Xf5, 0X82, 0X80, 0Xf1,
+	0X18, 0X8c, 0X1d, 0X32 };
+
 
 static int
 test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
@@ -452,22 +470,28 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
 	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA1;
-	ut_params->hash_params.auth_key.data = hmac_sha1_key;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+			&ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate Crypto op data structure */
@@ -522,6 +546,88 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
 	return TEST_SUCCESS;
 }
 
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc_sessionless(ts_params->crypto_op_pool, 2);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
 static int
 test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
 {
@@ -542,22 +648,27 @@ test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
 			DIGEST_BYTE_LENGTH_SHA1);
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA1;
-	ut_params->hash_params.auth_key.data = hmac_sha1_key;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+			&ut_params->auth_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate Crypto op data structure */
@@ -641,22 +752,27 @@ test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
 	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
-	ut_params->hash_params.auth_key.data = hmac_sha256_key;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+			&ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate Crypto op data structure */
@@ -731,22 +847,27 @@ test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
 			DIGEST_BYTE_LENGTH_SHA256);
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
-	ut_params->hash_params.auth_key.data = hmac_sha256_key;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA256;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+			&ut_params->auth_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate Crypto op data structure */
@@ -837,22 +958,27 @@ test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
 	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA512;
-	ut_params->hash_params.auth_key.data = hmac_sha512_key;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
+			&ut_params->cipher_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -931,8 +1057,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
 
 	/* Create Crypto session*/
 	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
-			&ut_params->cipher_params, &ut_params->hash_params,
-			RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER);
+			&ut_params->auth_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
@@ -944,17 +1069,23 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(struct crypto_unittest_pa
 {
 
 	/* Setup Cipher Parameters */
-	ut_params->cipher_params.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
-	ut_params->cipher_params.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
-	ut_params->cipher_params.key.data = aes_cbc_key;
-	ut_params->cipher_params.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
 
 	/* Setup HMAC Parameters */
-	ut_params->hash_params.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
-	ut_params->hash_params.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
-	ut_params->hash_params.auth_key.data = hmac_sha512_key;
-	ut_params->hash_params.auth_key.length = HMAC_KEY_LENGTH_SHA512;
-	ut_params->hash_params.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
 	return TEST_SUCCESS;
 }
 
@@ -1047,6 +1178,11 @@ static struct unit_test_suite cryptodev_testsuite  = {
 		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
 		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
 
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
 		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
index ab96990..1188278 100644
--- a/drivers/crypto/aesni_mb/aesni_mb_ops.h
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -59,7 +59,7 @@ typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_ke
 typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
 typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
 
-typedef void (*aes_xcbc_expand_key_t)(void *key, void *k1_exp, void *k2, void *k3);
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *exp_k1, void *k2, void *k3);
 
 typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
 		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 65a3731..506754e 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -104,31 +104,101 @@ calculate_auth_precomputes(hash_one_block_t one_block_hash,
 	memset(opad_buf, 0, blocksize);
 }
 
-int
-aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/* multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
 		struct aesni_mb_session *sess,
-		struct rte_crypto_cipher_params *cipher_params,
-		struct rte_crypto_hash_params *auth_params,
-		enum rte_crypto_operation_chain op_chain)
+		const struct rte_crypto_xform *xform)
 {
-	aes_keyexp_t aes_keyexp_fn;
 	hash_one_block_t hash_oneblock_fn;
 
-	/* Select Crypto operation - hash then cipher / cipher then hash */
-	switch (op_chain) {
-	case RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER:
-		sess->chain_order = HASH_CIPHER;
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
 		break;
-	case RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH:
-		sess->chain_order = CIPHER_HASH;
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
 		break;
 	default:
-		printf("unsupported operation chain order parameter");
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
 		return -1;
 	}
 
 	/* Select cipher direction */
-	switch (cipher_params->op) {
+	switch (xform->cipher.op) {
 	case RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT:
 		sess->cipher.direction = ENCRYPT;
 		break;
@@ -136,22 +206,22 @@ aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
 		sess->cipher.direction = DECRYPT;
 		break;
 	default:
-		printf("unsupported cipher operation parameter");
+		MB_LOG_ERR("Unsupported cipher operation parameter");
 		return -1;
 	}
 
 	/* Select cipher mode */
-	switch (cipher_params->algo) {
+	switch (xform->cipher.algo) {
 	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
 		sess->cipher.mode = CBC;
 		break;
 	default:
-		printf("unsupported cipher mode parameter");
+		MB_LOG_ERR("Unsupported cipher mode parameter");
 		return -1;
 	}
 
 	/* Check key length and choose key expansion function */
-	switch (cipher_params->key.length) {
+	switch (xform->cipher.key.length) {
 	case AES_128_BYTES:
 		sess->cipher.key_length_in_bytes = AES_128_BYTES;
 		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
@@ -165,53 +235,53 @@ aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
 		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
 		break;
 	default:
-		printf("unsupported cipher key length");
+		MB_LOG_ERR("Unsupported cipher key length");
 		return -1;
 	}
 
 	/* Expanded cipher keys */
-	(*aes_keyexp_fn)(cipher_params->key.data,
+	(*aes_keyexp_fn)(xform->cipher.key.data,
 			sess->cipher.expanded_aes_keys.encode,
 			sess->cipher.expanded_aes_keys.decode);
 
-	/* Set Authentication Parameters */
-	switch (auth_params->algo) {
-	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
-		sess->auth.algo = MD5;
-		hash_oneblock_fn = mb_ops->aux.one_block.md5;
-		break;
-	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
-		sess->auth.algo = SHA1;
-		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
-		break;
-	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
-		sess->auth.algo = SHA_224;
-		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
-		break;
-	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
-		sess->auth.algo = SHA_256;
-		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
-		break;
-	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
-		sess->auth.algo = SHA_384;
-		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+	return 0;
+}
+
+
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
 		break;
-	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
-		sess->auth.algo = SHA_512;
-		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
 		break;
 	default:
-		printf("unsupported authentication algorithm selection");
+		MB_LOG_ERR("Unsupported operation chain order parameter");
 		return -1;
 	}
 
-	/* Calculate Authentication precomputes */
-	calculate_auth_precomputes(hash_oneblock_fn,
-			sess->auth.pads.inner, sess->auth.pads.outer,
-			auth_params->auth_key.data,
-			auth_params->auth_key.length,
-			get_auth_algo_blocksize(sess->auth.algo));
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
 
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
 	return 0;
 }
 
@@ -239,9 +309,7 @@ process_crypto_op(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, struct rte_mbuf *m)
 			return NULL;
 
 		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
-				sess, &c_op->op_params.cipher,
-				&c_op->op_params.hash,
-				c_op->op_params.opchain) != 0))
+				sess, c_op->xform) != 0))
 			return NULL;
 	} else {
 		sess = (struct aesni_mb_session *)c_op->session;
@@ -250,7 +318,6 @@ process_crypto_op(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, struct rte_mbuf *m)
 	/* Set crypto operation */
 	job->chain_order = sess->chain_order;
 
-
 	/* Set cipher parameters */
 	job->cipher_direction = sess->cipher.direction;
 	job->cipher_mode = sess->cipher.mode;
@@ -262,9 +329,14 @@ process_crypto_op(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, struct rte_mbuf *m)
 
 	/* Set authentication parameters */
 	job->hash_alg = sess->auth.algo;
-	job->hashed_auth_key_xor_ipad = sess->auth.pads.inner;
-	job->hashed_auth_key_xor_opad = sess->auth.pads.outer;
-
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = sess->auth.xcbc.k1_expanded;
+		job->_k2 = sess->auth.xcbc.k2;
+		job->_k3 = sess->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = sess->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = sess->auth.pads.outer;
+	}
 
 	/* Mutable crypto operation parameters */
 
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index fb57e7b..d9cdd5b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -286,9 +286,7 @@ aesni_mb_pmd_session_mempool_create(struct rte_cryptodev *dev,
 
 static struct rte_cryptodev_session *
 aesni_mb_pmd_create_session(struct rte_cryptodev *dev,
-		struct rte_crypto_cipher_params *cipher_setup_data,
-		struct rte_crypto_hash_params *hash_setup_data,
-		enum rte_crypto_operation_chain op_chain)
+		struct rte_crypto_xform *xform)
 {
 	struct aesni_mb_private *internals = dev->data->dev_private;
 	struct aesni_mb_session *sess  =
@@ -299,9 +297,9 @@ aesni_mb_pmd_create_session(struct rte_cryptodev *dev,
 		return NULL;
 	}
 
-	if (aesni_mb_set_session_parameters(
-			&job_ops[internals->vector_mode], sess,
-			cipher_setup_data, hash_setup_data, op_chain) != 0) {
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
 		aesni_mb_free_session(internals->sess_mp, sess);
 	}
 
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
index c5c4a86..abfec16 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -178,13 +178,21 @@ struct aesni_mb_session {
 
 	struct {
 		JOB_HASH_ALG algo;	/** Authentication Algorithm */
-
-		struct {
-			uint8_t inner[128] __rte_align(16);	/**< inner pad */
-			uint8_t outer[128] __rte_align(16);	/**< outer pad */
-		} pads;
-		/** HMAC Authentication pads - allocating space for the maximum
-		 * pad size supported which is 128 bytes for SHA512 */
+		union {
+			struct {
+				uint8_t inner[128] __rte_align(16);	/**< inner pad */
+				uint8_t outer[128] __rte_align(16);	/**< outer pad */
+			} pads;
+			/** HMAC Authentication pads - allocating space for the maximum
+			 * pad size supported which is 128 bytes for SHA512 */
+
+			struct {
+				uint32_t k1_expanded[44] __rte_align(16);	/* k1 (expanded key). */
+				uint8_t k2[16] __rte_align(16);		/* k2. */
+				uint8_t k3[16] __rte_align(16);		/* k3. */
+			} xcbc;
+			/** Expanded XCBC authentication keys */
+		};
 
 		uint8_t digest[64] __rte_align(16);
 	} auth;	/**< Authentication Parameters */
@@ -212,10 +220,10 @@ aesni_mb_free_session(struct rte_mempool *mempool,
 extern int
 aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
 		struct aesni_mb_session *sess,
-		struct rte_crypto_cipher_params *cparams,
-		struct rte_crypto_hash_params *aparams,
-		enum rte_crypto_operation_chain op_chain);
+		const struct rte_crypto_xform *xform);
+
 
+/** device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
 
 
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index b776609..2160003 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -121,8 +121,8 @@ enum rte_crypto_cipher_algorithm {
 	RTE_CRYPTO_SYM_CIPHER_AES_F8,
 	/**< AES algorithm in F8 mode */
 	RTE_CRYPTO_SYM_CIPHER_AES_GCM,
-	/**< AES algorithm in CGM mode. When this cipher algorithm is used the
-	 * *RTE_CRYPTO_SYM_CIPHER_AES_GCM* element of the
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_GCM* element of the
 	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
 	 * *rte_crypto_hash_setup_data* structure in the session context or in
 	 * the op_params of the crypto operation structure in the case of a
@@ -164,7 +164,7 @@ struct rte_crypto_key {
  * This structure contains data relating to Cipher (Encryption and Decryption)
  *  use to create a session.
  */
-struct rte_crypto_cipher_params {
+struct rte_crypto_cipher_xform {
 	enum rte_crypto_cipher_operation op;
 	/**< This parameter determines if the cipher operation is an encrypt or
 	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
@@ -203,8 +203,8 @@ struct rte_crypto_cipher_params {
 	 **/
 };
 
-/** Symmetric Hash / Authentication Algorithms */
-enum rte_crypto_hash_algorithm {
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
 	RTE_CRYPTO_SYM_HASH_NONE = 0,
 	/**< No hash algorithm. */
 
@@ -276,27 +276,24 @@ enum rte_crypto_hash_algorithm {
 	/**< ZUC algorithm in EIA3 mode */
 };
 
-/** Symmetric Hash Operations */
-enum rte_crypto_hash_operation {
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
 	RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY,	/**< Verify digest */
 	RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE	/**< Generate digest */
 };
 
 /**
- * Hash Setup Data.
+ * Authentication / Hash transform data.
  *
- * This structure contains data relating to a hash session. The fields hash_algorithm, hash_mode and digest_result_len are common to all
- *      three hash modes and MUST be set for each mode.
- *
- *****************************************************************************/
-struct rte_crypto_hash_params {
-	enum rte_crypto_hash_operation op;
-	/* hash operation type */
-	enum rte_crypto_hash_algorithm algo;
-	/* hashing algorithm selection */
-
-	struct rte_crypto_key auth_key;
-	/**< Authentication key data.
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
 	 * The authentication key length MUST be less than or equal to the
 * block size of the algorithm. It is the caller's responsibility to
 	 * ensure that the key length is compliant with the standard being used
@@ -346,9 +343,36 @@ struct rte_crypto_hash_params {
 };
 
 /**
+ * Defines the crypto transforms available
+ */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,
+	RTE_CRYPTO_XFORM_AUTH,
+	RTE_CRYPTO_XFORM_CIPHER
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;	/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;	/**< Cipher xform */
+	};
+};
+
+/**
  * Crypto operation session type. This is used to specify whether a crypto
  * operation has session structure attached for immutable parameters or if all
- * operation information is include in the operation data structure op_params.
+ * operation information is included in the operation data structure.
  */
 enum rte_crypto_op_sess_type {
 	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
@@ -370,11 +394,7 @@ struct rte_crypto_op_data {
 	union {
 		struct rte_cryptodev_session *session;
 		/**< Handle for the initialised session context */
-		struct {
-			struct rte_crypto_cipher_params cipher;
-			struct rte_crypto_hash_params hash;
-			enum rte_crypto_operation_chain opchain;
-		} op_params;
+		struct rte_crypto_xform *xform;
 		/**< Session-less API crypto operation parameters */
 	};
 
@@ -570,6 +590,20 @@ struct rte_crypto_op_data {
 	struct rte_mempool *pool;	/**< mempool used to allocate crypto op */
 };
 
+
+/**
+ * Crypto Operation Pool Private Data Structure
+ */
+struct crypto_op_pool_private {
+	unsigned max_nb_xforms;
+};
+
+
+extern struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned nb_ops,
+		unsigned cache_size, unsigned nb_xforms, int socket_id);
+
+
 /**
  * Reset the fields of a packet mbuf to their default values.
  *
@@ -579,9 +613,8 @@ struct rte_crypto_op_data {
  *   The packet mbuf to be resetted.
  */
 static inline void
-rte_crypto_op_reset(struct rte_crypto_op_data *op)
+__rte_crypto_op_reset(struct rte_crypto_op_data *op)
 {
-
 	op->type = RTE_CRYPTO_OP_SESSIONLESS;
 }
 
@@ -597,13 +630,10 @@ __rte_crypto_op_raw_alloc(struct rte_mempool *mp)
 }
 
 /**
- * Create an crypto operation structure which is used to define the crypto
- * operation processing which is to be done on a packet.
+ * Allocate a crypto operation structure from a crypto op pool; the structure
+ * is used to define the crypto operation processing to be done on a packet.
  *
- * @param	dev_id		Device identifier
- * @param	m_src		Source mbuf of data for processing.
- * @param	m_dst		Destination mbuf for processed data. Can be NULL
- *				if crypto operation is done in place.
+ * @param	mp		crypto operation pool
  */
 static inline struct rte_crypto_op_data *
 rte_crypto_op_alloc(struct rte_mempool *mp)
@@ -611,7 +641,41 @@ rte_crypto_op_alloc(struct rte_mempool *mp)
 	struct rte_crypto_op_data *op;
 
 	if ((op = __rte_crypto_op_raw_alloc(mp)) != NULL)
-		rte_crypto_op_reset(op);
+		__rte_crypto_op_reset(op);
+	return op;
+}
+
+/**
+ * Allocate a session-less crypto operation structure from a crypto op pool.
+ * The structure is used to define the crypto operation processing to be done
+ * on a packet. The mempool's private data is used to check whether enough
+ * space has been allocated after the operation structure for each xform
+ * requested.
+ *
+ * @param	mp			crypto operation pool
+ * @param	nb_xforms	number of crypto transforms to be used in operation
+ */
+static inline struct rte_crypto_op_data *
+rte_crypto_op_alloc_sessionless(struct rte_mempool *mp, unsigned nb_xforms)
+{
+	struct rte_crypto_op_data *op = NULL;
+	struct rte_crypto_xform *xform = NULL;
+	struct crypto_op_pool_private *priv_data =
+					(struct crypto_op_pool_private *)
+					rte_mempool_get_priv(mp);
+
+	if (nb_xforms == 0 || nb_xforms > priv_data->max_nb_xforms)
+		return op;
+
+	if ((op = __rte_crypto_op_raw_alloc(mp)) != NULL) {
+		__rte_crypto_op_reset(op);
+
+		xform = op->xform = (struct rte_crypto_xform *)(op + 1);
+
+		do {
+			xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+			xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+		} while (xform);
+	}
 	return op;
 }
 
@@ -629,10 +693,9 @@ rte_crypto_op_free(struct rte_crypto_op_data *op)
 	}
 }
 
-extern struct rte_mempool *
-rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
-		int socket_id);
-
+/**
+ * Attach a session to a crypto operation
+ */
 static inline void
 rte_crypto_op_attach_session(struct rte_crypto_op_data *op,
 		struct rte_cryptodev_session *sess)
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index a1797ce..7f2e5d1 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -101,13 +101,13 @@
 	} \
 } while (0)
 
-struct rte_cryptodev rte_crypto_devices[RTE_MAX_CRYPTODEVS];
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
 
 static struct rte_cryptodev_global cryptodev_globals = {
 		.devs			= &rte_crypto_devices[0],
 		.data			= NULL,
 		.nb_devs		= 0,
-		.max_devs		= RTE_MAX_CRYPTODEVS
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
 };
 
 struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
@@ -164,11 +164,11 @@ rte_cryptodev_find_free_device_index(void)
 {
 	uint8_t dev_id;
 
-	for (dev_id = 0; dev_id < RTE_MAX_CRYPTODEVS; dev_id++) {
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
 		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
 			return dev_id;
 	}
-	return RTE_MAX_CRYPTODEVS;
+	return RTE_CRYPTO_MAX_DEVS;
 }
 
 struct rte_cryptodev *
@@ -178,7 +178,7 @@ rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
 	struct rte_cryptodev *cryptodev;
 
 	dev_id = rte_cryptodev_find_free_device_index();
-	if (dev_id == RTE_MAX_CRYPTODEVS) {
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
 		CDEV_LOG_ERR("Reached maximum number of crypto devices");
 		return NULL;
 	}
@@ -868,9 +868,7 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 
 struct rte_cryptodev_session *
 rte_cryptodev_session_create(uint8_t dev_id,
-		struct rte_crypto_cipher_params *cipher_setup_data,
-		struct rte_crypto_hash_params *hash_setup_data,
-		enum rte_crypto_operation_chain op_chain)
+		struct rte_crypto_xform *xform)
 {
 	struct rte_cryptodev *dev;
 
@@ -879,10 +877,10 @@ rte_cryptodev_session_create(uint8_t dev_id,
 		return NULL;
 	}
 
-	 dev = &rte_crypto_devices[dev_id];
+	dev = &rte_crypto_devices[dev_id];
 
-	 return dev->dev_ops->session_create(dev, cipher_setup_data,
-			 hash_setup_data, op_chain);
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_create, NULL);
+	return dev->dev_ops->session_create(dev, xform);
 }
 
 void
@@ -896,9 +894,10 @@ rte_cryptodev_session_free(uint8_t dev_id,
 		return;
 	}
 
-	 dev = &rte_crypto_devices[dev_id];
+	dev = &rte_crypto_devices[dev_id];
 
-	 dev->dev_ops->session_destroy(dev, session);
+	FUNC_PTR_OR_RET(*dev->dev_ops->session_destroy);
+	dev->dev_ops->session_destroy(dev, session);
 }
 
 
@@ -922,15 +921,24 @@ rte_crypto_op_pool_init(__rte_unused struct rte_mempool *mp,
 }
 
 struct rte_mempool *
-rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
-		int socket_id)
+rte_crypto_op_pool_create(const char *name, unsigned size,
+		unsigned cache_size, unsigned nb_xforms, int socket_id)
 {
+	struct crypto_op_pool_private *priv_data = NULL;
+
+	unsigned elt_size = sizeof(struct rte_crypto_op_data) +
+			(sizeof(struct rte_crypto_xform) * nb_xforms);
+
 	/* lookup mempool in case already allocated */
 	struct rte_mempool *mp = rte_mempool_lookup(name);
 	if (mp != NULL) {
-		if (mp->elt_size != sizeof(struct rte_crypto_op_data) ||
+		priv_data = (struct crypto_op_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv_data->max_nb_xforms < nb_xforms ||
+				mp->elt_size != elt_size ||
 				mp->cache_size < cache_size ||
-				mp->size < n) {
+				mp->size < size) {
 			mp = NULL;
 			CDEV_LOG_ERR("%s mempool already exists with "
 					"incompatible initialisation parameters",
@@ -941,11 +949,12 @@ rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
 		return mp;
 	}
 
-	mp = rte_mempool_create(name,	/* mempool name */
-			n,			/* number of elements*/
-			sizeof(struct rte_crypto_op_data),/* element size*/
+	mp = rte_mempool_create(
+			name,				/* mempool name */
+			size,				/* number of elements*/
+			elt_size,			/* element size*/
 			cache_size,			/* Cache size*/
-			0,				/* private data size */
+			sizeof(struct crypto_op_pool_private),	/* private data size */
 			rte_crypto_op_pool_init,	/* pool initialisation constructor */
 			NULL,				/* pool initialisation constructor argument */
 			rte_crypto_op_init,		/* obj constructor */
@@ -958,6 +967,9 @@ rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
 		return NULL;
 	}
 
+	priv_data = (struct crypto_op_pool_private *)rte_mempool_get_priv(mp);
+
+	priv_data->max_nb_xforms = nb_xforms;
 
 	CDEV_LOG_DEBUG("%s mempool created!", name);
 	return mp;
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d7694ad..d2d34f2 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -501,31 +501,23 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * parameters of symmetric cryptographic operation.
  * To perform the operation the rte_cryptodev_enqueue_burst function is
  * used.  Each mbuf should contain a reference to the session
- * pointer returned from this function.
- * Memory to contain the session information is allocated by the
- * implementation.
- * An upper limit on the number of session that many be created is
- * defined by a build configuration constant.
+ * pointer returned from this function contained within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the session
+ * information is allocated from within a mempool managed by the cryptodev.
+ *
  * The rte_cryptodev_session_free must be called to free allocated
- * memory when the session information is no longer needed.
+ * memory when the session is no longer required.
  *
- * @param	dev_id			The device identifier.
- * @param	cipher_setup_data	The parameters associated with the
- *					cipher operation. This may be NULL.
- * @param	hash_setup_data		The parameters associated with the hash
- *					operation. This may be NULL.
- * @param	op_chain		Specifies the crypto operation chaining,
- *					cipher and/or hash and the order in
- *					which they are performed.
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+
  *
  * @return
  *  Pointer to the created session or NULL
  */
 extern struct rte_cryptodev_session *
 rte_cryptodev_session_create(uint8_t dev_id,
-		struct rte_crypto_cipher_params *cipher_setup_data,
-		struct rte_crypto_hash_params *hash_setup_data,
-		enum rte_crypto_operation_chain op_chain);
+		struct rte_crypto_xform *xform);
 
 
 /**
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index e6fdd1c..aa2f6c4 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -453,10 +453,8 @@ typedef int (*cryptodev_create_session_pool_t)(
 /**
  * Create a Crypto session on a device.
  *
- * @param	dev			Crypto device pointer
- * @param	cipher_setup_data	Cipher operation parameters
- * @param	hash_setup_data		Hash operation parameters
- * @param	op_chain		Operation chaining
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
  *
  * @return
  *  - Returns cryptodev session structure on success.
@@ -464,9 +462,7 @@ typedef int (*cryptodev_create_session_pool_t)(
  * */
 typedef struct rte_cryptodev_session * (*cryptodev_create_session_t)(
 		struct rte_cryptodev *dev,
-		struct rte_crypto_cipher_params *cipher_setup_data,
-		struct rte_crypto_hash_params *hash_setup_data,
-		enum rte_crypto_operation_chain op_chain);
+		struct rte_crypto_xform *xform);
 
 /**
  * Free Crypto session.
-- 
2.4.3

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2015-09-15 16:27 UTC | newest]

Thread overview: 8+ messages
2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-08-20 19:07   ` Neil Horman
2015-08-21 14:02     ` Declan Doherty
2015-09-15 16:36     ` [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests Declan Doherty
